Updates from: 02/28/2023 02:17:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md
Content-type: application/json

"displayName": "John Smith",
"objectId": "11111111-0000-0000-0000-000000000000",
"givenName": "John",
- "lastName": "Smith",
+ "surname": "Smith",
"step": "PostFederationSignup",
"client_id": "<guid>",
"ui_locales": "en-US"
active-directory-b2c Custom Policies Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-overview.md
In Azure Active Directory B2C (Azure AD B2C), you can create user experiences by
User flows are already customizable, with options such as [changing the UI](customize-ui.md), [customizing language](language-customization.md), and using [custom attributes](user-flow-custom-attributes.md). However, these customizations might not cover all your business-specific needs, which is why you need custom policies.
-While you can use pre-made [custom policy starter pack](/tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack), it's important for you understand how custom policy is built from scratch. In this how-to guide series, you'll learn what you need to understand for you to customize the behavior of your user experience by using custom policies. At the end of this how-to guide series, you should be able to read and understand existing custom policies or write your own from scratch.
+While you can use a pre-made [custom policy starter pack](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy#custom-policy-starter-pack), it's important for you to understand how custom policies are built from scratch. In this how-to guide series, you'll learn what you need to know to customize the behavior of your user experience by using custom policies. At the end of this series, you should be able to read and understand existing custom policies or write your own from scratch.
## Prerequisites
This how-to guide series consists of multiple articles. We recommend that you st
- Learn about [Azure AD B2C TrustFrameworkPolicy BuildingBlocks](buildingblocks.md) -- [Write your first Azure Active Directory B2C custom policy - Hello World!](custom-policies-series-hello-world.md)
+- [Write your first Azure Active Directory B2C custom policy - Hello World!](custom-policies-series-hello-world.md)
active-directory-b2c Enable Authentication React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app.md
The sample code is made up of the following components. Add these components fro
> [!IMPORTANT]
> If the App component file name is `App.js`, change it to `App.jsx`.
-- [src/pages/Hello.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/6-AdvancedScenarios/1-call-api-obo/SPA/src/pages/Hello.jsx) - Demonstrate how to call a protected resource with OAuth2 bearer token.
+- [src/pages/Hello.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/6-AdvancedScenarios/1-call-api-obo/SPA/src/pages/Home.jsx) - Demonstrates how to call a protected resource with an OAuth2 bearer token.
- It uses the [useMsal](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/hooks.md) hook that returns the PublicClientApplication instance.
- With the PublicClientApplication instance, it acquires an access token to call the REST API.
- Invokes the [callApiWithToken](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/4-Deployment/2-deploy-static/App/src/fetch.js) function to fetch the data from the REST API and renders the result using the **DataDisplay** component.
active-directory-b2c Manage Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-users-portal.md
Title: Create & delete Azure AD B2C consumer user accounts in the Azure portal
description: Learn how to use the Azure portal to create and delete consumer users in your Azure AD B2C directory.
Previously updated : 09/20/2021 Last updated : 02/24/2023
To reset a user's password:
1. In your Azure AD B2C directory, select **Users**, and then select the user whose password you want to reset.
1. Search for and select the user that needs the reset, and then select **Reset Password**.
- The **Alain Charon - Profile** page appears with the **Reset password** option.
-
- ![User's profile page, with Reset password option highlighted](media/manage-users-portal/user-profile-reset-password-link.png)
1. In the **Reset password** page, select **Reset password**.
1. Copy the password and give it to the user. The user will be required to change the password during the next sign-in process.
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
Previously updated : 11/29/2022 Last updated : 02/27/2023
In the provided [custom policies](https://github.com/azure-ad-b2c/partner-integr
|{Settings:DfpTenantId}|The ID of the Azure AD tenant (not B2C) where DFP is licensed and installed|`01234567-89ab-cdef-0123-456789abcdef` or `contoso.onmicrosoft.com`|
|{Settings:DfpAppClientIdKeyContainer}|Name of the policy key in which you save the DFP client ID|`B2C_1A_DFPClientId`|
|{Settings:DfpAppClientSecretKeyContainer}|Name of the policy key in which you save the DFP client secret|`B2C_1A_DFPClientSecret`|
+|{Settings:DfpEnvironment}| The ID of the DFP environment.|The environment ID is a globally unique identifier of the DFP environment that you send the data to. Your custom policy should invoke the API endpoint with `x-ms-dfpenvid=<your-env-id>` in the query string.|
*You can set up Application Insights in an Azure AD tenant or subscription. This value is optional, but [recommended to assist with debugging](./troubleshoot-with-application-insights.md).
active-directory-b2c Phone Based Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-based-mfa.md
With Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA), users can choose to receive an automated voice call at a phone number they register for verification. Malicious users could take advantage of this method by creating multiple accounts and placing phone calls without completing the MFA registration process. These numerous failed sign-ups could exhaust the allowed sign-up attempts, preventing other users from signing up for new accounts in your Azure AD B2C tenant. To help protect against these attacks, you can use Azure Monitor to monitor phone authentication failures and mitigate fraudulent sign-ups.
+> [!IMPORTANT]
+> The Authenticator app (TOTP) provides stronger security than SMS/phone multi-factor authentication. To set it up, read our instructions for [enabling multi-factor authentication in Azure Active Directory B2C](multi-factor-authentication.md).
+
## Prerequisites
Before you begin, create a [Log Analytics workspace](azure-monitor.md).
active-directory-b2c Roles Resource Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/roles-resource-access-control.md
Title: Roles and resource access control
description: Learn how to use roles to control resource access.
Previously updated : 11/25/2021 Last updated : 02/24/2023
# Roles and resource access control
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article
### Updated articles

-- [Manage your Azure Active Directory B2C tenant](tenant-management.md)
+- [Manage your Azure Active Directory B2C tenant](tenant-management-manage-administrator.md)
- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)
- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
- [Roles and resource access control](roles-resource-access-control.md)
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Configure xID with Azure Active Directory B2C for passwordless authentication](partner-xid.md)
- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
- [Localization string IDs](localization-string-ids.md)
-- [Manage your Azure Active Directory B2C tenant](tenant-management.md)
+- [Manage your Azure Active Directory B2C tenant](tenant-management-manage-administrator.md)
- [Page layout versions](page-layout.md)
- [Secure your API used an API connector in Azure AD B2C](secure-rest-api.md)
- [Azure Active Directory B2C: What's new](whats-new-docs.md)
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
Previously updated : 01/23/2023 Last updated : 02/27/2023 zone_pivot_groups: app-provisioning-cross-tenant-synchronization
For more information, see [About the Exchange Online PowerShell module](/powersh
Configuring synchronization from the target tenant isn't supported. All configurations must be done in the source tenant. Note that the target administrator is able to turn off cross-tenant synchronization at any time.
+### Two users in the source tenant matched with the same user in the target tenant
+
+When two users in the source tenant have the same mail address and both need to be created in the target tenant, only one user is created in the target and linked to both users in the source. Ensure that the mail attribute isn't shared among users in the source tenant, and that each user's mail is from a verified domain; the external user won't be created successfully if the mail is from an unverified domain.
+
### Usage of Azure AD B2B collaboration for cross-tenant access
- B2B users are unable to manage certain Microsoft 365 services in remote tenants (such as Exchange Online), as there's no directory picker.
active-directory Partner Driven Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/partner-driven-integrations.md
If you have built a SCIM Gateway and would like to add it to this list, follow t
* To avoid duplication, only include applications that don't already have out-of-the-box provisioning connectors in the [Azure AD application gallery](../saas-apps/tutorial-list.md).

## Disclaimer
-For independent software vendors: The Microsoft Azure Active Directory Application Gallery Terms & Conditions, excluding Sections 2–4, apply to this Partner-Driven Integrations Catalog (https://aka.ms/PartnerDrivenProvisioning, the "Integrations Catalog"). References to the "Gallery" shall be read as the "Integrations Catalog" and references to an "App" shall be read as "Integration".
+For independent software vendors: The Microsoft Azure Active Directory Application Gallery Terms & Conditions, excluding Sections 2–4, apply to this Partner-Driven Integrations Catalog (the "Integrations Catalog"). References to the "Gallery" shall be read as the "Integrations Catalog" and references to an "App" shall be read as "Integration".
If you don't agree with these terms, you shouldn't submit your Integration for listing in the Integrations Catalog. If you submit an Integration to the Integrations Catalog, you agree that you or the entity you represent ("YOU" or "YOUR") is bound by these terms.
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 02/23/2023 Last updated : 02/27/2023
To automate provisioning to an application, it requires building and integrating
1. [Build a SCIM endpoint](#build-a-scim-endpoint) - An endpoint must be SCIM 2.0-compatible to integrate with the Azure AD provisioning service. As an option, use Microsoft Common Language Infrastructure (CLI) libraries and code samples to build your endpoint. These samples are for reference and testing only; we recommend against using them as dependencies in your production app.
-1. [Integrate your SCIM endpoint](#integrate-your-scim-endpoint-with-the-azure-ad-provisioning-service) with the Azure AD Provisioning Service. If your organization uses a third-party application to implement a profile of SCIM 2.0 that Azure AD supports, you can quickly automate both provisioning and deprovisioning of users and groups.
+1. [Integrate your SCIM endpoint](#integrate-your-scim-endpoint-with-the-azure-ad-provisioning-service) with the Azure AD Provisioning Service. Azure AD supports several third-party applications that implement SCIM 2.0. If you use one of these apps, you can quickly automate both provisioning and deprovisioning of users and groups.
1. [Optional] [Publish your application to the Azure AD application gallery](#publish-your-application-to-the-azure-ad-application-gallery) - Make it easy for customers to discover your application and easily configure provisioning.
To design your schema, follow these steps:
1. List the attributes your application requires, then categorize them as attributes needed for authentication (for example, loginName and email), attributes needed to manage the user lifecycle (for example, status / active), and all other attributes needed for the application to work (for example, manager, tag).
-1. Check if the attributes are already defined in the **core** user schema or **enterprise** user schema. If not, you must define an extension to the user schema that covers the missing attributes. See example below for an extension to the user to allow provisioning a user `tag`.
+1. Check if the attributes are already defined in the **core** user schema or **enterprise** user schema. If not, you must define an extension to the user schema that covers the missing attributes. See the example of an extension to the user schema that allows provisioning a user `tag`, shown after this list.
-1. Map SCIM attributes to the user attributes in Azure AD. If one of the attributes you've defined in your SCIM endpoint doesn't have a clear counterpart on the Azure AD user schema, guide the tenant administrator to extend their schema, or use an extension attribute as shown below for the `tags` property.
+1. Map SCIM attributes to the user attributes in Azure AD. If one of the attributes you've defined in your SCIM endpoint doesn't have a clear counterpart on the Azure AD user schema, guide the tenant administrator to extend their schema, or use an extension attribute as shown in the example for the `tags` property.
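As a concrete illustration of the last two steps, here's a hedged sketch of a SCIM user payload that carries a custom `tags` attribute through a schema extension. The extension URN is illustrative, not the one defined in the full article; the payload is built with System.Text.Json:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Build a SCIM user whose custom attributes live under an extension schema URN.
var scimUser = new Dictionary<string, object>
{
    ["schemas"] = new[]
    {
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:contoso:1.0:User" // illustrative URN
    },
    ["userName"] = "alice@contoso.com",
    ["active"] = true,
    // Custom attributes are nested under their extension's URN key:
    ["urn:ietf:params:scim:schemas:extension:contoso:1.0:User"] = new Dictionary<string, object>
    {
        ["tags"] = new[] { "timesheet-approver" }
    }
};

Console.WriteLine(JsonSerializer.Serialize(scimUser, new JsonSerializerOptions { WriteIndented = true }));
```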
The following table lists an example of required attributes:
Within the [SCIM 2.0 protocol specification](http://www.simplecloud.info/#Specif
|Create users, and optionally also groups|[Section 3.3](https://tools.ietf.org/html/rfc7644#section-3.3)|
|Modify users or groups with PATCH requests|[Section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). Supporting PATCH ensures that groups and users are provisioned in a performant manner.|
|Retrieve a known resource for a user or group created earlier|[Section 3.4.1](https://tools.ietf.org/html/rfc7644#section-3.4.1)|
-|Query users or groups|[Section 3.4.2](https://tools.ietf.org/html/rfc7644#section-3.4.2). By default, users are retrieved by their `id` and queried by their `username` and `externalId`, and groups are queried by `displayName`.|
+|Query users or groups|[Section 3.4.2](https://tools.ietf.org/html/rfc7644#section-3.4.2). By default, users are retrieved with their `id` and queried with their `username` and `externalId`, and groups are queried with `displayName`.|
|The filter [excludedAttributes=members](#get-group) when querying the group resource|Section [3.4.2.2](https://www.rfc-editor.org/rfc/rfc7644#section-3.4.2.2)|
|Support listing users and paginating|[Section 3.4.2.4](https://datatracker.ietf.org/doc/html/rfc7644#section-3.4.2.4).|
|Soft-deleting a user `active=false` and restoring the user `active=true`|The user object should be returned in a request whether or not the user is active. The only time the user shouldn't be returned is when it's hard deleted from the application.|
-|Support the /Schemas endpoint|[Section 7](https://tools.ietf.org/html/rfc7643#page-30) The schema discovery endpoint will be used to discover more attributes.|
+|Support the /Schemas endpoint|[Section 7](https://tools.ietf.org/html/rfc7643#page-30) The schema discovery endpoint is used to discover more attributes.|
|Accept a single bearer token for authentication and authorization of Azure AD to your application.||

Use the general guidelines when implementing a SCIM endpoint to ensure compatibility with Azure AD:
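As one illustration of the query requirement in the table above, here's a minimal sketch of the user query Azure AD sends (filtering by `userName`), assuming ASP.NET Core minimal APIs with the Web SDK's implicit usings; the in-memory store and matching logic are illustrative, not prescribed by the SCIM spec:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

var app = WebApplication.CreateBuilder(args).Build();

var users = new Dictionary<string, object>(); // illustrative store, keyed by userName

// Azure AD queries users with: GET /scim/v2/Users?filter=userName eq "alice@contoso.com"
app.MapGet("/scim/v2/Users", (string? filter) =>
{
    var match = Regex.Match(filter ?? "", "userName eq \"([^\"]*)\"");
    var found = match.Success && users.TryGetValue(match.Groups[1].Value, out var user)
        ? new[] { user }
        : Array.Empty<object>();

    // SCIM list responses use the ListResponse envelope (RFC 7644, section 3.4.2).
    return Results.Json(new
    {
        schemas = new[] { "urn:ietf:params:scim:api:messages:2.0:ListResponse" },
        totalResults = found.Length,
        Resources = found
    });
});

app.Run();
```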
The following diagram shows the group deprovisioning sequence:
This article provides example SCIM requests emitted by the Azure Active Directory (Azure AD) Provisioning Service and example expected responses. For best results, you should code your app to handle these requests in this format and emit the expected responses.

> [!IMPORTANT]
-> To understand how and when the Azure AD user provisioning service emits the operations described below, see the section [Provisioning cycles: Initial and incremental](how-provisioning-works.md#provisioning-cycles-initial-and-incremental) in [How provisioning works](how-provisioning-works.md).
+> To understand how and when the Azure AD user provisioning service emits the operations described in the example, see the section [Provisioning cycles: Initial and incremental](how-provisioning-works.md#provisioning-cycles-initial-and-incremental) in [How provisioning works](how-provisioning-works.md).
[User Operations](#user-operations)
All services must use X.509 certificates generated using cryptographic keys of s
**Cipher Suites**
-All services must be configured to use the following cipher suites, in the exact order specified below. If you only have an RSA certificate, installed the ECDSA cipher suites don't have any effect. </br>
+All services must be configured to use the following cipher suites, in the exact order specified in the example. If you only have an RSA certificate installed, the ECDSA cipher suites don't have any effect.
TLS 1.2 Cipher Suites minimum bar:
Use the checklist to onboard your application quickly and customers have a smoot
> * Support at least 25 requests per second per tenant to ensure that users and groups are provisioned and deprovisioned without delay (Required)
> * Establish engineering and support contacts to guide customers post gallery onboarding (Required)
> * 3 Non-expiring test credentials for your application (Required)
-> * Support the OAuth authorization code grant or a long lived token as described below (Required)
+> * Support the OAuth authorization code grant or a long lived token as described in the example (Required)
> * Establish an engineering and support point of contact to support customers post gallery onboarding (Required)
> * [Support schema discovery (required)](https://tools.ietf.org/html/rfc7643#section-6)
> * Support updating multiple group memberships with a single PATCH
active-directory Active Directory Authentication Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-authentication-libraries.md
The Azure Active Directory Authentication Library (ADAL) v1.0 enables applicatio
| Platform | Library | Download | Source Code | Sample | Reference |
| --- | --- | --- | --- | --- | --- |
-| .NET Client, Windows Store, UWP, Xamarin iOS and Android |ADAL .NET v3 |[NuGet](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet) | [Desktop app](../develop/quickstart-v2-windows-desktop.md) |[Reference](/dotnet/api/microsoft.identitymodel.clients.activedirectory) |
+| .NET Client, Windows Store, UWP, Xamarin iOS and Android |ADAL .NET v3 |[NuGet](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet) | [Desktop app](../develop/quickstart-v2-windows-desktop.md) | |
| JavaScript |ADAL.js |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-js) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-js) |[Single-page app](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-dotnet-webapi) | |
| iOS, macOS |ADAL |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-objc/releases) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-objc) |[iOS app](../develop/quickstart-v2-ios.md) | [Reference](http://cocoadocs.org/docsets/ADAL/2.5.1/)|
| Android |ADAL |[Maven](https://search.maven.org/search?q=g:com.microsoft.aad+AND+a:adal&core=gav) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-android) |[Android app](../develop/quickstart-v2-android.md) | [JavaDocs](https://javadoc.io/doc/com.microsoft.aad/adal/)|
active-directory Active Directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-certificate-credentials.md
Previously updated : 02/09/2022 Last updated : 02/27/2023

# Microsoft identity platform application authentication certificate credentials
If you're interested in using a JWT issued by another identity provider as a cre
## Assertion format
-To compute the assertion, you can use one of the many JWT libraries in the language of your choice - [MSAL supports this using `.WithCertificate()`](msal-net-client-assertions.md). The information is carried by the token in its Header, Claims, and Signature.
+To compute the assertion, you can use one of the many JWT libraries in the language of your choice - [MSAL supports this using `.WithCertificate()`](msal-net-client-assertions.md). The information is carried by the token in its **Header**, **Claims**, and **Signature**.
### Header
To compute the assertion, you can use one of the many JWT libraries in the langu
Claim type | Value | Description
- | - | -
`aud` | `https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token` | The "aud" (audience) claim identifies the recipients that the JWT is intended for (here Azure AD). See [RFC 7519, Section 4.1.3](https://tools.ietf.org/html/rfc7519#section-4.1.3). In this case, that recipient is the login server (login.microsoftonline.com).
-`exp` | 1601519414 | The "exp" (expiration time) claim identifies the expiration time on or after which the JWT MUST NOT be accepted for processing. See [RFC 7519, Section 4.1.4](https://tools.ietf.org/html/rfc7519#section-4.1.4). This allows the assertion to be used until then, so keep it short - 5-10 minutes after `nbf` at most. Azure AD does not place restrictions on the `exp` time currently.
-`iss` | {ClientID} | The "iss" (issuer) claim identifies the principal that issued the JWT, in this case your client application. Use the GUID application ID.
-`jti` | (a Guid) | The "jti" (JWT ID) claim provides a unique identifier for the JWT. The identifier value MUST be assigned in a manner that ensures that there is a negligible probability that the same value will be accidentally assigned to a different data object; if the application uses multiple issuers, collisions MUST be prevented among values produced by different issuers as well. The "jti" value is a case-sensitive string. [RFC 7519, Section 4.1.7](https://tools.ietf.org/html/rfc7519#section-4.1.7)
-`nbf` | 1601519114 | The "nbf" (not before) claim identifies the time before which the JWT MUST NOT be accepted for processing. [RFC 7519, Section 4.1.5](https://tools.ietf.org/html/rfc7519#section-4.1.5). Using the current time is appropriate.
+`exp` | 1601519414 | The "exp" (expiration time) claim identifies the expiration time on or after which the JWT **must not** be accepted for processing. See [RFC 7519, Section 4.1.4](https://tools.ietf.org/html/rfc7519#section-4.1.4). This allows the assertion to be used until then, so keep it short - 5-10 minutes after `nbf` at most. Azure AD does not place restrictions on the `exp` time currently.
+`iss` | {ClientID} | The "iss" (issuer) claim identifies the principal that issued the JWT, in this case your client application. Use the GUID application ID.
+`jti` | (a Guid) | The "jti" (JWT ID) claim provides a unique identifier for the JWT. The identifier value **must** be assigned in a manner that ensures that there is a negligible probability that the same value will be accidentally assigned to a different data object; if the application uses multiple issuers, collisions **must** be prevented among values produced by different issuers as well. The "jti" value is a case-sensitive string. [RFC 7519, Section 4.1.7](https://tools.ietf.org/html/rfc7519#section-4.1.7)
+`nbf` | 1601519114 | The "nbf" (not before) claim identifies the time before which the JWT **must not** be accepted for processing. [RFC 7519, Section 4.1.5](https://tools.ietf.org/html/rfc7519#section-4.1.5). Using the current time is appropriate.
`sub` | {ClientID} | The "sub" (subject) claim identifies the subject of the JWT, in this case also your application. Use the same value as `iss`.
`iat` | 1601519114 | The "iat" (issued at) claim identifies the time at which the JWT was issued. This claim can be used to determine the age of the JWT. [RFC 7519, Section 4.1.6](https://tools.ietf.org/html/rfc7519#section-4.1.6).
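A minimal sketch of producing an assertion with these claims by hand, assuming the System.IdentityModel.Tokens.Jwt and Microsoft.IdentityModel.Tokens NuGet packages (the file path, password, and IDs are placeholders); in practice, MSAL's `.WithCertificate()` handles this for you:

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Security.Cryptography.X509Certificates;
using Microsoft.IdentityModel.Tokens;

string clientId = "<application-id>";  // placeholder
string tenantId = "<tenant-id>";       // placeholder
var certificate = new X509Certificate2("client-cert.pfx", "<pfx-password>"); // placeholder

var now = DateTime.UtcNow;
var handler = new JwtSecurityTokenHandler();
var token = handler.CreateJwtSecurityToken(
    issuer: clientId,                                                            // iss
    audience: $"https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token", // aud
    subject: new ClaimsIdentity(new[]
    {
        new Claim("sub", clientId),                 // sub: same value as iss
        new Claim("jti", Guid.NewGuid().ToString()) // jti: unique per assertion
    }),
    notBefore: now,                // nbf: the current time
    expires: now.AddMinutes(10),   // exp: keep it short (5-10 minutes after nbf)
    issuedAt: now,                 // iat
    // X509SigningCredentials signs with RSA-SHA256 and sets the x5t header.
    signingCredentials: new X509SigningCredentials(certificate));

string clientAssertion = handler.WriteToken(token);
```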
You can associate the certificate credential with the client application in the
### Uploading the certificate file
-In the Azure app registration for the client application:
+In the **App registrations** tab for the client application:
1. Select **Certificates & secrets** > **Certificates**.
2. Click on **Upload certificate** and select the certificate file to upload.
3. Click **Add**.
In the Azure app registration for the client application:
## Using a client assertion
-Client assertions can be used anywhere a client secret would be used. So for example, in the [authorization code flow](v2-oauth2-auth-code-flow.md), you can pass in a `client_secret` to prove that the request is coming from your app. You can replace this with `client_assertion` and `client_assertion_type` parameters.
+Client assertions can be used anywhere a client secret would be used. For example, in the [authorization code flow](v2-oauth2-auth-code-flow.md), you can pass in a `client_secret` to prove that the request is coming from your app. You can replace this with `client_assertion` and `client_assertion_type` parameters.
| Parameter | Value | Description|
|--|--|--|
|`client_assertion_type`|`urn:ietf:params:oauth:client-assertion-type:jwt-bearer`| This is a fixed value, indicating that you are using a certificate credential. |
-|`client_assertion`| JWT |This is the JWT created above. |
+|`client_assertion`| `JWT` |This is the JWT created above. |
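Continuing the sketch above, redeeming the assertion in a client credentials request might look like this (the scope and IDs are placeholders; the parameter names follow the table):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;

using var http = new HttpClient();
var response = await http.PostAsync(
    $"https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token",
    new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["client_id"] = clientId,
        ["scope"] = "https://graph.microsoft.com/.default",
        ["grant_type"] = "client_credentials",
        ["client_assertion_type"] = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        ["client_assertion"] = clientAssertion, // the JWT created above
    }));
Console.WriteLine(await response.Content.ReadAsStringAsync());
```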
## Next steps
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
description: A guide to OAuth 2.0 and OpenID Connect protocols as supported by t
- Previously updated : 03/31/2022 Last updated : 02/27/2023 -+ # OAuth 2.0 and OpenID Connect (OIDC) in the Microsoft identity platform
-You don't need to learn OAuth or OpenID Connect (OIDC) at the protocol level to use the Microsoft identity platform. You will, however, encounter these and other protocol terms and concepts as you use the identity platform to add auth functionality to your apps.
-
-As you work with the Azure portal, our documentation, and our authentication libraries, knowing a few basics like these can make your integration and debugging tasks easier.
+Knowing about OAuth or OpenID Connect (OIDC) at the protocol level is not required to use the Microsoft identity platform. However, you will encounter protocol terms and concepts as you use the identity platform to add authentication to your apps. As you work with the Azure portal, our documentation, and authentication libraries, knowing some fundamentals can assist your integration and overall experience.
## Roles in OAuth 2.0
-Four parties are typically involved in an OAuth 2.0 and OpenID Connect authentication and authorization exchange. Such exchanges are often called *authentication flows* or *auth flows*.
+Four parties are usually involved in an OAuth 2.0 and OpenID Connect authentication and authorization exchange. These exchanges are often called *authentication flows* or *auth flows*.
![Diagram showing the OAuth 2.0 roles](./media/active-directory-v2-flows/protocols-roles.svg)
-* **Authorization server** - The Microsoft identity platform itself is the authorization server. Also called an *identity provider* or *IdP*, it securely handles the end-user's information, their access, and the trust relationships between the parties in the auth flow. The authorization server issues the security tokens your apps and APIs use for granting, denying, or revoking access to resources (authorization) after the user has signed in (authenticated).
+* **Authorization server** - The identity platform is the authorization server. Also called an *identity provider* or *IdP*, it securely handles the end-user's information, their access, and the trust relationships between the parties in the auth flow. The authorization server issues the security tokens your apps and APIs use for granting, denying, or revoking access to resources (authorization) after the user has signed in (authenticated).
* **Client** - The client in an OAuth exchange is the application requesting access to a protected resource. The client could be a web app running on a server, a single-page web app running in a user's web browser, or a web API that calls another web API. You'll often see the client referred to as *client application*, *application*, or *app*.
-* **Resource owner** - The resource owner in an auth flow is typically the application user, or *end-user* in OAuth terminology. The end-user "owns" the protected resource--their data--your app accesses on their behalf. The resource owner can grant or deny your app (the client) access to the resources they own. For example, your app might call an external system's API to get a user's email address from their profile on that system. Their profile data is a resource the end-user owns on the external system, and the end-user can consent to or deny your app's request to access their data.
+* **Resource owner** - The resource owner in an auth flow is usually the application user, or *end-user* in OAuth terminology. The end-user "owns" the protected resource (their data) which your app accesses on their behalf. The resource owner can grant or deny your app (the client) access to the resources they own. For example, your app might call an external system's API to get a user's email address from their profile on that system. Their profile data is a resource the end-user owns on the external system, and the end-user can consent to or deny your app's request to access their data.
* **Resource server** - The resource server hosts or provides access to a resource owner's data. Most often, the resource server is a web API fronting a data store. The resource server relies on the authorization server to perform authentication and uses information in bearer tokens issued by the authorization server to grant or deny access to resources.

## Tokens
-The parties in an authentication flow use **bearer tokens** to assure, verify, and authenticate a principal (user, host, or service) and to grant or deny access to protected resources (authorization). Bearer tokens in the Microsoft identity platform are formatted as [JSON Web Tokens](https://tools.ietf.org/html/rfc7519) (JWT).
+The parties in an authentication flow use **bearer tokens** to assure, verify, and authenticate a principal (user, host, or service) and to grant or deny access to protected resources (authorization). Bearer tokens in the identity platform are formatted as [JSON Web Tokens](https://tools.ietf.org/html/rfc7519) (JWT).
-Three types of bearer tokens are used by the Microsoft identity platform as *security tokens*:
+Three types of bearer tokens are used by the identity platform as *security tokens*:
* [Access tokens](access-tokens.md) - Access tokens are issued by the authorization server to the client application. The client passes access tokens to the resource server. Access tokens contain the permissions the client has been granted by the authorization server. * [ID tokens](id-tokens.md) - ID tokens are issued by the authorization server to the client application. Clients use ID tokens when signing in users and to get basic information about them.
-* [Refresh tokens](refresh-tokens.md) - The client uses a refresh token, or *RT*, to request new access and ID tokens from the authorization server. Your code should treat refresh tokens and their string content as opaque because they're intended for use only by authorization server.
+* [Refresh tokens](refresh-tokens.md) - The client uses a refresh token, or *RT*, to request new access and ID tokens from the authorization server. Your code should treat refresh tokens and their string content as sensitive data because they're intended for use only by the authorization server.
## App registration
-Your client app needs a way to trust the security tokens issued to it by the Microsoft identity platform. The first step in establishing that trust is by [registering your app](quickstart-register-app.md) with the identity platform in Azure Active Directory (Azure AD).
-
-When you register your app in Azure AD, the Microsoft identity platform automatically assigns it some values, while others you configure based on the application's type.
+Your client app needs a way to trust the security tokens issued to it by the identity platform. The first step in establishing trust is by [registering your app](quickstart-register-app.md). When you register your app, the identity platform automatically assigns it some values, while others you configure based on the application's type.
Two of the most commonly referenced app registration settings are:
-* **Application (client) ID** - Also called _application ID_ and _client ID_, this value is assigned to your app by the Microsoft identity platform. The client ID uniquely identifies your app in the identity platform and is included in the security tokens the platform issues.
+* **Application (client) ID** - Also called *application ID* and *client ID*, this value is assigned to your app by the identity platform. The client ID uniquely identifies your app in the identity platform and is included in the security tokens the platform issues.
* **Redirect URI** - The authorization server uses a redirect URI to direct the resource owner's *user-agent* (web browser, mobile app) to another destination after completing their interaction. For example, after the end-user authenticates with the authorization server. Not all client types use redirect URIs.

Your app's registration also holds information about the authentication and authorization *endpoints* you'll use in your code to get ID and access tokens.

## Endpoints
-The Microsoft identity platform offers authentication and authorization services using standards-compliant implementations of OAuth 2.0 and OpenID Connect (OIDC) 1.0. Standards-compliant authorization servers like the Microsoft identity platform provide a set of HTTP endpoints for use by the parties in an auth flow to execute the flow.
+The identity platform offers authentication and authorization services using standards-compliant implementations of OAuth 2.0 and OpenID Connect (OIDC) 1.0. Standards-compliant authorization servers like the identity platform provide a set of HTTP endpoints for use by the parties in an auth flow to execute the flow.
-The endpoint URIs for your app are generated for you when you register or configure your app in Azure AD. The endpoints you use in your app's code depend on the application's type and the identities (account types) it should support.
+The endpoint URIs for your app are generated automatically when you register or configure your app. The endpoints you use in your app's code depend on the application's type and the identities (account types) it should support.
Two commonly used endpoints are the [authorization endpoint](v2-oauth2-auth-code-flow.md#request-an-authorization-code) and [token endpoint](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). Here are examples of the `authorize` and `token` endpoints:
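The v2.0 endpoints follow this well-known pattern, where `{tenant}` can be a tenant ID, a verified domain name, or a value such as `common`, `organizations`, or `consumers`:

```
https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
```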
Next, learn about the OAuth 2.0 authentication flows used by each application ty
* [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)
* [Microsoft Authentication Library (MSAL)](msal-overview.md)
-**We strongly advise against crafting your own library or raw HTTP calls to execute authentication flows.** A [Microsoft authentication library](reference-v2-libraries.md) is safer and much easier. However, if your scenario prevents you from using our libraries or you'd just like to learn more about the identity platform's implementation, we have protocol reference:
+**We strongly advise against crafting your own library or raw HTTP calls to execute authentication flows.** A [Microsoft Authentication Library](reference-v2-libraries.md) is safer and much easier. However, if your scenario prevents you from using our libraries or you'd just like to learn more about the identity platform's implementation, we provide protocol reference documentation:
* [Authorization code grant flow](v2-oauth2-auth-code-flow.md) - Single-page apps (SPA), mobile apps, native (desktop) applications
* [Client credentials flow](v2-oauth2-client-creds-grant-flow.md) - Server-side processes, scripts, daemons
active-directory Msal Acquire Cache Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-acquire-cache-tokens.md
Previously updated : 03/22/2022 Last updated : 02/27/2023

#Customer intent: As an application developer, I want to learn about acquiring and caching tokens so my app can support authentication and authorization.
The format of the scope value varies depending on the resource (the API) receivi
For Microsoft Graph only, the `user.read` scope maps to `https://graph.microsoft.com/User.Read`, and both scope formats can be used interchangeably.
-Certain web APIs such as the Azure Resource Manager API (`https://management.core.windows.net/`) expect a trailing forward slash ('/') in the audience claim (`aud`) of the access token. In this case, pass the scope as `https://management.core.windows.net//user_impersonation`, including the double forward slash ('//').
+Certain web APIs such as the Azure Resource Manager API (`https://management.core.windows.net/`) expect a trailing forward slash (`/`) in the audience claim (`aud`) of the access token. In this case, pass the scope as `https://management.core.windows.net//user_impersonation`, including the double forward slash (`//`).
Other APIs might require that *no scheme or host* is included in the scope value, and expect only the app ID (a GUID) and the scope name, for example:
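An illustrative value of that shape (the GUID here is a placeholder app ID, not a value from the original article):

```
00001111-aaaa-2222-bbbb-3333cccc4444/user_impersonation
```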
MSAL maintains a token cache (or two caches for confidential client applications
### Recommended call pattern for public client applications
-Application code should first try to get a token silently from the cache. If the method call returns a "UI required" error or exception, try acquiring a token by other means.
+Application source code should first try to get a token silently from the cache. If the method call returns a "UI required" error or exception, try acquiring a token by other means.
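A minimal sketch of that pattern, assuming MSAL.NET with an `IPublicClientApplication` named `app`, a cached `account`, and a `scopes` array:

```csharp
AuthenticationResult result;
try
{
    // Try the cache first; this succeeds without showing any UI if a valid token exists.
    result = await app.AcquireTokenSilent(scopes, account).ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    // The cache couldn't satisfy the request; fall back to interactive sign-in.
    result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
}
```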
There are two flows, however, in which you **should not** attempt to silently acquire a token:
In public client applications like desktop and mobile apps, you can:
### Confidential client applications
-For confidential client applications (web app, web API, or a daemon application like a Windows service), you:
+For confidential client applications (web app, web API, or a daemon application like a Windows service), you can:
- Acquire tokens **for the application itself** and not for a user, using the [client credentials flow](msal-authentication-flows.md#client-credentials). This technique can be used for syncing tools, or tools that process users in general and not a specific user. (See the sketch after this list.)
- Use the [on-behalf-of (OBO) flow](msal-authentication-flows.md#on-behalf-of-obo) for a web API to call an API on behalf of the user. The application is identified with client credentials in order to acquire a token based on a user assertion (SAML, for example, or a JWT token). This flow is used by applications that need to access resources of a particular user in service-to-service calls.
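Here's the sketch for the first bullet: acquiring an app-only token with MSAL.NET's client credentials flow (the IDs and secret are placeholders):

```csharp
using Microsoft.Identity.Client;

IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
    .Create("<client-id>")                                          // placeholder
    .WithClientSecret("<client-secret>")                            // or .WithCertificate(...)
    .WithAuthority("https://login.microsoftonline.com/<tenant-id>") // placeholder
    .Build();

// No user is involved; the token represents the application itself.
AuthenticationResult result = await app
    .AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
    .ExecuteAsync();
```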
active-directory Quickstart Configure App Expose Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-expose-web-apis.md
Previously updated : 03/25/2022 Last updated : 02/27/2023

#Customer intent: As an application developer, I need to learn how to register my web API with the Microsoft identity platform and expose permissions (scopes) to make the API's resources available to users of my client application.

# Quickstart: Configure an application to expose a web API
-In this quickstart, you register a web API with the Microsoft identity platform and expose it to client apps by adding an example scope. By registering your web API and exposing it through scopes, you can provide permissions-based access to its resources to authorized users and client apps that access your API.
+In this quickstart, you'll register a web API with the Microsoft identity platform and expose it to client apps by adding a scope. By registering your web API and exposing it through scopes, you can provide permissions-based access to its resources to authorized users and client apps that access your API.
## Prerequisites
To provide scoped access to the resources in your web API, you first need to reg
1. Skip the **Add a redirect URI** and **Configure platform settings** sections. You don't need to configure a redirect URI for a web API since no user is logged in interactively.
1. Skip the **Add credentials** section for now. Only if your API accesses a downstream API would it need its own credentials, a scenario not covered in this article.
-With your web API registered, you're ready to add the scopes that your API's code can use to provide granular permission to consumers of your API.
+With the web API registered, you can add scopes to the API's code so it can provide granular permission to consumers.
## Add a scope
To add the `Employees.Write.All` example scope, follow the steps in the [Add a s
## Verify the exposed scopes
-If you successfully added both example scopes described in the previous sections, they'll appear in the **Expose an API** pane of your web API's app registration, similar to this image:
+If you have successfully added both example scopes described in the previous sections, they'll appear in the **Expose an API** pane of your web API's app registration, similar to the following image:
:::image type="content" source="media/quickstart-configure-app-expose-web-apis/portal-03-scopes-list.png" alt-text="Screenshot of the Expose an API pane showing two exposed scopes.":::
For example, if your web API's application ID URI is `https://contoso.com/api` a
## Using the exposed scopes
-In the next article in the series, you configure a client app's registration with access to your web API and the scopes you defined by following the steps in this article.
+In the next article in this series, you configure a client app's registration with access to your web API and the scopes you defined by following the steps in this article.
-Once a client app registration is granted permission to access your web API, the client can be issued an OAuth 2.0 access token by the Microsoft identity platform. When the client calls the web API, it presents an access token whose scope (`scp`) claim is set to the permissions you've specified in the client's app registration.
+Once a client app registration is granted permission to access your web API, the client can be issued an OAuth 2.0 access token by the identity platform. When the client calls the web API, it presents an access token whose scope (`scp`) claim is set to the permissions you've specified in the client's app registration.
-You can expose additional scopes later as necessary. Consider that your web API can expose multiple scopes associated with several operations. Your resource can control access to the web API at runtime by evaluating the scope (`scp`) claim(s) in the OAuth 2.0 access token it receives.
+You can expose additional scopes later as necessary. Consider that your web API can expose multiple scopes associated with several operations. Your resource can control access to the web API at runtime by evaluating the scope (`scp`) claims in the OAuth 2.0 access token it receives.
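A hedged sketch of that runtime check, as a fragment inside an ASP.NET Core controller action; the scope name reuses this article's example, and the claim-type fallback is illustrative:

```csharp
// The scp claim is a space-separated list of scopes granted to the caller.
string scopeClaim = User.FindFirst("http://schemas.microsoft.com/identity/claims/scope")?.Value
                    ?? User.FindFirst("scp")?.Value
                    ?? string.Empty;

if (!scopeClaim.Split(' ').Contains("Employees.Read.All"))
{
    return Forbid(); // 403: the access token lacks the scope this operation requires
}
```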
## Next steps
active-directory Tutorial V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-desktop.md
Previously updated : 12/12/2019 Last updated : 02/21/2023
# Tutorial: Sign in users and call Microsoft Graph in Windows Presentation Foundation (WPF) desktop app
-In this tutorial, you build a native Windows Desktop .NET (XAML) app that signs in users and gets an access token to call the Microsoft Graph API.
+In this tutorial, you'll build a native Windows Desktop .NET (XAML) app that signs in users and gets an access token to call the Microsoft Graph API.
-When you've completed the guide, your application will be able to call a protected API that uses personal accounts (including outlook.com, live.com, and others). The application will also use work and school accounts from any company or organization that uses Azure Active Directory.
+When you've completed the guide, your application will be able to call a protected API that uses personal accounts (including outlook.com, live.com, and others). The application will also use work and school accounts from any company or organization that uses Azure Active Directory (Azure AD).
In this tutorial:

> [!div class="checklist"]
-> * Create a *Windows Presentation Foundation (WPF)* project in Visual Studio
-> * Install the Microsoft Authentication Library (MSAL) for .NET
-> * Register the application in the Azure portal
-> * Add code to support user sign-in and sign-out
-> * Add code to call Microsoft Graph API
-> * Test the app
+>
+> - Create a _Windows Presentation Foundation (WPF)_ project in Visual Studio
+> - Install the Microsoft Authentication Library (MSAL) for .NET
+> - Register the application in the Azure portal
+> - Add code to support user sign-in and sign-out
+> - Add code to call Microsoft Graph API
+> - Test the app
## Prerequisites
-* [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
+- [.NET Framework 4.8](https://dotnet.microsoft.com/en-us/download/dotnet-framework/net48)
+- [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
## How the sample app generated by this guide works
MSAL manages caching and refreshing access tokens for you, so that your applicat
This guide uses the following NuGet packages:
-|Library|Description|
-|||
-|[Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)|Microsoft Authentication Library (MSAL.NET)|
+| Library | Description |
+| - | - |
+| [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) | Microsoft Authentication Library (MSAL.NET) |
## Set up your project
-In this section you create a new project to demonstrate how to integrate a Windows Desktop .NET application (XAML) with *Sign-In with Microsoft* so that the application can query web APIs that require a token.
+In this section, you'll create a new project to demonstrate how to integrate a Windows Desktop .NET application (XAML) with _Sign-In with Microsoft_ so that the application can query web APIs that require a token.
-The application that you create with this guide displays a button that's used to call a graph, an area to show the results on the screen, and a sign-out button.
+The application that you'll create displays a button that'll call the Microsoft Graph API, an area to display the results, and a sign-out button.
> [!NOTE] > Prefer to download this sample's Visual Studio project instead? [Download a project](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2/archive/msal3x.zip), and skip to the [Configuration step](#register-your-application) to configure the code sample before you execute it.
->
-To create your application, do the following:
+Create the application using the following steps:
-1. In Visual Studio, select **File** > **New** > **Project**.
-2. Under **Templates**, select **Visual C#**.
-3. Select **WPF App (.NET Framework)**, depending on the version of Visual Studio version you're using.
+1. Open Visual Studio.
+1. On the start window, select **Create a new project**.
+1. In the **All languages** dropdown, select **C#**.
+1. Search for and choose the **WPF App (.NET Framework)** template, and then select **Next**.
+1. In the **Project name** box, enter a name like _Win-App-calling-MsGraph_.
+1. Choose a **Location** for the project or accept the default option.
+1. In the **Framework** dropdown, select **.NET Framework 4.8**.
+1. Select **Create**.
## Add MSAL to your project

1. In Visual Studio, select **Tools** > **NuGet Package Manager** > **Package Manager Console**.
2. In the Package Manager Console window, paste the following PowerShell command:
- ```powershell
- Install-Package Microsoft.Identity.Client -Pre
- ```
-
- > [!NOTE]
- > This command installs the Microsoft Authentication Library. MSAL handles acquiring, caching, and refreshing user tokens that are used to access the APIs that are protected by Azure Active Directory v2.0
- >
+ ```powershell
+ Install-Package Microsoft.Identity.Client -Pre
+ ```
## Register your application
You can register your application in either of two ways.
### Option 1: Express mode
-You can quickly register your application by doing the following:
+Use the following steps to register your application:
+
1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
1. Enter a name for your application and select **Register**.
-1. Follow the instructions to download and automatically configure your new application with just one click.
+1. Follow the instructions to download and automatically configure your new application.
### Option 2: Advanced mode
-To register your application and add your application registration information to your solution, do the following:
+To register and configure your application, follow these steps:
+
1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
1. Search for and select **Azure Active Directory**.
To register your application and add your application registration information t
1. Select **Mobile and desktop applications**.
1. In the **Redirect URIs** section, select **https://login.microsoftonline.com/common/oauth2/nativeclient**.
1. Select **Configure**.
-1. Go to Visual Studio, open the *App.xaml.cs* file, and then replace `Enter_the_Application_Id_here` in the code snippet below with the application ID that you just registered and copied.
-
- ```csharp
- private static string ClientId = "Enter_the_Application_Id_here";
- ```
## Add the code to initialize MSAL

In this step, you create a class to handle interaction with MSAL, such as handling of tokens.
-1. Open the *App.xaml.cs* file, and then add the reference for MSAL to the class:
-
- ```csharp
- using Microsoft.Identity.Client;
- ```
- <!-- Workaround for Docs conversion bug -->
+1. Open the _App.xaml.cs_ file, and then add the reference for MSAL to the class:
-2. Update the app class to the following:
+ ```csharp
+ using Microsoft.Identity.Client;
+ ```
- ```csharp
- public partial class App : Application
- {
- static App()
- {
- _clientApp = PublicClientApplicationBuilder.Create(ClientId)
- .WithAuthority(AzureCloudInstance.AzurePublic, Tenant)
- .WithDefaultRedirectUri()
- .Build();
- }
+ <!-- Workaround for Docs conversion bug -->
- // Below are the clientId (Application Id) of your app registration and the tenant information.
- // You have to replace:
- // - the content of ClientID with the Application Id for your app registration
- // - the content of Tenant by the information about the accounts allowed to sign-in in your application:
- // - For Work or School account in your org, use your tenant ID, or domain
- // - for any Work or School accounts, use `organizations`
- // - for any Work or School accounts, or Microsoft personal account, use `common`
- // - for Microsoft Personal account, use consumers
- private static string ClientId = "0b8b0665-bc13-4fdc-bd72-e0227b9fc011";
-
- private static string Tenant = "common";
-
- private static IPublicClientApplication _clientApp ;
+2. Update the app class to the following:
- public static IPublicClientApplication PublicClientApp { get { return _clientApp; } }
- }
- ```
+ ```csharp
+ public partial class App : Application
+ {
+ static App()
+ {
+ _clientApp = PublicClientApplicationBuilder.Create(ClientId)
+ .WithAuthority(AzureCloudInstance.AzurePublic, Tenant)
+ .WithDefaultRedirectUri()
+ .Build();
+ }
+
+ // Below are the clientId (Application Id) of your app registration and the tenant information.
+ // You have to replace:
+ // - the content of ClientID with the Application Id for your app registration
+ // - the content of Tenant by the information about the accounts allowed to sign-in in your application:
+ // - For Work or School account in your org, use your tenant ID, or domain
+ // - for any Work or School accounts, use `organizations`
+ // - for any Work or School accounts, or Microsoft personal account, use `common`
+ // - for Microsoft Personal account, use consumers
+ private static string ClientId = "Enter_the_Application_Id_here";
+
+ private static string Tenant = "common";
+
+ private static IPublicClientApplication _clientApp ;
+
+ public static IPublicClientApplication PublicClientApp { get { return _clientApp; } }
+ }
+ ```
## Create the application UI

This section shows how an application can query a protected back-end server such as Microsoft Graph.
-A *MainWindow.xaml* file should automatically be created as a part of your project template. Open this file, and then replace your application's *\<Grid>* node with the following code:
+A _MainWindow.xaml_ file is automatically created as part of your project template. Open this file, and then replace your application's _\<Grid>_ node with the following code:
```xml
<Grid>
A *MainWindow.xaml* file should automatically be created as a part of your proje
In this section, you use MSAL to get a token for the Microsoft Graph API.
-1. In the *MainWindow.xaml.cs* file, add the reference for MSAL to the class:
-
- ```csharp
- using Microsoft.Identity.Client;
- ```
-
-2. Replace the `MainWindow` class code with the following:
-
- ```csharp
- public partial class MainWindow : Window
- {
- //Set the API Endpoint to Graph 'me' endpoint
- string graphAPIEndpoint = "https://graph.microsoft.com/v1.0/me";
-
- //Set the scope for API call to user.read
- string[] scopes = new string[] { "user.read" };
--
- public MainWindow()
- {
- InitializeComponent();
- }
-
- /// <summary>
- /// Call AcquireToken - to acquire a token requiring user to sign-in
- /// </summary>
- private async void CallGraphButton_Click(object sender, RoutedEventArgs e)
- {
- AuthenticationResult authResult = null;
- var app = App.PublicClientApp;
- ResultText.Text = string.Empty;
- TokenInfoText.Text = string.Empty;
-
- var accounts = await app.GetAccountsAsync();
- var firstAccount = accounts.FirstOrDefault();
-
- try
- {
- authResult = await app.AcquireTokenSilent(scopes, firstAccount)
- .ExecuteAsync();
- }
- catch (MsalUiRequiredException ex)
- {
- // A MsalUiRequiredException happened on AcquireTokenSilent.
- // This indicates you need to call AcquireTokenInteractive to acquire a token
- System.Diagnostics.Debug.WriteLine($"MsalUiRequiredException: {ex.Message}");
-
- try
- {
- authResult = await app.AcquireTokenInteractive(scopes)
- .WithAccount(accounts.FirstOrDefault())
- .WithPrompt(Prompt.SelectAccount)
- .ExecuteAsync();
- }
- catch (MsalException msalex)
- {
- ResultText.Text = $"Error Acquiring Token:{System.Environment.NewLine}{msalex}";
- }
- }
- catch (Exception ex)
- {
- ResultText.Text = $"Error Acquiring Token Silently:{System.Environment.NewLine}{ex}";
- return;
- }
-
- if (authResult != null)
- {
- ResultText.Text = await GetHttpContentWithToken(graphAPIEndpoint, authResult.AccessToken);
- DisplayBasicTokenInfo(authResult);
- this.SignOutButton.Visibility = Visibility.Visible;
- }
- }
- }
- ```
+1. In the _MainWindow.xaml.cs_ file, add the reference for MSAL to the class:
+
+ ```csharp
+ using Microsoft.Identity.Client;
+ ```
+
+2. Replace the `MainWindow` class code with the following code:
+
+ ```csharp
+ public partial class MainWindow : Window
+ {
+ //Set the API Endpoint to Graph 'me' endpoint
+ string graphAPIEndpoint = "https://graph.microsoft.com/v1.0/me";
+
+ //Set the scope for API call to user.read
+ string[] scopes = new string[] { "user.read" };
+
+ public MainWindow()
+ {
+ InitializeComponent();
+ }
+
+ /// <summary>
+ /// Call AcquireToken - to acquire a token requiring user to sign-in
+ /// </summary>
+ private async void CallGraphButton_Click(object sender, RoutedEventArgs e)
+ {
+ AuthenticationResult authResult = null;
+ var app = App.PublicClientApp;
+ ResultText.Text = string.Empty;
+ TokenInfoText.Text = string.Empty;
+
+ var accounts = await app.GetAccountsAsync();
+ var firstAccount = accounts.FirstOrDefault();
+
+ try
+ {
+ authResult = await app.AcquireTokenSilent(scopes, firstAccount)
+ .ExecuteAsync();
+ }
+ catch (MsalUiRequiredException ex)
+ {
+ // A MsalUiRequiredException happened on AcquireTokenSilent.
+ // This indicates you need to call AcquireTokenInteractive to acquire a token
+ System.Diagnostics.Debug.WriteLine($"MsalUiRequiredException: {ex.Message}");
+
+ try
+ {
+ authResult = await app.AcquireTokenInteractive(scopes)
+ .WithAccount(accounts.FirstOrDefault())
+ .WithPrompt(Prompt.SelectAccount)
+ .ExecuteAsync();
+ }
+ catch (MsalException msalex)
+ {
+ ResultText.Text = $"Error Acquiring Token:{System.Environment.NewLine}{msalex}";
+ }
+ }
+ catch (Exception ex)
+ {
+ ResultText.Text = $"Error Acquiring Token Silently:{System.Environment.NewLine}{ex}";
+ return;
+ }
+
+ if (authResult != null)
+ {
+ ResultText.Text = await GetHttpContentWithToken(graphAPIEndpoint, authResult.AccessToken);
+ DisplayBasicTokenInfo(authResult);
+ this.SignOutButton.Visibility = Visibility.Visible;
+ }
+ }
+ }
+ ```
### More information

#### Get a user token interactively
-Calling the `AcquireTokenInteractive` method results in a window that prompts users to sign in. Applications usually require users to sign in interactively the first time they need to access a protected resource. They might also need to sign in when a silent operation to acquire a token fails (for example, when a user's password is expired).
+Calling the `AcquireTokenInteractive` method results in a window that prompts users to sign in. Applications usually require users to sign in interactively the first time they need to access a protected resource. They might also need to sign in when a silent operation to acquire a token fails (for example, when a user's password has expired).
#### Get a user token silently

The `AcquireTokenSilent` method handles token acquisitions and renewals without any user interaction. After `AcquireTokenInteractive` is executed for the first time, `AcquireTokenSilent` is the usual method to use to obtain tokens that access protected resources for subsequent calls, because calls to request or renew tokens are made silently.
-Eventually, the `AcquireTokenSilent` method will fail. Reasons for failure might be that the user has either signed out or changed their password on another device. When MSAL detects that the issue can be resolved by requiring an interactive action, it fires an `MsalUiRequiredException` exception. Your application can handle this exception in two ways:
+Eventually, the `AcquireTokenSilent` method may fail. Reasons for failure might be that the user has either signed out or changed their password on another device. When MSAL detects that the issue can be resolved by requiring an interactive action, it throws an `MsalUiRequiredException`. Your application can handle this exception in two ways:
-* It can make a call against `AcquireTokenInteractive` immediately. This call results in prompting the user to sign in. This pattern is usually used in online applications where there is no available offline content for the user. The sample generated by this guided setup follows this pattern, which you can see in action the first time you execute the sample.
+- It can call `AcquireTokenInteractive` immediately, which prompts the user to sign in. This pattern is used in online applications where there's no offline content available to the user. The sample generated by this setup follows this pattern, which you can see in action the first time you execute the sample.
-* Because no user has used the application, `PublicClientApp.Users.FirstOrDefault()` contains a null value, and an `MsalUiRequiredException` exception is thrown.
+  - Because no user has used the application, `PublicClientApp.Users.FirstOrDefault()` contains a null value, and an `MsalUiRequiredException` exception is thrown.
-* The code in the sample then handles the exception by calling `AcquireTokenInteractive`, which results in prompting the user to sign in.
+  - The code in the sample then handles the exception by calling `AcquireTokenInteractive`, which results in prompting the user to sign in.
-* It can instead present a visual indication to users that an interactive sign-in is required, so that they can select the right time to sign in. Or the application can retry `AcquireTokenSilent` later. This pattern is frequently used when users can use other application functionality without disruption--for example, when offline content is available in the application. In this case, users can decide when they want to sign in to either access the protected resource or refresh the outdated information. Alternatively, the application can decide to retry `AcquireTokenSilent` when the network is restored after having been temporarily unavailable.
+- It can instead present a visual indication to users that an interactive sign-in is required, so that they can select the right time to sign in, or the application can retry `AcquireTokenSilent` later. This pattern is frequently used when users can use other application functionality without disruption, for example, when offline content is available in the application. In this case, users can decide when they want to sign in to either access the protected resource or refresh the outdated information. Alternatively, the application can decide to retry `AcquireTokenSilent` when the network is restored after having been temporarily unavailable. A sketch of this pattern follows this list.
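
The following is a minimal sketch of that second pattern, not the sample's actual code. The `SignInPromptPanel` element is hypothetical; `app`, `scopes`, `firstAccount`, and `authResult` are the names used in the `MainWindow` code earlier in this article:

```csharp
try
{
    // Try silent acquisition first; this succeeds when a cached or renewable token exists.
    authResult = await app.AcquireTokenSilent(scopes, firstAccount)
        .ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    // Instead of prompting immediately, surface a visual cue (SignInPromptPanel is a
    // hypothetical XAML element) so the user can choose when to sign in interactively.
    SignInPromptPanel.Visibility = Visibility.Visible;
    return;
}
```
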
## Call the Microsoft Graph API by using the token you just obtained
public async Task<string> GetHttpContentWithToken(string url, string token)
### More information about making a REST call against a protected API
-In this sample application, you use the `GetHttpContentWithToken` method to make an HTTP `GET` request against a protected resource that requires a token and then return the content to the caller. This method adds the acquired token in the HTTP Authorization header. For this sample, the resource is the Microsoft Graph API *me* endpoint, which displays the user's profile information.
+In this sample application, you use the `GetHttpContentWithToken` method to make an HTTP `GET` request against a protected resource that requires a token and then return the content to the caller. This method adds the acquired token to the HTTP Authorization header. For this sample, the resource is the Microsoft Graph API _me_ endpoint, which displays the user's profile information.
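
The body of `GetHttpContentWithToken` isn't shown in full above. For reference, a minimal sketch of such a method (the sample's own implementation may differ in its details) looks like this:

```csharp
/// <summary>
/// Perform an HTTP GET request to a URL using an HTTP Authorization header
/// </summary>
/// <param name="url">The URL</param>
/// <param name="token">The access token</param>
/// <returns>String containing the results of the GET operation</returns>
public async Task<string> GetHttpContentWithToken(string url, string token)
{
    var httpClient = new System.Net.Http.HttpClient();
    try
    {
        var request = new System.Net.Http.HttpRequestMessage(System.Net.Http.HttpMethod.Get, url);
        // Add the acquired token to the HTTP Authorization header as a Bearer token.
        request.Headers.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);
        var response = await httpClient.SendAsync(request);
        return await response.Content.ReadAsStringAsync();
    }
    catch (Exception ex)
    {
        return ex.ToString();
    }
}
```
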
## Add a method to sign out a user
private async void SignOutButton_Click(object sender, RoutedEventArgs e)
### More information about user sign-out
-The `SignOutButton_Click` method removes users from the MSAL user cache, which effectively tells MSAL to forget the current user so that a future request to acquire a token will succeed only if it is made to be interactive.
+The `SignOutButton_Click` method removes users from the MSAL user cache, which effectively tells MSAL to forget the current user, so that a future request to acquire a token succeeds only if it's made interactively.
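
As a sketch, and assuming the `ResultText`, `CallGraphButton`, and `SignOutButton` controls referenced earlier in this article, the sign-out handler can look like the following (the sample's exact implementation may differ):

```csharp
/// <summary>
/// Sign out the current user by removing the account from the MSAL cache
/// </summary>
private async void SignOutButton_Click(object sender, RoutedEventArgs e)
{
    var accounts = await App.PublicClientApp.GetAccountsAsync();
    if (accounts.Any())
    {
        try
        {
            // Remove the first (and, in this sample, only) signed-in account.
            await App.PublicClientApp.RemoveAsync(accounts.FirstOrDefault());
            ResultText.Text = "User has signed out";
            CallGraphButton.Visibility = Visibility.Visible;
            SignOutButton.Visibility = Visibility.Collapsed;
        }
        catch (MsalException ex)
        {
            ResultText.Text = $"Error signing out user: {ex.Message}";
        }
    }
}
```
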
Although the application in this sample supports single users, MSAL supports scenarios where multiple accounts can be signed in at the same time. An example is an email application where a user has multiple accounts.

## Display basic token information
-To display basic information about the token, add the following method to your *MainWindow.xaml.cs* file:
+To display basic information about the token, add the following method to your _MainWindow.xaml.cs_ file:
```csharp /// <summary>
private void DisplayBasicTokenInfo(AuthenticationResult authResult)
### More information
-In addition to the access token that's used to call the Microsoft Graph API, after the user signs in, MSAL also obtains an ID token. This token contain a small subset of information that's pertinent to users. The `DisplayBasicTokenInfo` method displays the basic information that's contained in the token. For example, it displays the user's display name and ID, as well as the token expiration date and the string representing the access token itself. You can select the *Call Microsoft Graph API* button multiple times and see that the same token was reused for subsequent requests. You can also see the expiration date being extended when MSAL decides it is time to renew the token.
+In addition to the access token that's used to call the Microsoft Graph API, after the user signs in, MSAL also obtains an ID token. This token contains a small subset of information that's pertinent to users. The `DisplayBasicTokenInfo` method displays the basic information that's contained in the token. For example, it displays the user's display name and ID, as well as the token expiration date and the string representing the access token itself. You can select the _Call Microsoft Graph API_ button multiple times and see that the same token was reused for subsequent requests. You can also see the expiration date being extended when MSAL decides it's time to renew the token.
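
A minimal sketch of `DisplayBasicTokenInfo`, assuming the `TokenInfoText` control used earlier and the standard `AuthenticationResult` properties exposed by MSAL (the sample's exact implementation may differ):

```csharp
/// <summary>
/// Display basic information contained in the token
/// </summary>
private void DisplayBasicTokenInfo(AuthenticationResult authResult)
{
    TokenInfoText.Text = "";
    if (authResult != null)
    {
        TokenInfoText.Text += $"Username: {authResult.Account.Username}" + Environment.NewLine;
        TokenInfoText.Text += $"Token Expires: {authResult.ExpiresOn.ToLocalTime()}" + Environment.NewLine;
        TokenInfoText.Text += $"Access Token: {authResult.AccessToken}" + Environment.NewLine;
    }
}
```
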
[!INCLUDE [5. Test and Validate](../../../includes/active-directory-develop-guidedsetup-windesktop-test.md)]
In addition to the access token that's used to call the Microsoft Graph API, aft
Learn more about building desktop apps that call protected web APIs in our multi-part scenario series:
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Scenario: Desktop app that calls web APIs](scenario-desktop-overview.md)
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
Title: B2B direct connect overview - Azure AD
+ Title: B2B direct connect Azure AD overview
description: Azure Active Directory B2B direct connect lets users from other Azure AD tenants seamlessly sign in to your shared resources via Teams shared channels. There's no need for a guest user object in your Azure AD directory. Previously updated : 10/12/2022 Last updated : 02/20/2023 -+ # B2B direct connect overview
Azure AD organizations can manage their trust relationships with other Azure AD
> B2B direct connect is possible only when both organizations allow access to and from the other organization. For example, Contoso can allow inbound B2B direct connect from Fabrikam, but sharing isn't possible until Fabrikam also enables outbound B2B direct connect with Contoso. Therefore, you'll need to coordinate with the external organization's admin to make sure their cross-tenant access settings allow sharing with you. This mutual agreement is important because B2B direct connect enables limited sharing of data for the users you enable for B2B direct connect.

### Default settings
-The default cross-tenant access settings apply to all external Azure AD organizations, except organizations for which you've configured individual settings. Initially, Azure AD blocks all inbound and outbound B2B direct connect capabilities by default for all external Azure AD tenants. You can change these default settings, but typically you'll leave them as-is and enable B2B direct connect access with individual organizations.
+The default cross-tenant access settings apply to all external Azure AD organizations, except organizations for which you've configured individual settings. Initially, Azure AD blocks all inbound and outbound B2B direct connect capabilities by default for all external Azure AD tenants. You can change these default settings, but typically you can leave them as-is and enable B2B direct connect access with individual organizations.
### Organization-specific settings
-You can configure organization-specific settings by adding the organization and modifying the cross-tenant access settings. These settings will then take precedence over the default settings for this organization.
+You can configure organization-specific settings by adding the organization and modifying the cross-tenant access settings. These settings then take precedence over the default settings for this organization.
### Example 1: Allow B2B direct connect with Fabrikam and block all others
For this scenario to work, Fabrikam also needs to allow B2B direct connect with
### Example 2: Enable B2B direct connect with Fabrikam's Marketing group only
-Starting from the example above, Contoso could also choose to allow only the Fabrikam Marketing group to collaborate with Contoso's users through B2B direct connect. In this case, Contoso will need to obtain the Marketing group's object ID from Fabrikam. Then, instead of allowing inbound access to all Fabrikam's users, they'll configure their Fabrikam-specific access settings as follows:
+Starting from the example above, Contoso could also choose to allow only the Fabrikam Marketing group to collaborate with Contoso's users through B2B direct connect. In this case, Contoso needs to obtain the Marketing group's object ID from Fabrikam. Then, instead of allowing inbound access to all Fabrikam's users, they'll configure their Fabrikam-specific access settings as follows:
- Allow inbound access to B2B direct connect for Fabrikam's Marketing group only. Contoso specifies Fabrikam's Marketing group object ID in the allowed users and groups list.
- Allow inbound access to all internal Contoso applications by Fabrikam B2B direct connect users.
In your cross-tenant access settings, you can use **Trust settings** to trust cl
Currently, B2B direct connect enables the Teams Connect shared channels feature. B2B direct connect users can access an external organization's Teams shared channel without having to switch tenants or sign in with a different account. The B2B direct connect user's access is determined by the shared channel's policies.
-In the resource organization, the Teams shared channel owner can search within Teams for users from an external organization and add them to the shared channel. After they're added, the B2B direct connect users can access the shared channel from within their home instance of Teams, where they collaborate using features such as chat, calls, file-sharing, and app-sharing. For details, see [Overview of teams and channels in Microsoft Teams](/microsoftteams/teams-channels-overview). For details about the resources, files, and applications, that are available to the B2B direct connect user via the Teams shared channel, refer to [Chat, teams, channels, & apps in Microsoft Teams](/microsoftteams/deploy-chat-teams-channels-microsoft-teams-landing-page).
+In the resource organization, the Teams shared channel owner can search within Teams for users from an external organization and add them to the shared channel. After they're added, the B2B direct connect users can access the shared channel from within their home instance of Teams, where they collaborate using features such as chat, calls, file-sharing, and app-sharing. For details, see [Overview of teams and channels in Microsoft Teams](/microsoftteams/teams-channels-overview). For details about the resources, files, and applications that are available to the B2B direct connect user via the Teams shared channel, refer to [Chat, teams, channels, & apps in Microsoft Teams](/microsoftteams/deploy-chat-teams-channels-microsoft-teams-landing-page).
## B2B direct connect vs. B2B collaboration
-B2B collaboration and B2B direct connect are two different approaches to sharing with users outside of your organization. You'll find a [feature-to-feature comparison](external-identities-overview.md#comparing-external-identities-feature-sets) in the External Identities overview. Here, we'll discuss some key differences in how users are managed and how they access resources.
+B2B collaboration and B2B direct connect are two different approaches to sharing with users outside of your organization. You can find a [feature-to-feature comparison](external-identities-overview.md#comparing-external-identities-feature-sets) in the External Identities overview. The following sections discuss some key differences in how users are managed and how they access resources.
### User access and management
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/one-time-passcode.md
Previously updated : 09/16/2022 Last updated : 02/21/2023 -++
+# Customer intent: As a tenant administrator, I want to make sure that my users can authenticate themselves with a one-time passcode.
# Email one-time passcode authentication

The email one-time passcode feature is a way to authenticate B2B collaboration users when they can't be authenticated through other means, such as Azure AD, Microsoft account (MSA), or social identity providers. When a B2B guest user tries to redeem your invitation or sign in to your shared resources, they can request a temporary passcode, which is sent to their email address. Then they enter this passcode to continue signing in.
-![Diagram showing an overview of Email one-time passcode.](media/one-time-passcode/email-otp.png)
> [!IMPORTANT] >
At the time of invitation, there's no indication that the user you're inviting w
### Example
-Guest user teri@gmail.com is invited to Fabrikam, which doesn't have Google federation set up. Teri doesn't have a Microsoft account. They'll receive a one-time passcode for authentication.
+Guest user nicole@firstupconsultants.com is invited to Fabrikam, which doesn't have Google federation set up. Nicole doesn't have a Microsoft account. They'll receive a one-time passcode for authentication.
## Enable or disable email one-time passcodes
The email one-time passcode feature is now turned on by default for all new tena
- **Yes**: The toggle is set to **Yes** by default unless the feature has been explicitly turned off. To enable the feature, make sure **Yes** is selected.
- **No**: If you want to disable the email one-time passcode feature, select **No**.
- ![Screenshots showing the Email one-time passcode toggle.](media/one-time-passcode/email-one-time-passcode-toggle.png)
-1. Select **Save**.
+6. Select **Save**.
## Frequently asked questions
When we support the ability to disable Microsoft Account in the Identity provide
**Regarding the change to enable email one-time-passcode by default, does this include SharePoint and OneDrive integration with Azure AD B2B?**
-No, the global rollout of the change to enable email one-time passcode by default doesn't include enabling SharePoint and OneDrive integration with Azure AD B2B by default. To learn how to enable integration so that collaboration on SharePoint and OneDrive uses B2B capabilities, or how to disable this integration, see [SharePoint and OneDrive Integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration).
+No, the global rollout of the change to enable email one-time passcode by default doesn't include enabling SharePoint and OneDrive integration with Azure AD B2B by default. To learn how to enable or disable the integration of SharePoint and OneDrive with Azure AD B2B for secure collaboration, see [SharePoint and OneDrive Integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration).
## Next steps
-Learn about [Identity Providers for External Identities](identity-providers.md).
+Learn about [Identity Providers for External Identities](identity-providers.md), and how to reset [redemption status for a guest user](reset-redemption-status.md).
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
Title: B2B collaboration overview - Azure AD
+ Title: Azure AD B2B collaboration overview
description: Azure Active Directory B2B collaboration supports guest user access so you can securely share resources and collaborate with external partners. Previously updated : 02/14/2023 Last updated : 02/20/2023 -+ # B2B collaboration overview
Azure Active Directory (Azure AD) B2B collaboration is a feature within External
![Diagram illustrating B2B collaboration.](media/what-is-b2b/b2b-collaboration-overview.png)
-A simple invitation and redemption process lets partners use their own credentials to access your company's resources. You can also enable self-service sign-up user flows to let external users sign up for apps or resources themselves. Once the external user has redeemed their invitation or completed sign-up, they're represented in your directory as a [user object](user-properties.md). B2B collaboration user objects are typically given a user type of "guest" and can be identified by the #EXT# extension in their user principal name.
+A simple invitation and redemption process lets partners use their own credentials to access your company's resources. You can also enable self-service sign-up user flows to let external users sign up for apps or resources themselves. Once the external user has redeemed their invitation or completed sign-up, they're represented in your directory as a [user object](user-properties.md). The user type for these B2B collaboration users is typically set to "guest" and their user principal name contains the #EXT# identifier.
Developers can use Azure AD business-to-business APIs to customize the invitation process or write applications like self-service sign-up portals. For licensing and pricing information related to guest users, refer to [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).
Azure AD supports external identity providers like Facebook, Microsoft accounts,
## Integrate with SharePoint and OneDrive
-You can [enable integration with SharePoint and OneDrive](/sharepoint/sharepoint-azureb2b-integration) to share files, folders, list items, document libraries, and sites with people outside your organization, while using Azure B2B for authentication and management. The users you share resources with are typically added to your directory as guests, and permissions and groups work the same for these guests as they do for internal users. When enabling integration with SharePoint and OneDrive, you'll also enable the [email one-time passcode](one-time-passcode.md) feature in Azure AD B2B to serve as a fallback authentication method.
+You can [enable integration with SharePoint and OneDrive](/sharepoint/sharepoint-azureb2b-integration) to share files, folders, list items, document libraries, and sites with people outside your organization, while using Azure B2B for authentication and management. The users you share resources with are typically guest users in your directory, and permissions and groups work the same for these guests as they do for internal users. When enabling integration with SharePoint and OneDrive, you also enable the [email one-time passcode](one-time-passcode.md) feature in Azure AD B2B to serve as a fallback authentication method.
![Screenshot of the email one-time-passcode setting.](media/what-is-b2b/enable-email-otp-options.png)

## Next steps

-- [External Identities pricing](external-identities-pricing.md)
+- [Invitation email](invitation-email-elements.md)
- [Add B2B collaboration guest users in the portal](add-users-administrator.md)
-- [Understand the invitation redemption process](redemption-experience.md)
+- [B2B direct connect](b2b-direct-connect-overview.md)
active-directory Lifecycle Workflow History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-history.md
Workflows created using Lifecycle Workflows allow for the automation of lifecycl
## Lifecycle Workflow History Summaries
-Lifecycle Workflows introduce a history feature based on summaries and details. These history summaries allow you to quickly get information about for who a workflow ran, and whether or not this run was successful or not. This is valuable because the large set of information given by audit logs might become too numerous to be efficiently used. To make a large set of information processed easier to read, Lifecycle Workflows provide summaries for quick use. You can view these history summaries in three ways:
+Lifecycle Workflows introduce a history feature based on summaries and details. These history summaries allow you to quickly see who a workflow ran for, and whether or not the run was successful. This is valuable because the large set of information given by audit logs might become too numerous to use efficiently. To make this large set of processed information easier to read, Lifecycle Workflows provide summaries for quick use. You can view these history summaries in three ways:
-- **Users summary**: Shows a summary of users processed by a workflow, and which tasks failed, successfully, and totally ran for each specific user.
+- **Users summary**: Shows a summary of users processed by a workflow. Successful, failed, and total run information is shown for each specific user.
- **Runs summary**: Shows a summary of workflow runs in terms of the workflow. Successful, failed, and total task information for workflow runs is noted.
- **Tasks summary**: Shows a summary of tasks processed by a workflow, and which tasks failed, succeeded, and the total that ran in the workflow.
-Summaries allow you to quickly gain details about how a workflow ran for itself, or users, without going into further details in logs. For a step by step guide on getting this information, see [Check the status of a workflow (Preview)](check-status-workflow.md)
+Summaries allow you to quickly get details about how a workflow ran, either overall or for specific users, without going into further detail in logs. For a step-by-step guide on getting this information, see [Check the status of a workflow (Preview)](check-status-workflow.md).
## Users Summary information
Task detailed history information allows you to filter for specific information
- **Completed date**: You can filter a specific range from as short as 24 hours up to 30 days of when the workflow ran.
- **Tasks**: You can filter based on specific task names.
-Separating processing of the workflow from the tasks is important because, in a workflow, processing a user certain tasks could be successful, while others could fail. Whether or not a task runs after a failed task in a workflow depends on parameters such as enabling continue On Error, and their placement within the workflow. For more information, see [Common task parameters](lifecycle-workflow-tasks.md#common-task-parameters-preview).
+Separating processing of the workflow from the tasks is important because, when a workflow processes a user, certain tasks could succeed while others fail. Whether or not a task runs after a failed task in a workflow depends on parameters such as enabling continue on error, and the task's placement within the workflow. For more information, see [Common task parameters (preview)](lifecycle-workflow-tasks.md#common-task-parameters-preview).
## Next steps
Separating processing of the workflow from the tasks is important because, in a
- [taskProcessingResult resource type](/graph/api/resources/identitygovernance-taskprocessingresult?view=graph-rest-beta&preserve-view=true)
- [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md)
- [Lifecycle Workflow templates](lifecycle-workflow-templates.md)-
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Read the following sections for known issues and workarounds during UPN change.
## Apps known issues and workarounds
-Software as a service (SaaS) and line of business (LoB) applications often rely on UPNs to find users and store user profile information, including roles. Applications potentially affected by UNP changes use just-in-time (JIT) provisioning to create a user profile when users initially sign in to the app.
+Software as a service (SaaS) and line of business (LoB) applications often rely on UPNs to find users and store user profile information, including roles. Applications potentially affected by UPN changes use just-in-time (JIT) provisioning to create a user profile when users initially sign in to the app.
Learn more:
active-directory Docusign Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-provisioning-tutorial.md
It starts the initial synchronization of any users assigned to DocuSign in the U
For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md). ## Troubleshooting Tips
-* Provisioning a role or permission profile for a user in Docusign can be accomplished by using an expression in your attribute mappings using the [switch](../app-provisioning/functions-for-customizing-application-data.md#switch) and [singleAppRoleAssignment](../app-provisioning/functions-for-customizing-application-data.md#singleapproleassignment) functions. For example, the expression below will provision the ID "8032066" when a user has the "DS Admin" role assigned in Azure AD. It will not provision any permission profile if the user isn't assigned a role on the Azure AD side. The ID can be retrieved from the DocuSign [portal](https://support.docusign.com/articles/Default-settings-for-out-of-the-box-DocuSign-Permission-Profiles).
+* Provisioning a role or permission profile for a user in Docusign can be accomplished by using an expression in your attribute mappings using the [switch](../app-provisioning/functions-for-customizing-application-data.md#switch) and [singleAppRoleAssignment](../app-provisioning/functions-for-customizing-application-data.md#singleapproleassignment) functions. For example, the expression below will provision the ID "8032066" when a user has the "DS Admin" role assigned in Azure AD. It will not provision any permission profile if the user isn't assigned a role on the Azure AD side. The ID can be retrieved from the DocuSign [portal](https://support.docusign.com/).
Switch(SingleAppRoleAssignment([appRoleAssignments])," ", "DS Admin", "8032066")
active-directory Spring Cm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/spring-cm-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://na11.springcm.com/atlas/SSO/SSOEndpoint.ashx?aid=<IDENTIFIER>`

> [!NOTE]
- > The value is not real. Update the value with the actual Sign-On URL. Contact [SpringCM Client support team](https://knowledge.springcm.com/support) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The value is not real. Update the value with the actual Sign-On URL. Contact [SpringCM Client support team](https://support.docusign.com/s/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Raw)** from the given options as per your requirement and save it on your computer.
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md
ms.devlang: azurecli
You need to establish an authentication mechanism when using [Azure Container Registry (ACR)][acr-intro] with Azure Kubernetes Service (AKS). This operation is implemented as part of the Azure CLI, Azure PowerShell, and Azure portal experiences by granting the required permissions to your ACR. This article provides examples for configuring authentication between these Azure services.
-You can set up the AKS to ACR integration using the Azure CLI or Azure PowerShell. The AKS to ACR integration assigns the [**AcrPull** role][acr-pull] to the [Azure Active Directory (Azure AD) **managed identity**][aad-identity] associated with your AKS cluster.
+You can set up the AKS to ACR integration using the Azure CLI or Azure PowerShell. The AKS to ACR integration assigns the [**AcrPull** role][acr-pull] to the [Azure Active Directory (Azure AD) **managed identity**][aad-identity] associated with the agent pool in your AKS cluster. For more information on AKS managed identities, see [Summary of managed identities][summary-msi].
> [!IMPORTANT]
-> There is a latency issue with Azure Active Directory groups when attaching ACR. If the AcrPull role is granted to an Azure AD group and the kubelet identity is added to the group to complete the RBAC configuration, there might be up to a one-hour delay before the RBAC group takes effect. We recommended you to use the [Bring your own kubelet identity][byo-kubelet-identity] as a workaround. You can pre-create a user-assigned identity, add it to the Azure AD group, then use the identity as the kubelet identity to create an AKS cluster. This ensures the identity is added to the Azure AD group before a token is generated by kubelet, which avoids the latency issue.
+> There is a latency issue with Azure Active Directory groups when attaching ACR. If the AcrPull role is granted to an Azure AD group and the kubelet identity is added to the group to complete the RBAC configuration, there may be a delay before the RBAC group takes effect. If you are running automation that requires the RBAC configuration to be complete, we recommend you use [Bring your own kubelet identity][byo-kubelet-identity] as a workaround. You can pre-create a user-assigned identity, add it to the Azure AD group, then use the identity as the kubelet identity to create an AKS cluster. This ensures the identity is added to the Azure AD group before a token is generated by kubelet, which avoids the latency issue.
> [!NOTE] > This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][image-pull-secret].
aks Command Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/command-invoke.md
Last updated 1/14/2022
# Use `command invoke` to access a private Azure Kubernetes Service (AKS) cluster
-Accessing a private AKS cluster requires that you connect to that cluster either from the cluster virtual network, from a peered network, or via a configured private endpoint. These approaches require configuring a VPN, Express Route, deploying a *jumpbox* within the cluster virtual network, or creating a private endpoint inside of another virtual network. Alternatively, you can use `command invoke` to access private clusters without having to configure a VPN or Express Route. Using `command invoke` allows you to remotely invoke commands like `kubectl` and `helm` on your private cluster through the Azure API without directly connecting to the cluster. Permissions for using `command invoke` are controlled through the `Microsoft.ContainerService/managedClusters/runcommand/action` and `Microsoft.ContainerService/managedclusters/commandResults/read` roles.
+Accessing a private AKS cluster requires that you connect to that cluster either from the cluster virtual network, from a peered network, or via a configured private endpoint. These approaches require configuring a VPN, Express Route, deploying a *jumpbox* within the cluster virtual network, or creating a private endpoint inside of another virtual network. Alternatively, you can use `command invoke` to access private clusters without having to configure a VPN or Express Route. Using `command invoke` allows you to remotely invoke commands like `kubectl` and `helm` on your private cluster through the Azure API without directly connecting to the cluster. Permissions for using `command invoke` are controlled through the `Microsoft.ContainerService/managedClusters/runcommand/action` and `Microsoft.ContainerService/managedclusters/commandResults/read` actions.
## Prerequisites
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
To run your applications and supporting services, you need a Kubernetes *node*.
The Azure VM size for your nodes defines CPUs, memory, size, and the storage type available (such as high-performance SSD or regular HDD). Plan the node size around whether your applications may require large amounts of CPU and memory or high-performance storage. Scale out the number of nodes in your AKS cluster to meet demand. For more information on scaling, see [Scaling options for applications in AKS](concepts-scale.md).
-In AKS, the VM image for your cluster's nodes is based on Ubuntu Linux or Windows Server 2019. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs. Agent nodes are billed as standard VMs, so any VM size discounts (including [Azure reservations][reservation-discounts]) are automatically applied.
+In AKS, the VM image for your cluster's nodes is based on Ubuntu Linux, [Mariner Linux](use-mariner.md), or Windows Server 2019. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs. Agent nodes are billed as standard VMs, so any VM size discounts (including [Azure reservations][reservation-discounts]) are automatically applied.
For managed disks, the default disk size and performance will be assigned according to the selected VM SKU and vCPU count. For more information, see [Default OS disk sizing](cluster-configuration.md#default-os-disk-sizing).
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
You can control access to the API server using Kubernetes role-based access cont
## Node security
-AKS nodes are Azure virtual machines (VMs) that you manage and maintain.
-* Linux nodes run an optimized Ubuntu distribution using the `containerd` or Docker container runtime.
+AKS nodes are Azure virtual machines (VMs) that you manage and maintain.
+* Linux nodes run optimized versions of Ubuntu or Mariner.
* Windows Server nodes run an optimized Windows Server 2019 release using the `containerd` or Docker container runtime.

When an AKS cluster is created or scaled up, the nodes are automatically deployed with the latest OS security updates and configurations.
aks Csi Migrate In Tree Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md
The following are important considerations to evaluate:
Replace **pvName** with the name of your selected PersistentVolume. Alternatively, if you want to update the reclaimPolicy for multiple PVs, create a file named **patchReclaimPVs.sh** and copy in the following code.

```bash
- # Patch the Persistent Volume in case ReclaimPolicy is Delete
#!/bin/sh
+ # Patch the Persistent Volume in case ReclaimPolicy is Delete
namespace=$1
i=1
for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
Migration from in-tree to CSI is supported by creating a static volume.
Replace **pvName** with the name of your selected PersistentVolume. Alternatively, if you want to update the reclaimPolicy for multiple PVs, create a file named **patchReclaimPVs.sh** and copy in the following code.

```bash
- # Patch the Persistent Volume in case ReclaimPolicy is Delete
#!/bin/sh
+ # Patch the Persistent Volume in case ReclaimPolicy is Delete
namespace=$1
i=1
for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
aks Dapr Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-migration.md
description: Learn how to migrate your managed clusters from Dapr OSS to the Dap
- Last updated 11/21/2022
aks Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-overview.md
Title: Dapr extension for Azure Kubernetes Service (AKS) overview description: Learn more about using Dapr on your Azure Kubernetes Service (AKS) cluster to develop applications. - Last updated 10/11/2022
aks Dapr Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md
Title: Configure the Dapr extension for your Azure Kubernetes Service (AKS) and
description: Learn how to configure the Dapr extension specifically for your Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes project - Last updated 01/09/2023
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
Title: Deploy an Azure container offer from Azure Marketplace
description: Learn how to deploy Azure container offers from Azure Marketplace on an Azure Kubernetes Service (AKS) cluster. - Last updated 09/30/2022
aks Developer Best Practices Pod Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/developer-best-practices-pod-security.md
Title: Developer best practices - Pod security in Azure Kubernetes Services (AKS) description: Learn the developer best practices for how to secure pods in Azure Kubernetes Service (AKS)- Last updated 10/27/2022
aks Developer Best Practices Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/developer-best-practices-resource-management.md
Title: Resource management best practices description: Learn the application developer best practices for resource management in Azure Kubernetes Service (AKS)-- Last updated 03/15/2021- # Best practices for application developers to manage resources in Azure Kubernetes Service (AKS)
aks Draft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/draft.md
Title: Draft extension for Azure Kubernetes Service (AKS) (preview)
description: Install and use Draft on your Azure Kubernetes Service (AKS) cluster using the Draft extension. - Last updated 5/02/2022
aks Egress Outboundtype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md
Title: Customize cluster egress with outbound types in Azure Kubernetes Service (AKS) description: Learn how to define a custom egress route in Azure Kubernetes Service (AKS) -
aks Egress Udr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-udr.md
Title: Customize cluster egress with a user-defined routing table description: Learn how to define a custom egress route in Azure Kubernetes Service (AKS) with a routing table.- Last updated 06/29/2020
aks Enable Fips Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-fips-nodes.md
Title: Enable Federal Information Process Standard (FIPS) for Azure Kubernetes S
description: Learn how to enable Federal Information Process Standard (FIPS) for Azure Kubernetes Service (AKS) node pools. - Last updated 07/19/2022
aks Enable Host Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-host-encryption.md
Title: Enable host-based encryption on Azure Kubernetes Service (AKS) description: Learn how to configure a host-based encryption in an Azure Kubernetes Service (AKS) cluster- Last updated 04/26/2021
aks Free Standard Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/free-standard-pricing-tiers.md
Title: Azure Kubernetes Service (AKS) Free and Standard pricing tiers for cluster management description: Learn about the Azure Kubernetes Service (AKS) Free and Standard pricing tiers for cluster management- Last updated 02/17/2023
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
Title: Use GPUs on Azure Kubernetes Service (AKS) description: Learn how to use GPUs for high performance compute or graphics-intensive workloads on Azure Kubernetes Service (AKS)- Last updated 08/06/2021
aks Gpu Multi Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-multi-instance.md
Title: Multi-instance GPU Node pool description: Learn how to create a Multi-instance GPU Node pool and schedule tasks on it- Last updated 1/24/2022
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
recommendations: false
description: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster - Last updated 12/21/2022 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes
This article uses the Azure Marketplace offer for Open/WebSphere Liberty to acce
* This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
* If running the commands in this guide locally (instead of Azure Cloud Shell):
- * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, macOS, Windows Subsystem for Linux).
+ * Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, Mariner, macOS, Windows Subsystem for Linux).
  * Install a Java SE implementation (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
  * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
  * Install [Docker](https://docs.docker.com/get-docker/) for your OS.
aks Http Application Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md
Title: HTTP application routing add-on on Azure Kubernetes Service (AKS) description: Use the HTTP application routing add-on to access applications deployed on Azure Kubernetes Service (AKS).-
aks Http Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md
Title: Configuring Azure Kubernetes Service (AKS) nodes with an HTTP proxy description: Use the HTTP proxy configuration feature for Azure Kubernetes Service (AKS) nodes.-
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Title: Use Image Cleaner on Azure Kubernetes Service (AKS)
description: Learn how to use Image Cleaner to clean up stale images on Azure Kubernetes Service (AKS) - Last updated 02/07/2023
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
Title: Create an ingress controller in Azure Kubernetes Service (AKS)
description: Learn how to create and configure an ingress controller in an Azure Kubernetes Service (AKS) cluster. - Last updated 02/23/2023
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
Title: Use TLS with an ingress controller on Azure Kubernetes Service (AKS) description: Learn how to install and configure an ingress controller that uses TLS in an Azure Kubernetes Service (AKS) cluster.-
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
Title: Add-ons, extensions, and other integrations with Azure Kubernetes Service description: Learn about the add-ons, extensions, and open-source integrations you can use with Azure Kubernetes Service.- Last updated 02/22/2022
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
description: Learn how to create and use an internal load balancer to expose your services with Azure Kubernetes Service (AKS). - Last updated 02/22/2023
internal-app LoadBalancer 10.0.248.59 10.240.0.7 80:30555/TCP 2m
When you specify an IP address for the load balancer, the specified IP address must reside in the same subnet as the AKS cluster, but it can't already be assigned to a resource. For example, you shouldn't use an IP address in the range designated for the Kubernetes subnet within the AKS cluster.
-You can use the [`az network vnet subnet list`](https://learn.microsoft.com/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-list) Azure CLI command or the [`Get-AzVirtualNetworkSubnetConfig`](https://learn.microsoft.com/powershell/module/az.network/get-azvirtualnetworksubnetconfig?view=azps-9.4.0) PowerShell cmdlet to get the subnets in your virtual network.
+You can use the [`az network vnet subnet list`][az-network-vnet-subnet-list] Azure CLI command or the [`Get-AzVirtualNetworkSubnetConfig`][get-azvirtualnetworksubnetconfig] PowerShell cmdlet to get the subnets in your virtual network.
For more information on subnets, see [Add a node pool with a unique subnet][unique-subnet].
To learn more about Kubernetes services, see the [Kubernetes services documentat
[different-subnet]: #specify-a-different-subnet [aks-vnet-subnet]: configure-kubenet.md#create-a-virtual-network-and-subnet [unique-subnet]: use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet
+[az-network-vnet-subnet-list]: /cli/azure/network/vnet/subnet#az-network-vnet-subnet-list
+[get-azvirtualnetworksubnetconfig]: /powershell/module/az.network/get-azvirtualnetworksubnetconfig
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Title: Introduction to Azure Kubernetes Service description: Learn the features and benefits of Azure Kubernetes Service to deploy and manage container-based applications in Azure.- Last updated 11/18/2022
aks Keda About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-about.md
Title: Kubernetes Event-driven Autoscaling (KEDA) (Preview) description: Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on.- Last updated 05/24/2022
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using an ARM template description: Use an ARM template to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS).- Last updated 10/10/2022
aks Keda Deploy Add On Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md
Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using Az
description: Use Azure CLI to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS). - Last updated 10/10/2022
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
Title: Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS) (Preview) description: Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS) (Preview).- Last updated 05/24/2022
aks Kubelet Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubelet-logs.md
Title: View kubelet logs in Azure Kubernetes Service (AKS) description: Learn how to view troubleshooting information in the kubelet logs from Azure Kubernetes Service (AKS) nodes- Last updated 03/05/2019
aks Kubernetes Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-action.md
Title: Build, test, and deploy containers to Azure Kubernetes Service using GitHub Actions description: Learn how to use GitHub Actions to deploy your container to Kubernetes- Last updated 08/02/2022
aks Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-helm.md
Title: Install existing applications with Helm in AKS description: Learn how to use the Helm packaging tool to deploy containers in an Azure Kubernetes Service (AKS) cluster- Last updated 12/07/2020
aks Kubernetes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-portal.md
Title: Access Kubernetes resources from the Azure portal description: Learn how to interact with Kubernetes resources to manage an Azure Kubernetes Service (AKS) cluster from the Azure portal.- Last updated 12/16/2020
aks Kubernetes Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-service-principal.md
Title: Use a service principal with Azure Kubernetes Services (AKS) description: Create and manage an Azure Active Directory service principal with a cluster in Azure Kubernetes Service (AKS)- Last updated 06/08/2022
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
Title: Restrict egress traffic in Azure Kubernetes Service (AKS) description: Learn what ports and addresses are required to control egress traffic in Azure Kubernetes Service (AKS)-
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
Title: Use a public load balancer description: Learn how to use a public load balancer with a Standard SKU to expose your services with Azure Kubernetes Service (AKS).- Last updated 02/22/2023
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
Title: Abort an Azure Kubernetes Service (AKS) long running operation (preview)
-description: Learn how to terminate a long running operation on an Azure Kubernetes Service cluster at the node pool or cluster level.
-
+description: Learn how to terminate a long running operation on an Azure Kubernetes Service cluster at the node pool or cluster level.
Last updated 11/23/2022
aks Manage Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-azure-rbac.md
Title: Manage Azure RBAC in Kubernetes From Azure description: Learn how to use Azure RBAC for Kubernetes Authorization with Azure Kubernetes Service (AKS).- Last updated 02/09/2021
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-aad.md
Title: Use Azure AD in Azure Kubernetes Service
-description: Learn how to use Azure AD in Azure Kubernetes Service (AKS)
-
+description: Learn how to use Azure AD in Azure Kubernetes Service (AKS)
Last updated 01/23/2023
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
Title: Monitoring AKS data reference description: Important reference material needed when you monitor AKS - Last updated 07/18/2022
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
Title: Managed NAT Gateway
description: Learn how to create an AKS cluster with managed NAT integration - Last updated 10/26/2021
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
Title: Connect to Azure Kubernetes Service (AKS) cluster nodes description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks.- Last updated 11/3/2022- #Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
aks Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-auto-repair.md
Title: Automatically repairing Azure Kubernetes Service (AKS) nodes description: Learn about node auto-repair functionality, and how AKS fixes broken worker nodes.- Last updated 03/11/2021
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
Title: Upgrade Azure Kubernetes Service (AKS) node images description: Learn how to upgrade the images on AKS cluster nodes and node pools.- Last updated 11/25/2020
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
Title: Snapshot Azure Kubernetes Service (AKS) node pools description: Learn how to snapshot AKS cluster node pools and create clusters and node pools from a snapshot.- Last updated 09/11/2020
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
Title: Handle Linux node reboots with kured description: Learn how to update Linux nodes and automatically reboot them with kured in Azure Kubernetes Service (AKS)- Last updated 02/28/2019
You need the Azure CLI version 2.0.59 or later installed and configured. Run `az
## Understand the AKS node update experience
-In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu image, with the OS configured to automatically check for updates every day. If security or kernel updates are available, they are automatically downloaded and installed.
+In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu or Mariner image, with the OS configured to automatically check for updates every day. If security or kernel updates are available, they are automatically downloaded and installed.
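When an update requires a restart, Ubuntu signals it by writing a */var/run/reboot-required* sentinel file, which is the file kured watches for. As a minimal sketch, assuming you have a shell on a node (for example, through a debug pod), you can check for the sentinel yourself:

```console
# If this file exists, the node has pending updates that require a reboot
ls -l /var/run/reboot-required
```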
![AKS node update and reboot process with kured](media/node-updates-kured/node-reboot-process.png)
aks Node Upgrade Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md
Title: Handle AKS node upgrades with GitHub Actions description: Learn how to update AKS nodes using GitHub Actions- Last updated 11/27/2020
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
Title: Open Service Mesh description: Open Service Mesh (OSM) in Azure Kubernetes Service (AKS)- Last updated 12/20/2021
aks Open Service Mesh Binary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-binary.md
Title: Download the OSM client Library description: Download and configure the Open Service Mesh (OSM) client library- Last updated 8/26/2021
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md
Title: Install the Open Service Mesh add-on by using the Azure CLI description: Use Azure CLI commands to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster.- Last updated 11/10/2021
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
Title: Deploy the Open Service Mesh add-on by using Bicep description: Use a Bicep template to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS).- Last updated 9/20/2021
aks Open Service Mesh Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-integrations.md
Title: Integrations with Open Service Mesh on Azure Kubernetes Service (AKS) description: Integrations with Open Service Mesh on Azure Kubernetes Service (AKS)- Last updated 03/23/2022
aks Open Service Mesh Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-troubleshoot.md
Title: Troubleshooting Open Service Mesh description: How to troubleshoot Open Service Mesh- Last updated 8/26/2021
aks Open Service Mesh Uninstall Add On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-uninstall-add-on.md
Title: Uninstall the Open Service Mesh (OSM) add-on description: Deploy Open Service Mesh on Azure Kubernetes Service (AKS) using Azure CLI- Last updated 11/10/2021
aks Openfaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/openfaas.md
Title: Use OpenFaaS with Azure Kubernetes Service (AKS) description: Learn how to deploy and use OpenFaaS on an Azure Kubernetes Service (AKS) cluster to build serverless functions with containers. - Last updated 03/05/2018
aks Operator Best Practices Advanced Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-advanced-scheduler.md
Title: Best practices for scheduler features description: Learn the cluster operator best practices for using advanced scheduler features such as taints and tolerations, node selectors and affinity, or inter-pod affinity and anti-affinity in Azure Kubernetes Service (AKS)- Last updated 11/11/2022
aks Operator Best Practices Cluster Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-isolation.md
Title: Best practices for cluster isolation description: Learn the cluster operator best practices for isolation in Azure Kubernetes Service (AKS)- Last updated 03/09/2021
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
Title: Best practices for cluster security description: Learn the cluster operator best practices for how to manage cluster security and upgrades in Azure Kubernetes Service (AKS)- Last updated 04/07/2021
aks Operator Best Practices Container Image Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-container-image-management.md
Title: Operator best practices - Container image management in Azure Kubernetes Services (AKS) description: Learn the cluster operator best practices for how to manage and secure container images in Azure Kubernetes Service (AKS)- Last updated 03/11/2021
aks Operator Best Practices Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-identity.md
Title: Best practices for managing identity description: Learn the cluster operator best practices for how to manage authentication and authorization for clusters in Azure Kubernetes Service (AKS)- Last updated 09/29/2022
aks Operator Best Practices Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-multi-region.md
Title: Best practices for AKS business continuity and disaster recovery description: Learn a cluster operator's best practices to achieve maximum uptime for your applications, providing high availability and preparing for disaster recovery in Azure Kubernetes Service (AKS).-- Last updated 03/11/2021 #Customer intent: As an AKS cluster operator, I want to plan for business continuity or disaster recovery to help protect my cluster from region problems. + # Best practices for business continuity and disaster recovery in Azure Kubernetes Service (AKS) As you manage clusters in Azure Kubernetes Service (AKS), application uptime becomes important. By default, AKS provides high availability by using multiple nodes in a [Virtual Machine Scale Set (VMSS)](../virtual-machine-scale-sets/overview.md). But these multiple nodes don't protect your system from a region failure. To maximize your uptime, plan ahead to maintain business continuity and prepare for disaster recovery.
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-network.md
Title: Best practices for network resources description: Learn the cluster operator best practices for virtual network resources and connectivity in Azure Kubernetes Service (AKS)- Last updated 03/10/2021
aks Operator Best Practices Run At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-run-at-scale.md
Title: Best practices for running Azure Kubernetes Service (AKS) at scale description: Learn the AKS cluster operator best practices and special considerations for running large clusters at 500 node scale and beyond - Last updated 10/04/2022
aks Operator Best Practices Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-scheduler.md
Title: Operator best practices - Basic scheduler features in Azure Kubernetes Services (AKS) description: Learn the cluster operator best practices for using basic scheduler features such as resource quotas and pod disruption budgets in Azure Kubernetes Service (AKS)- Last updated 03/09/2021
aks Operator Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-storage.md
Title: Best practices for storage and backup description: Learn the cluster operator best practices for storage, data encryption, and backups in Azure Kubernetes Service (AKS)- Last updated 11/30/2022
aks Out Of Tree https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/out-of-tree.md
Title: Enable Cloud Controller Manager description: Learn how to enable the Out of Tree cloud provider- Last updated 04/08/2022
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
Title: Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster (preview) description: Learn how to use Planned Maintenance to schedule and control cluster and node image upgrades in Azure Kubernetes Service (AKS).- Last updated 01/17/2023
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service cluster description: Learn how to create a private Azure Kubernetes Service (AKS) cluster- Last updated 01/25/2023
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
Title: Deploy an application with the Dapr cluster extension for Azure Kubernete
description: Use the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes to deploy an application - Last updated 05/03/2022
aks Quickstart Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-event-grid.md
Title: Subscribe to Azure Kubernetes Service events with Azure Event Grid description: Use Azure Event Grid to subscribe to Azure Kubernetes Service events-- Last updated 07/12/2021- # Quickstart: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md
Title: Develop on Azure Kubernetes Service (AKS) with Helm description: Use Helm with AKS and Azure Container Registry to package and run application containers in a cluster.-- Last updated 12/17/2021- # Quickstart: Develop on Azure Kubernetes Service (AKS) with Helm
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quotas-skus-regions.md
Title: Limits for resources, SKUs, regions description: Learn about the default quotas, restricted node VM SKU sizes, and region availability of the Azure Kubernetes Service (AKS).- Last updated 03/25/2021
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
Title: RDP to AKS Windows Server nodes description: Learn how to create an RDP connection with Azure Kubernetes Service (AKS) cluster Windows Server nodes for troubleshooting and maintenance tasks.- Last updated 07/06/2022
aks Reduce Latency Ppg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/reduce-latency-ppg.md
Title: Use proximity placement groups to reduce latency for Azure Kubernetes Service (AKS) clusters description: Learn how to use proximity placement groups to reduce latency for your AKS cluster workloads.-- Last updated 10/19/2020
aks Release Tracker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/release-tracker.md
Title: AKS release tracker description: Learn how to determine which Azure regions have the weekly AKS release deployments rolled out in real time. - Last updated 05/24/2022
aks Resize Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md
Title: Resize node pools in Azure Kubernetes Service (AKS) description: Learn how to resize node pools for a cluster in Azure Kubernetes Service (AKS) by cordoning and draining.- Last updated 02/08/2023 #Customer intent: As a cluster operator, I want to resize my node pools so that I can run more or larger workloads.
aks Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-cluster.md
Title: Scale an Azure Kubernetes Service (AKS) cluster description: Learn how to scale the number of nodes in an Azure Kubernetes Service (AKS) cluster.- Last updated 06/29/2022
aks Scale Down Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-down-mode.md
Title: Use Scale-down Mode for your Azure Kubernetes Service (AKS) cluster description: Learn how to use Scale-down Mode in Azure Kubernetes Service (AKS).- Last updated 09/01/2021
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service
description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 02/14/2023 - # Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS)
aks Spot Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/spot-node-pool.md
Title: Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster description: Learn how to add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster.-- Last updated 01/21/2022
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
Title: Start and Stop an Azure Kubernetes Service (AKS) description: Learn how to stop or start an Azure Kubernetes Service (AKS) cluster.- Last updated 08/09/2021
aks Start Stop Nodepools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-nodepools.md
Title: Start and stop a node pool on Azure Kubernetes Service (AKS) description: Learn how to start or stop a node pool on Azure Kubernetes Service (AKS).- Last updated 10/25/2021
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
Title: Use static IP with load balancer
+ Title: Use a static IP with a load balancer in Azure Kubernetes Service (AKS)
description: Learn how to create and use a static IP address with the Azure Kubernetes Service (AKS) load balancer. - Previously updated : 11/14/2020- Last updated : 02/27/2023 #Customer intent: As a cluster operator or developer, I want to create and manage static IP address resources in Azure that I can use beyond the lifecycle of an individual Kubernetes service deployed in an AKS cluster. # Use a static public IP address and DNS label with the Azure Kubernetes Service (AKS) load balancer
-By default, the public IP address assigned to a load balancer resource created by an AKS cluster is only valid for the lifespan of that resource. If you delete the Kubernetes service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can create and use a static public IP address.
+When you create a load balancer resource in an Azure Kubernetes Service (AKS) cluster, the public IP address assigned to it is only valid for the lifespan of that resource. If you delete the Kubernetes service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can create and use a static public IP address.
This article shows you how to create a static public IP address and assign it to your Kubernetes service.

## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-This article covers using a *Standard* SKU IP with a *Standard* SKU load balancer. For more information, see [IP address types and allocation methods in Azure][ip-sku].
+* This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+* You need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* This article covers using a *Standard* SKU IP with a *Standard* SKU load balancer. For more information, see [IP address types and allocation methods in Azure][ip-sku].
## Create a static IP address
-Create a static public IP address with the [az network public ip create][az-network-public-ip-create] command. The following creates a static IP resource named *myAKSPublicIP* in the *myResourceGroup* resource group:
+1. Use the [`az aks show`][az-aks-show] command to get the node resource group name of your AKS cluster, which follows the format `MC_<resource group name>_<AKS cluster name>_<region>`.
-```azurecli-interactive
-az network public-ip create \
- --resource-group myResourceGroup \
- --name myAKSPublicIP \
- --sku Standard \
- --allocation-method static
-```
+ ```azurecli-interactive
+ az aks show \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --query nodeResourceGroup \
+ --output tsv
+ ```
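   Assuming the example resource group and cluster names used in this article, the command returns a node resource group name like the following output:

   ```output
   MC_myResourceGroup_myAKSCluster_eastus
   ```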
-> [!NOTE]
-> If you are using a *Basic* SKU load balancer in your AKS cluster, use *Basic* for the *sku* parameter when defining a public IP. Only *Basic* SKU IPs work with the *Basic* SKU load balancer and only *Standard* SKU IPs work with *Standard* SKU load balancers.
-
-The IP address is displayed, as shown in the following condensed example output:
-
-```json
-{
- "publicIp": {
- ...
- "ipAddress": "40.121.183.52",
- ...
- }
-}
-```
+2. Use the [`az network public-ip create`][az-network-public-ip-create] command to create a static public IP address. The following example creates a static IP resource named *myAKSPublicIP* in the *MC_myResourceGroup_myAKSCluster_eastus* node resource group.
-You can later get the public IP address using the [az network public-ip list][az-network-public-ip-list] command. Specify the name of the node resource group and public IP address you created, and query for the *ipAddress* as shown in the following example:
+ ```azurecli-interactive
+ az network public-ip create \
+ --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+ --name myAKSPublicIP \
+ --sku Standard \
+ --allocation-method static
+ ```
-```azurecli-interactive
-$ az network public-ip show --resource-group myResourceGroup --name myAKSPublicIP --query ipAddress --output tsv
+ > [!NOTE]
+ > If you're using a *Basic* SKU load balancer in your AKS cluster, use *Basic* for the `--sku` parameter when defining a public IP. Only *Basic* SKU IPs work with the *Basic* SKU load balancer and only *Standard* SKU IPs work with *Standard* SKU load balancers.
-40.121.183.52
-```
+3. After you create the static public IP address, use the [`az network public-ip show`][az-network-public-ip-show] command to get the IP address. Specify the name of the node resource group and public IP address you created, and query for the *ipAddress*.
+
+ ```azurecli-interactive
+ az network public-ip show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP --query ipAddress --output tsv
+ ```
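   The command returns just the IP address value; with the example IP address used later in this article, the output looks like the following:

   ```output
   40.121.183.52
   ```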
## Create a service using the static IP address
-Before creating a service, ensure the cluster identity used by the AKS cluster has delegated permissions to the other resource group. For example:
+1. Before creating a service, use the [`az role assignment create`][az-role-assignment-create] command to ensure the cluster identity used by the AKS cluster has delegated permissions to the node resource group.
+
+ ```azurecli-interactive
+ az role assignment create \
+ --assignee <Client ID> \
+ --role "Network Contributor" \
+ --scope /subscriptions/<subscription id>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus
+ ```
+
+ > [!IMPORTANT]
+ > If you customized your outbound IP, make sure your cluster identity has permissions to both the outbound public IP and the inbound public IP.
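   The `<Client ID>` placeholder is the identity your cluster runs as. As a sketch, assuming a cluster that uses a system-assigned managed identity, you might look up its principal ID to use as the assignee:

   ```azurecli-interactive
   # Get the principal ID of the cluster's managed identity (managed identity clusters only)
   az aks show \
       --resource-group myResourceGroup \
       --name myAKSCluster \
       --query identity.principalId \
       --output tsv
   ```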
+
+2. Create a file named `load-balancer-service.yaml` and copy in the following YAML, providing the public IP address you created in the previous step and the node resource group name.
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ annotations:
+ service.beta.kubernetes.io/azure-load-balancer-resource-group: MC_myResourceGroup_myAKSCluster_eastus
+ name: azure-load-balancer
+ spec:
+ loadBalancerIP: 40.121.183.52
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-load-balancer
+ ```
+
+3. Use the `kubectl apply` command to create the service and deployment.
-```azurecli-interactive
-az role assignment create \
- --assignee <Client ID> \
- --role "Network Contributor" \
- --scope /subscriptions/<subscription id>/resourceGroups/<resource group name>
+```console
+kubectl apply -f load-balancer-service.yaml
```
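To confirm the service uses the static IP, you can watch the service until the external IP is populated:

```console
kubectl get service azure-load-balancer --watch
```

The *EXTERNAL-IP* column should show the static public IP address you created earlier.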
-> [!IMPORTANT]
-> If you customized your outbound IP make sure your cluster identity has permissions to both the outbound public IP and this inbound public IP.
+## Apply a DNS label to the service
-To create a *LoadBalancer* service with the static public IP address, add the `loadBalancerIP` property and the value of the static public IP address to the YAML manifest. Create a file named `load-balancer-service.yaml` and copy in the following YAML. Provide your own public IP address created in the previous step. The following example also sets the annotation to the resource group named *myResourceGroup*. Provide your own resource group name.
+If your service uses a dynamic or static public IP address, you can use the `service.beta.kubernetes.io/azure-dns-label-name` service annotation to set a public-facing DNS label. This publishes a fully qualified domain name (FQDN) for your service using Azure's public DNS servers and top-level domain. The annotation value must be unique within the Azure location, so it's recommended to use a sufficiently qualified label. Azure automatically appends a default suffix for the location you selected, such as `<location>.cloudapp.azure.com`, to the name you provide, creating the FQDN.
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
-    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
+    service.beta.kubernetes.io/azure-dns-label-name: myserviceuniquelabel
  name: azure-load-balancer
spec:
-  loadBalancerIP: 40.121.183.52
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-load-balancer
```
-Create the service and deployment with the `kubectl apply` command.
+To see the DNS label for your load balancer, run the following command:
```console
-kubectl apply -f load-balancer-service.yaml
+kubectl describe service azure-load-balancer
```
-## Apply a DNS label to the service
-
-If your service is using a dynamic or static public IP address, you can use the service annotation `service.beta.kubernetes.io/azure-dns-label-name` to set a public-facing DNS label. This publishes a fully qualified domain name for your service using Azure's public DNS servers and top-level domain. The annotation value must be unique within the Azure location, so it's recommended to use a sufficiently qualified label.
-
-Azure will then automatically append a default suffix, such as `<location>.cloudapp.azure.com` (where location is the region you selected), to the name you provide, to create the fully qualified DNS name. For example:
+The DNS label is listed under `Annotations`, as shown in the following condensed example output:
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
- annotations:
- service.beta.kubernetes.io/azure-dns-label-name: myserviceuniquelabel
- name: azure-load-balancer
-spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-load-balancer
+```console
+Name: azure-load-balancer
+Namespace: default
+Labels: <none>
+Annotations:              service.beta.kubernetes.io/azure-dns-label-name: myserviceuniquelabel
+...
```
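After the DNS record propagates, you can verify that the label resolves to your service's public IP address. A minimal sketch, assuming the *eastus* region used elsewhere in this article:

```console
nslookup myserviceuniquelabel.eastus.cloudapp.azure.com
```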
-> [!NOTE]
+> [!NOTE]
> To publish the service on your own domain, see [Azure DNS][azure-dns-zone] and the [external-dns][external-dns] project.

## Troubleshoot
-If the static IP address defined in the *loadBalancerIP* property of the Kubernetes service manifest does not exist, or has not been created in the node resource group and no additional delegations configured, the load balancer service creation fails. To troubleshoot, review the service creation events with the [kubectl describe][kubectl-describe] command. Provide the name of the service as specified in the YAML manifest, as shown in the following example:
+If the static IP address defined in the *loadBalancerIP* property of the Kubernetes service manifest doesn't exist or hasn't been created in the node resource group and there are no additional delegations configured, the load balancer service creation fails. To troubleshoot, review the service creation events using the [`kubectl describe`][kubectl-describe] command. Provide the name of the service specified in the YAML manifest, as shown in the following example:
```console
kubectl describe service azure-load-balancer
```
-Information about the Kubernetes service resource is displayed. The *Events* at the end of the following example output indicate that the *user supplied IP Address was not found*. In these scenarios, verify that you have created the static public IP address in the node resource group and that the IP address specified in the Kubernetes service manifest is correct.
+The output will show you information about the Kubernetes service resource. The following example output shows a `Warning` in the `Events`: "`user supplied IP address was not found`." In this scenario, make sure you've created the static public IP address in the node resource group and that the IP address specified in the Kubernetes service manifest is correct.
-```
+```console
Name:                     azure-load-balancer
Namespace:                default
Labels:                   <none>
Events:
## Next steps
-For additional control over the network traffic to your applications, you may want to instead [create an ingress controller][aks-ingress-basic]. You can also [create an ingress controller with a static public IP address][aks-static-ingress].
+For additional control over the network traffic to your applications, you may want to [create an ingress controller][aks-ingress-basic]. You can also [create an ingress controller with a static public IP address][aks-static-ingress].
<!-- LINKS - External --> [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
For additional control over the network traffic to your applications, you may wa
[external-dns]: https://github.com/kubernetes-sigs/external-dns <!-- LINKS - Internal -->
-[aks-faq-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
[az-network-public-ip-create]: /cli/azure/network/public-ip#az_network_public_ip_create
[az-network-public-ip-list]: /cli/azure/network/public-ip#az_network_public_ip_list
[az-network-public-ip-show]: /cli/azure/network/public-ip#az_network_public_ip_show
-[az-aks-show]: /cli/azure/aks#az_aks_show
[aks-ingress-basic]: ingress-basic.md [aks-static-ingress]: ingress-static-ip.md [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
For additional control over the network traffic to your applications, you may wa
[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md [install-azure-cli]: /cli/azure/install-azure-cli [ip-sku]: ../virtual-network/ip-services/public-ip-addresses.md#sku
+[az-role-assignment-create]: /cli/azure/role/assignment#az-role-assignment-create
+[az-aks-show]: /cli/azure/aks#az-aks-show
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Title: Support policies for Azure Kubernetes Service (AKS) description: Learn about Azure Kubernetes Service (AKS) support policies, shared responsibility, and features that are in preview (or alpha or beta).- Last updated 09/18/2020
aks Trusted Access Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/trusted-access-feature.md
Title: Enable Azure resources to access Azure Kubernetes Service (AKS) clusters using Trusted Access description: Learn how to use the Trusted Access feature to enable Azure resources to access Azure Kubernetes Service (AKS) clusters. - Last updated 02/23/2023
aks Tutorial Kubernetes App Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-app-update.md
Title: Kubernetes on Azure tutorial - Update an application description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to update an existing application deployment to AKS with a new version of the application code.- Last updated 12/20/2021
aks Tutorial Kubernetes Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-application.md
Title: Kubernetes on Azure tutorial - Deploy an application description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using a custom image stored in Azure Container Registry.- Last updated 01/04/2023
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md
Title: Kubernetes on Azure tutorial - Deploy a cluster description: In this Azure Kubernetes Service (AKS) tutorial, you create an AKS cluster and use kubectl to connect to the Kubernetes master node.- Last updated 12/01/2022
aks Tutorial Kubernetes Prepare Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-acr.md
Title: Kubernetes on Azure tutorial - Create a container registry description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload a sample application container image.- Previously updated : 12/20/2021- Last updated : 02/27/2023 #Customer intent: As a developer, I want to learn how to create and use a container registry so that I can deploy my own applications to Azure Kubernetes Service.
-# Tutorial: Deploy and use Azure Container Registry
+# Tutorial: Deploy and use Azure Container Registry (ACR)
-Azure Container Registry (ACR) is a private registry for container images. A private container registry lets you securely build and deploy your applications and custom code. In this tutorial, part two of seven, you deploy an ACR instance and push a container image to it. You learn how to:
+Azure Container Registry (ACR) is a private registry for container images. A private container registry allows you to securely build and deploy your applications and custom code. In this tutorial, part two of seven, you deploy an ACR instance and push a container image to it. You learn how to:
> [!div class="checklist"]
-> * Create an Azure Container Registry (ACR) instance
+>
+> * Create an ACR instance
> * Tag a container image for ACR
> * Upload the image to ACR
> * View images in your registry
-In later tutorials, this ACR instance is integrated with a Kubernetes cluster in AKS, and an application is deployed from the image.
+In later tutorials, you integrate your ACR instance with a Kubernetes cluster in AKS, and deploy an application from the image.
## Before you begin
-In the [previous tutorial][aks-tutorial-prepare-app], a container image was created for a simple Azure Voting application. If you have not created the Azure Voting app image, return to [Tutorial 1 – Create container images][aks-tutorial-prepare-app].
+In the [previous tutorial][aks-tutorial-prepare-app], you created a container image for a simple Azure Voting application. If you haven't created the Azure Voting app image, return to [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
### [Azure CLI](#tab/azure-cli)
This tutorial requires that you're running Azure PowerShell version 5.9.0 or lat
## Create an Azure Container Registry
-To create an Azure Container Registry, you first need a resource group. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+Before creating an ACR, you need a resource group. An Azure resource group is a logical container into which you deploy and manage Azure resources.
### [Azure CLI](#tab/azure-cli)
-Create a resource group with the [az group create][az-group-create] command. In the following example, a resource group named *myResourceGroup* is created in the *eastus* region:
+1. Create a resource group with the [`az group create`][az-group-create] command.
```azurecli
az group create --name myResourceGroup --location eastus
```
-Create an Azure Container Registry instance with the [az acr create][az-acr-create] command and provide your own registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the rest of this tutorial, `<acrName>` is used as a placeholder for the container registry name. Provide your own unique registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
+2. Create an ACR instance with the [`az acr create`][az-acr-create] command and provide your own unique registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the rest of this tutorial, `<acrName>` is used as a placeholder for the container registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
```azurecli
az acr create --resource-group myResourceGroup --name <acrName> --sku Basic
az acr create --resource-group myResourceGroup --name <acrName> --sku Basic
### [Azure PowerShell](#tab/azure-powershell)
-Create a resource group with the [New-AzResourceGroup][new-azresourcegroup] cmdlet. In the following example, a resource group named *myResourceGroup* is created in the *eastus* region:
+1. Create a resource group with the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.
```azurepowershell
New-AzResourceGroup -Name myResourceGroup -Location eastus
```
-Create an Azure Container Registry instance with the [New-AzContainerRegistry][new-azcontainerregistry] cmdlet and provide your own registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the rest of this tutorial, `<acrName>` is used as a placeholder for the container registry name. Provide your own unique registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
+2. Create an ACR instance with the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet and provide your own unique registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the rest of this tutorial, `<acrName>` is used as a placeholder for the container registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
```azurepowershell
New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrname> -Sku Basic
New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrname> -Sku
### [Azure CLI](#tab/azure-cli)
-To use the ACR instance, you must first log in. Use the [az acr login][az-acr-login] command and provide the unique name given to the container registry in the previous step.
+Log in to your ACR using the [`az acr login`][az-acr-login] command and provide the unique name given to the container registry in the previous step.
```azurecli
az acr login --name <acrName>
az acr login --name <acrName>
### [Azure PowerShell](#tab/azure-powershell)
-To use the ACR instance, you must first log in. Use the [Connect-AzContainerRegistry][connect-azcontainerregistry] cmdlet and provide the unique name given to the container registry in the previous step.
+Log in to your ACR using the [`Connect-AzContainerRegistry`][connect-azcontainerregistry] cmdlet and provide the unique name given to the container registry in the previous step.
```azurepowershell
Connect-AzContainerRegistry -Name <acrName>
The command returns a *Login Succeeded* message once completed.
## Tag a container image
-To see a list of your current local images, use the [docker images][docker-images] command:
+To see a list of your current local images, use the [`docker images`][docker-images] command.
```console
docker images
```
-The above command's output shows list of your current local images:
+
+The following example output shows a list of the current local Docker images:
```output
REPOSITORY                             TAG        IMAGE ID       CREATED        SIZE
mcr.microsoft.com/oss/bitnami/redis 6.0.8 3a54a920bb6c
tiangolo/uwsgi-nginx-flask             python3.6  a16ce562e863   6 weeks ago    944MB
```
-To use the *azure-vote-front* container image with ACR, the image needs to be tagged with the login server address of your registry. This tag is used for routing when pushing container images to an image registry.
+To use the *azure-vote-front* container image with ACR, you need to tag the image with the login server address of your registry. The tag is used for routing when pushing container images to an image registry.
### [Azure CLI](#tab/azure-cli)
-To get the login server address, use the [az acr list][az-acr-list] command and query for the *loginServer* as follows:
+To get the login server address, use the [`az acr list`][az-acr-list] command and query for the *loginServer*.
```azurecli
az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output table
```
+
### [Azure PowerShell](#tab/azure-powershell)
-To get the login server address, use the [Get-AzContainerRegistry][get-azcontainerregistry] cmdlet and query for the *loginServer* as follows:
+To get the login server address, use the [`Get-AzContainerRegistry`][get-azcontainerregistry] cmdlet and query for the *loginServer*.
```azurepowershell
(Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName>).LoginServer
To get the login server address, use the [Get-AzContainerRegistry][get-azcontain
-Now, tag your local *azure-vote-front* image with the *acrLoginServer* address of the container registry. To indicate the image version, add *:v1* to the end of the image name:
+Then, tag your local *azure-vote-front* image with the *acrLoginServer* address of the container registry. To indicate the image version, add *:v1* to the end of the image name:
```console
docker tag mcr.microsoft.com/azuredocs/azure-vote-front:v1 <acrLoginServer>/azure-vote-front:v1
```
-To verify the tags are applied, run [docker images][docker-images] again.
+To verify the tags are applied, run [`docker images`][docker-images] again.
```console
docker images
```
-An image is tagged with the ACR instance address and a version number.
+The following example output shows an image tagged with the ACR instance address and a version number:
-```
+```console
REPOSITORY                                        TAG         IMAGE ID       CREATED          SIZE
mcr.microsoft.com/azuredocs/azure-vote-front      v1          84b41c268ad9   16 minutes ago   944MB
mycontainerregistry.azurecr.io/azure-vote-front   v1          84b41c268ad9   16 minutes ago   944MB
tiangolo/uwsgi-nginx-flask python3.6 a16ce562e863
## Push images to registry
-With your image built and tagged, push the *azure-vote-front* image to your ACR instance. Use [docker push][docker-push] and provide your own *acrLoginServer* address for the image name as follows:
+Push the *azure-vote-front* image to your ACR instance using the [`docker push`][docker-push] command. Make sure to provide your own *acrLoginServer* address for the image name.
```console
docker push <acrLoginServer>/azure-vote-front:v1
It may take a few minutes to complete the image push to ACR.
### [Azure CLI](#tab/azure-cli)
-To return a list of images that have been pushed to your ACR instance, use the [az acr repository list][az-acr-repository-list] command. Provide your own `<acrName>` as follows:
+To return a list of images that have been pushed to your ACR instance, use the [`az acr repository list`][az-acr-repository-list] command, providing your own `<acrName>`.
```azurecli
az acr repository list --name <acrName> --output table
Result
azure-vote-front
```
-To see the tags for a specific image, use the [az acr repository show-tags][az-acr-repository-show-tags] command as follows:
+To see the tags for a specific image, use the [`az acr repository show-tags`][az-acr-repository-show-tags] command.
```azurecli
az acr repository show-tags --name <acrName> --repository azure-vote-front --output table
v1
### [Azure PowerShell](#tab/azure-powershell)
-To return a list of images that have been pushed to your ACR instance, use the [Get-AzContainerRegistryManifest][get-azcontainerregistrymanifest] cmdlet. Provide your own `<acrName>` as follows:
+To return a list of images that have been pushed to your ACR instance, use the [`Get-AzContainerRegistryManifest`][get-azcontainerregistrymanifest] cmdlet, providing your own `<acrName>`.
```azurepowershell
Get-AzContainerRegistryManifest -RegistryName <acrName> -RepositoryName azure-vote-front
Registry ImageName ManifestsAttributes
<acrName> azure-vote-front {Microsoft.Azure.Commands.ContainerRegistry.Models.PSManifestAttributeBase}
```
-To see the tags for a specific image, use the [Get-AzContainerRegistryTag][get-azcontainerregistrytag] cmdlet as follows:
+To see the tags for a specific image, use the [`Get-AzContainerRegistryTag`][get-azcontainerregistrytag] cmdlet as follows:
```azurepowershell
Get-AzContainerRegistryTag -RegistryName <acrName> -RepositoryName azure-vote-front
Registry ImageName Tags
-You now have a container image that is stored in a private Azure Container Registry instance. This image is deployed from ACR to a Kubernetes cluster in the next tutorial.
- ## Next steps
-In this tutorial, you created an Azure Container Registry and pushed an image for use in an AKS cluster. You learned how to:
+In this tutorial, you created an ACR and pushed an image to use in an AKS cluster. You learned how to:
> [!div class="checklist"]
-> * Create an Azure Container Registry (ACR) instance
+>
+> * Create an ACR instance
> * Tag a container image for ACR
> * Upload the image to ACR
> * View images in your registry
-Advance to the next tutorial to learn how to deploy a Kubernetes cluster in Azure.
+In the next tutorial, you'll learn how to deploy a Kubernetes cluster in Azure.
> [!div class="nextstepaction"]
> [Deploy Kubernetes cluster][aks-tutorial-deploy-cluster]
aks Tutorial Kubernetes Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-app.md
Title: Kubernetes on Azure tutorial - Prepare an application description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to prepare and build a multi-container app with Docker Compose that you can then deploy to AKS.- Last updated 12/06/2022
aks Tutorial Kubernetes Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-scale.md
Title: Kubernetes on Azure tutorial - Scale Application description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to scale nodes and pods in Kubernetes, and implement horizontal pod autoscaling.- Last updated 05/24/2021
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
Title: Kubernetes on Azure tutorial - Upgrade a cluster description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version.- Last updated 11/15/2022
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md
Title: Reset the credentials for a cluster description: Learn how to update or reset the service principal or Azure AD Application credentials for an Azure Kubernetes Service (AKS) cluster.- Last updated 03/11/2019
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Title: Upgrade an Azure Kubernetes Service (AKS) cluster description: Learn how to upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates.- Last updated 12/17/2020
Part of the AKS cluster lifecycle involves performing periodic upgrades to the l
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade]. To upgrade a specific node pool without doing a Kubernetes cluster upgrade, see [Upgrade a specific node pool][specific-nodepool].
->[!WARNING]
-> AKS clusters with Calico enabled should not upgrade to Kubernetes v1.25 preview.
> [!NOTE]
> Any upgrade operation, whether performed manually or automatically, will upgrade the node image version if not already on the latest. The latest version is contingent on a full AKS release, and can be determined by visiting the [AKS release tracker][release-tracker].
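Before you upgrade, you can list the Kubernetes versions available for your cluster. A minimal sketch, assuming the example resource group and cluster names used throughout these articles:

```azurecli-interactive
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
```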
aks Upgrade Windows 2019 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-windows-2019-2022.md
Title: Upgrade Kubernetes workloads from Windows Server 2019 to 2022 description: Learn how to upgrade the OS version for Windows workloads on AKS- Last updated 8/18/2022
aks Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade.md
Title: Overview of upgrading Azure Kubernetes Service (AKS) clusters and compone
description: Learn about the various upgradeable components of an Azure Kubernetes Service (AKS) cluster and how to maintain them. - Last updated 11/11/2022
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview) description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS)- Last updated 11/01/2022
aks Use Azure Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-dedicated-hosts.md
Title: Use Azure Dedicated Hosts in Azure Kubernetes Service (AKS) description: Learn how to create an Azure Dedicated Hosts Group and associate it with Azure Kubernetes Service (AKS)- Last updated 12/01/2022
az vm list-skus -l eastus -r hostGroups/hosts -o table
> First, when using a host group, the node pool fault domain count is always the same as the host group fault domain count. For cluster autoscaling to work with ADH and AKS, make sure your host group fault domain count and capacity are sufficient.
> Secondly, only change the fault domain count from the default of 1 if you know what you're doing, as a misconfiguration could lead to an unscalable configuration.
-[Determine how many hosts you would need based on the expected VM Utilization](https://learn.microsoft.com/azure/virtual-machines/dedicated-host-general-purpose-skus).
+[Determine how many hosts you would need based on the expected VM Utilization][determine-host-based-on-vm-utilization].
-Evaluate [host utilization](https://learn.microsoft.com/azure/virtual-machines/dedicated-hosts-how-to?tabs=cli#check-the-status-of-the-host) to determine the number of allocatable VMs by size before you deploy.
+Evaluate [host utilization][host-utilization-evaluate] to determine the number of allocatable VMs by size before you deploy.
```azurecli-interactive az vm host get-instance-view -g myDHResourceGroup --host-group MyHostGroup --name MyHost
You can also decide to use both availability zones and fault domains.
Now create a dedicated host in the host group. In addition to a name for the host, you're required to provide the SKU for the host. The host SKU captures the supported VM series and the hardware generation for your dedicated host.
-For more information about the host SKUs and pricing, see [Azure Dedicated Host pricing](https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/).
+For more information about the host SKUs and pricing, see [Azure Dedicated Host pricing][azure-dedicated-host-pricing].
Use the `az vm host create` command to create a host. If you set a fault domain count for your host group, you'll be asked to specify the fault domain for your host.
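A minimal sketch, reusing the resource group and host group names from the `az vm host get-instance-view` example above; the SKU value is an assumption, so substitute one your host group supports:

```azurecli-interactive
# Create a dedicated host in the host group (SKU is an assumed example)
az vm host create \
    --resource-group myDHResourceGroup \
    --host-group MyHostGroup \
    --name MyHost \
    --sku DSv3-Type3 \
    --platform-fault-domain 0
```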
In this article, you learned how to create an AKS cluster with a Dedicated host,
<!-- LINKS - External --> [kubernetes-services]: https://kubernetes.io/docs/concepts/services-networking/service/
+[azure-dedicated-host-pricing]: https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/
<!-- LINKS - Internal --> [aks-support-policies]: support-policies.md
In this article, you learned how to create an AKS cluster with a Dedicated host,
[azure-cli-install]: /cli/azure/install-azure-cli [dedicated-hosts]: ../virtual-machines/dedicated-hosts.md [az-vm-host-group-create]: /cli/azure/vm/host/group#az_vm_host_group_create
+[determine-host-based-on-vm-utilization]: ../virtual-machines/dedicated-host-general-purpose-skus.md
+[host-utilization-evaluate]: ../virtual-machines/dedicated-hosts-how-to.md#check-the-status-of-the-host
aks Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-policy.md
Title: Use Azure Policy to secure your cluster description: Use Azure Policy to secure an Azure Kubernetes Service (AKS) cluster.- Last updated 09/12/2022
aks Use Byo Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-byo-cni.md
description: Learn how to utilize Azure Kubernetes Service with your own Container Network Interface (CNI) plugin - Last updated 8/12/2022
aks Use Cvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-cvm.md
Title: Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) description: Learn how to create Confidential Virtual Machines (CVM) node pools with Azure Kubernetes Service (AKS)- Last updated 10/04/2022
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
Title: Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster description: Learn how to enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster for securing your pods.- Last updated 11/01/2021
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Service (AKS) description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS)- Last updated 01/17/2023
aks Use Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-labels.md
Title: Use labels in an Azure Kubernetes Service (AKS) cluster
description: Learn how to use labels in an Azure Kubernetes Service (AKS) cluster. --+ Last updated 03/03/2022- #Customer intent: As a cluster operator, I want to learn how to use labels in an AKS cluster so that I can set scheduling rules for nodes.
aks Use Mariner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-mariner.md
Title: Use the Mariner container host on Azure Kubernetes Service (AKS) description: Learn how to use the Mariner container host on Azure Kubernetes Service (AKS)- Last updated 12/08/2022
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Title: Use multiple node pools in Azure Kubernetes Service (AKS) description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)- Last updated 05/16/2022
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
Title: Secure pod traffic with network policy description: Learn how to secure traffic that flows in and out of pods by using Kubernetes network policies in Azure Kubernetes Service (AKS). Last updated 01/05/2023
aks Use Node Public Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-node-public-ips.md
Title: Use instance-level public IPs in Azure Kubernetes Service (AKS) description: Learn how to manage instance-level public IPs in Azure Kubernetes Service (AKS). Last updated 1/12/2023
aks Use Pod Sandboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md
Title: Pod Sandboxing (preview) with Azure Kubernetes Service (AKS) description: Learn about and deploy Pod Sandboxing (preview), also referred to as Kernel Isolation, on an Azure Kubernetes Service (AKS) cluster. Last updated 02/23/2023
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
Title: Use pod security policies in Azure Kubernetes Service (AKS) description: Learn how to control pod admissions by using PodSecurityPolicy in Azure Kubernetes Service (AKS). Last updated 03/25/2021
aks Use Psa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-psa.md
Title: Use Pod Security Admission in Azure Kubernetes Service (AKS) description: Learn how to enable and use Pod Security Admission with Azure Kubernetes Service (AKS). Last updated 08/08/2022
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md
Title: Use system node pools in Azure Kubernetes Service (AKS) description: Learn how to create and manage system node pools in Azure Kubernetes Service (AKS). Last updated 11/22/2022
aks Use Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-tags.md
Title: Use Azure tags in Azure Kubernetes Service (AKS) description: Learn how to use Azure provider tags to track resources in Azure Kubernetes Service (AKS). Last updated 05/26/2022
aks Use Ultra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-ultra-disks.md
Title: Enable Ultra Disk support on Azure Kubernetes Service (AKS) description: Learn how to enable and configure Ultra Disks in an Azure Kubernetes Service (AKS) cluster. Last updated 1/9/2022
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
Title: Create WebAssembly System Interface (WASI) node pools in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload (preview) description: Learn how to create a WebAssembly System Interface (WASI) node pool in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload on Kubernetes. Last updated 10/19/2022
aks Use Windows Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-windows-hpc.md
Title: Use Windows HostProcess containers description: Learn how to use HostProcess & Privileged containers for Windows workloads on AKS. Last updated 4/6/2022
aks Virtual Nodes Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes-cli.md
Title: Create virtual nodes using Azure CLI description: Learn how to use the Azure CLI to create an Azure Kubernetes Services (AKS) cluster that uses virtual nodes to run pods. Last updated 06/25/2022
aks Virtual Nodes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes-portal.md
Title: Create virtual nodes using the portal in Azure Kubernetes Services (AKS) description: Learn how to use the Azure portal to create an Azure Kubernetes Services (AKS) cluster that uses virtual nodes to run pods. Last updated 03/15/2021
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
Title: Web Application Routing add-on on Azure Kubernetes Service (AKS) (Preview) description: Use the Web Application Routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS).
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
If your application is using managed identity and still relies on IMDS to get an
To update or deploy the workload, add these pod annotations only if you want to use the migration sidecar. You inject the following [annotation][pod-annotations] values to use the sidecar in your pod specification: * `azure.workload.identity/inject-proxy-sidecar` - value is `true` or `false`
-* `azure.workload.identity/proxy-sidecar-port` - value is the desired port for the proxy sidecar. The default value is `8080`.
+* `azure.workload.identity/proxy-sidecar-port` - value is the desired port for the proxy sidecar. The default value is `8000`.
When a pod with the above annotations is created, the Azure Workload Identity mutating webhook automatically injects the init-container and proxy sidecar to the pod spec.
spec:
runAsUser: 0 env: - name: PROXY_PORT
- value: "8080"
+ value: "8000"
containers: - name: nginx image: nginx:alpine
spec:
- name: proxy image: mcr.microsoft.com/oss/azure/workload-identity/proxy:v0.13.0 ports:
- - containerPort: 8080
+ - containerPort: 8000
``` This configuration applies to any scenario where a pod is being created. After updating or deploying your application, you can verify the pod is in a running state using the [kubectl describe pod][kubectl-describe] command. Replace the value `podName` with the name of your deployed pod.
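To see why the sidecar requires no application changes, note that the pod keeps requesting tokens from the Azure Instance Metadata Service (IMDS) endpoint exactly as before; the injected init container redirects that traffic to the proxy sidecar, which answers it using the federated workload identity. The following is a minimal sketch of such an unchanged token request, in Python with the `requests` package; the resource URI is a hypothetical placeholder:

```python
# Minimal sketch: application code that still acquires tokens from IMDS.
# With the proxy sidecar injected, this request is intercepted and served
# by the proxy, so the application itself needs no change.
import requests

IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def get_access_token(resource: str = "https://management.azure.com/") -> str:
    # The resource URI above is a placeholder; use the API you actually call.
    response = requests.get(
        IMDS_TOKEN_ENDPOINT,
        params={"api-version": "2018-02-01", "resource": resource},
        headers={"Metadata": "true"},  # IMDS requires this header
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]
```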
This article showed you how to set up your pod to authenticate using a workload
<!-- EXTERNAL LINKS --> [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
-[kubelet-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
+[kubelet-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Title: Use an Azure AD workload identity (preview) on Azure Kubernetes Service
description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Last updated 01/06/2023
app-service Manage Custom Dns Buy Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md
For pricing information on App Service domains, visit the [App Service Pricing p
> >
-1. Select **Next: Contact information** and supply your information as required by [ICANN](https://go.microsoft.com/fwlink/?linkid=2116641) for the domain registration.
+1. Select **Next: Contact information** and supply your information as required by [ICANN](https://lookup.icann.org/) for the domain registration.
It's important that you fill out all required fields with as much accuracy as possible. Incorrect data for contact information can result in failure to buy the domain.
app-service Migrate Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/migrate-wordpress.md
The prerequisite is that the WordPress on Linux Azure App Service must have been
> [!NOTE]
-> Azure Database for MySQL - Single Server is on the road to retirement by 16 September 2024. If your existing MySQL database is hosted on Azure Database for MySQL - Single Server, consider migrating to Azure Database for MySQL - Flexible Server using the following steps, or using [Azure Database Migration Service (DMS)](/mysql/single-server/whats-happening-to-mysql-single-server#migrate-from-single-server-to-flexible-server).
+> Azure Database for MySQL - Single Server is on the road to retirement by 16 September 2024. If your existing MySQL database is hosted on Azure Database for MySQL - Single Server, consider migrating to Azure Database for MySQL - Flexible Server using the following steps, or using [Azure Database Migration Service (DMS)](/azure/mysql/single-server/whats-happening-to-mysql-single-server#migrate-from-single-server-to-flexible-server).
> 6. If you migrate the database, import the SQL file downloaded from the source database into the database of your newly created WordPress site. You can do it via the PhpMyAdmin dashboard available at **\<sitename\>.azurewebsites.net/phpmyadmin**. If you're unable to upload one single large SQL file, separate the file into parts and try uploading again. Steps to import the database through phpMyAdmin are described [here](https://docs.phpmyadmin.net/en/latest/import_export.html#import).
app-service Tutorial Secure Ntier App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-secure-ntier-app.md
# Tutorial: Create a secure n-tier app in Azure App Service
-Many applications have more than a single component. For example, you may have a front end that is publicly accessible and connects to a back-end database, storage account, key vault, another VM, or a combination of these resources. This architecture makes up an N-tier application. It's important that applications like this are architected to protect back-end resources to the greatest extent possible.
+Many applications have more than a single component. For example, you may have a front end that is publicly accessible and connects to a back-end API or web app, which in turn connects to a database, storage account, key vault, another VM, or a combination of these resources. This architecture makes up an N-tier application. It's important that applications like this are architected to protect back-end resources to the greatest extent possible.
In this tutorial, you learn how to deploy a secure N-tier application, with a front-end web app that connects to another network-isolated web app. All traffic is isolated within your Azure Virtual Network using [Virtual Network integration](overview-vnet-integration.md) and [private endpoints](networking/private-endpoint.md). For more comprehensive guidance that includes other scenarios, see:
To learn how to deploy ARM/Bicep templates, see [How to deploy resources with Bi
> [!div class="nextstepaction"] > [App Service networking features](networking-features.md) > [!div class="nextstepaction"]
-> [Reliable web app pattern planning (.NET)](/azure/architecture/reference-architectures/reliable-web-app/dotnet/pattern-overview.md)
+> [Reliable web app pattern planning (.NET)](/azure/architecture/reference-architectures/reliable-web-app/dotnet/pattern-overview.md)
application-gateway Configuration Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md
Previously updated : 09/09/2020 Last updated : 02/26/2023
Only one public IP address and one private IP address is supported. You choose t
A frontend IP address is associated to a *listener*, which checks for incoming requests on the frontend IP.
+>[!NOTE]
+> You can create private and public listeners with the same port number (Preview feature). However, be aware of any Network Security Group (NSG) associated with the application gateway subnet. Depending on your NSG's configuration, you may need an inbound rule with **Destination IP addresses** as your application gateway's public and private frontend IPs.
+>
+> **Inbound Rule**:
+> - Source: (as per your requirement)
+> - Destination IP addresses: Public and Private frontend IPs of your application gateway.
+> - Destination Port: (as per listener configuration)
+> - Protocol: TCP
+>
+> **Outbound Rule**: (no specific requirement)
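If you manage these rules from code rather than the portal, the inbound rule above can be scripted. Here is a minimal sketch using Python with the `azure-mgmt-network` and `azure-identity` packages; the subscription, resource group, NSG name, frontend IP addresses, and port are hypothetical placeholders rather than values from this article:

```python
# Minimal sketch: allow inbound traffic to the application gateway's public
# and private frontend IPs, mirroring the inbound rule described above.
# All names and addresses below are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = client.security_rules.begin_create_or_update(
    resource_group_name="my-rg",
    network_security_group_name="appgw-subnet-nsg",
    security_rule_name="allow-appgw-frontend-ips",
    security_rule_parameters={
        "protocol": "Tcp",
        "access": "Allow",
        "direction": "Inbound",
        "priority": 200,
        "source_address_prefix": "*",  # source: as per your requirement
        "source_port_range": "*",
        # Destination: public and private frontend IPs of your gateway.
        "destination_address_prefixes": ["<public-frontend-ip>", "<private-frontend-ip>"],
        "destination_port_range": "443",  # as per your listener configuration
    },
).result()
print(rule.name, rule.provisioning_state)
```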
+ ## Next steps - [Learn about listener configuration](configuration-listeners.md)
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
Network security groups (NSGs) are supported on Application Gateway. But there a
- Traffic from the **AzureLoadBalancer** tag with the destination subnet as **Any** must be allowed.
+- To use public and private listeners with a common port number (Preview feature), you must have an inbound rule with the **destination IP address** as your gateway's **frontend IPs (public and private)**. When using this feature, your application gateway changes the "Destination" of the inbound flow to the frontend IPs of your gateway. [Learn more](./configuration-listeners.md#frontend-port).
+ ### Allow access to a few source IPs For this scenario, use NSGs on the Application Gateway subnet. Put the following restrictions on the subnet in this order of priority:
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
Previously updated : 11/23/2022 Last updated : 02/27/2023
Choose the frontend IP address that you plan to associate with this listener. Th
## Frontend port
-Choose the frontend port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners, however the same port cannot be used for both at the same time.
+Associate a frontend port. You can select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. The same port can be used for public and private listeners (Preview feature).
+
+>[!NOTE]
+> When using private and public listeners with the same port number, your application gateway changes the "destination" of the inbound flow to the frontend IPs of your gateway. Hence, depending on your Network Security Group's configuration, you may need an inbound rule with **Destination IP addresses** as your application gateway's public and private frontend IPs.
+>
+> **Inbound Rule**:
+> - Source: (as per your requirement)
+> - Destination IP addresses: Public and Private frontend IPs of your application gateway.
+> - Destination Port: (as per listener configuration)
+> - Protocol: TCP
+>
+> **Outbound Rule**: (no specific requirement)
+
**Limitation**: The portal currently supports the creation of private and public listeners only for the public clouds.
## Protocol
automation Automation Runbook Graphical Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-graphical-error-handling.md
Title: Handle errors in Azure Automation graphical runbooks
description: This article tells how to implement error handling logic in graphical runbooks. Previously updated : 03/16/2018 Last updated : 02/27/2022
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
Title: Azure Automation Change Tracking and Inventory overview
description: This article describes the Change Tracking and Inventory feature, which helps you identify software and Microsoft service changes in your environment. Previously updated : 01/18/2023 Last updated : 02/27/2023
automation Region Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md
Title: Supported regions for linked Log Analytics workspace description: This article describes the supported region mappings between an Automation account and a Log Analytics workspace as it relates to certain features of Azure Automation. Previously updated : 01/04/2023 Last updated : 02/27/2023
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow]) -> func.HttpRe
try: req_body = req.get_json()
- rows = list(map(lambda r: json.loads(r.to_json()), req_body))
+ rows = func.SqlRowList(map(lambda r: func.SqlRow.from_dict(r), req_body))
except ValueError: pass
def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow], requestLog: fu
try: req_body = req.get_json()
- rows = list(map(lambda r: json.loads(r.to_json()), req_body))
+ rows = func.SqlRowList(map(lambda r: func.SqlRow.from_dict(r), req_body))
except ValueError: pass
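Pieced together, the updated line fits into a complete output-binding function along these lines. This is a minimal sketch that assumes a `function.json` with an HTTP trigger and a SQL output binding named `todoItems`, as in the article's samples; the error response is illustrative rather than taken from the article:

```python
# Minimal sketch of an HTTP-triggered function writing rows through the
# Azure SQL output binding.
import json

import azure.functions as func


def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow]) -> func.HttpResponse:
    try:
        req_body = req.get_json()
        # Convert each JSON object in the request body into a SqlRow.
        rows = func.SqlRowList(map(lambda r: func.SqlRow.from_dict(r), req_body))
    except ValueError:
        return func.HttpResponse("Body must be a JSON array of objects.", status_code=400)

    todoItems.set(rows)  # hand the rows to the SQL output binding
    return func.HttpResponse(
        json.dumps(req_body), status_code=201, mimetype="application/json"
    )
```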
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md
For more information about these settings, see [Sampling in Application Insights
### applicationInsights.snapshotConfiguration
-For more information on snapshots, see [Debug snapshots on exceptions in .NET apps](../azure-monitor/app/snapshot-debugger.md) and [Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](../azure-monitor/app/snapshot-debugger-troubleshoot.md).
+For more information on snapshots, see [Debug snapshots on exceptions in .NET apps](../azure-monitor/app/snapshot-debugger.md) and [Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot).
|Property | Default | Description |
|---|---|---|
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Title: Overview for Microsoft Azure Maps description: Learn about services and capabilities in Microsoft Azure Maps and how to use them in your applications. Last updated 10/21/2022
azure-maps Add Bubble Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-bubble-layer-map-ios.md
Title: Add a bubble layer to iOS maps description: Learn how to render points on maps as circles with fixed sizes. See how to use the Azure Maps iOS SDK to add and customize bubble layers for this purpose. Last updated 11/23/2021
azure-maps Add Controls Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-controls-map-ios.md
Title: Add controls to an iOS map description: How to add zoom control, pitch control, rotate control and a style picker to a map in Microsoft Azure Maps iOS SDK. Last updated 11/19/2021
azure-maps Add Heat Map Layer Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-heat-map-layer-ios.md
Title: Add a heat map layer to iOS maps description: Learn how to create a heat map. See how to use the Azure Maps iOS SDK to add a heat map layer to a map. Find out how to customize heat map layers. Last updated 11/23/2021
azure-maps Add Image Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-image-layer-map-ios.md
Title: Add an Image layer to an iOS map description: Learn how to add images to a map. See how to use the Azure Maps iOS SDK to customize image layers and overlay images on fixed sets of coordinates. Last updated 11/23/2021
azure-maps Add Line Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-line-layer-map-ios.md
Title: Add a line layer to iOS maps description: Learn how to add lines to maps. See examples that use the Azure Maps iOS SDK to add line layers to maps and to customize lines with symbols and color gradients. Last updated 11/23/2021
azure-maps Add Polygon Extrusion Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-polygon-extrusion-layer-map-ios.md
Title: Add a polygon extrusion layer to an iOS map description: How to add a polygon extrusion layer to the Microsoft Azure Maps iOS SDK. Last updated 11/23/2021
azure-maps Add Polygon Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-polygon-layer-map-ios.md
Title: Add a polygon layer to iOS maps description: Learn how to add polygons or circles to maps. See how to use the Azure Maps iOS SDK to customize geometric shapes and make them easy to update and maintain. Last updated 11/23/2021
azure-maps Add Symbol Layer Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-symbol-layer-ios.md
Title: Add a symbol layer to iOS maps description: Learn how to add a marker to a map. See an example that uses the Azure Maps iOS SDK to add a symbol layer that contains point-based data from a data source. Last updated 11/19/2021
azure-maps Add Tile Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-tile-layer-map-ios.md
Title: Add a tile layer to iOS maps description: Learn how to add a tile layer to a map. See an example that uses the Azure Maps iOS SDK to add a weather radar overlay to a map. Last updated 11/23/2021
azure-maps Android Map Add Line Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-map-add-line-layer.md
Title: Add a line layer to Android maps | Microsoft Azure Maps description: Learn how to add lines to maps. See examples that use the Azure Maps Android SDK to add line layers to maps and to customize lines with symbols and color gradients. Last updated 2/26/2021
azure-maps Android Map Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-map-events.md
Title: Handle map events in Android maps | Microsoft Azure Maps description: Learn which events are fired when users interact with maps. View a list of all supported map events. See how to use the Azure Maps Android SDK to handle events. Last updated 2/26/2021
azure-maps Authentication Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/authentication-best-practices.md
Title: Authentication best practices in Azure Maps description: Learn tips & tricks to optimize the use of Authentication in your Azure Maps applications. Last updated 05/11/2022
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md
Title: Authentication with Microsoft Azure Maps description: "Learn about two ways of authenticating requests in Azure Maps: shared key authentication and Azure Active Directory (Azure AD) authentication." Last updated 05/25/2021
azure-maps Azure Maps Event Grid Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-event-grid-integration.md
Title: React to Azure Maps events by using Event Grid description: Find out how to react to Azure Maps events involving geofences. See how to listen to map events and how to use Event Grid to reroute events to event handlers. Last updated 07/16/2020
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
Title: Azure Maps QPS rate limits description: Azure Maps limitation on the number of Queries Per Second. Last updated 10/15/2021
azure-maps Choose Map Style https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/choose-map-style.md
Title: Change the style of the Azure Maps Web Map Control description: "Learn how to change a map's style and options. See how to add a style picker control to a map in Azure Maps so that users can switch between different styles." Last updated 04/26/2020
azure-maps Choose Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/choose-pricing-tier.md
Title: Choose the right pricing tier for Microsoft Azure Maps description: Learn about Azure Maps pricing tiers. See which features are offered at which tiers, and view key considerations for choosing a pricing tier. Last updated 11/11/2021
azure-maps Clustering Point Data Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-android-sdk.md
Title: Clustering point data in the Android SDK | Microsoft Azure Maps description: Learn how to cluster point data on maps. See how to use the Azure Maps Android SDK to cluster data, react to cluster mouse events, and display cluster aggregates. Last updated 03/23/2021
azure-maps Clustering Point Data Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-ios-sdk.md
Title: Clustering point data in the iOS SDK description: Learn how to cluster point data on maps. See how to use the Azure Maps iOS SDK to cluster data, react to cluster mouse events, and display cluster aggregates. Last updated 11/18/2021
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md
Title: Clustering point data in the Web SDK | Microsoft Azure Maps description: Learn how to cluster point data on maps. See how to use the Azure Maps Web SDK to cluster data, react to cluster mouse events, and display cluster aggregates. Last updated 07/29/2019
azure-maps Create Data Source Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-android-sdk.md
Title: Create a data source for Android maps | Microsoft Azure Maps description: "Find out how to create a data source for a map. Learn about the data sources that the Azure Maps Android SDK uses: GeoJSON sources and vector tiles." Last updated 2/26/2021
azure-maps Create Data Source Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-ios-sdk.md
Title: Create a data source for iOS maps | Microsoft Azure Maps description: "Find out how to create a data source for a map. Learn about the data sources that the Azure Maps iOS SDK uses: GeoJSON sources and vector tiles." Last updated 10/22/2021
azure-maps Create Data Source Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md
Title: Create a data source for a map in Microsoft Azure Maps description: "Find out how to create a data source for a map. Learn about the data sources that the Azure Maps Web SDK uses: GeoJSON sources and vector tiles." Last updated 12/07/2020
azure-maps Creator Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-geographic-scope.md
Title: Azure Maps Creator service geographic scope description: Learn about Azure Maps Creator service's geographic mappings in Azure Maps. Last updated 05/18/2021
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Title: Work with indoor maps in Azure Maps Creator description: This article introduces concepts that apply to Azure Maps Creator services. Last updated 04/01/2022
azure-maps Creator Long Running Operation V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation-v2.md
Title: Azure Maps long-running operation API V2 description: Learn about long-running asynchronous V2 background processing in Azure Maps. Last updated 05/18/2021
azure-maps Creator Long Running Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation.md
Title: Azure Maps Long-Running Operation API description: Learn about long-running asynchronous background processing in Azure Maps. Last updated 12/07/2020
azure-maps Data Driven Style Expressions Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-android-sdk.md
Title: Data-driven style Expressions in Android maps | Microsoft Azure Maps description: Learn about data-driven style expressions. See how to use these expressions in the Azure Maps Android SDK to adjust styles in maps. Last updated 2/26/2021
azure-maps Data Driven Style Expressions Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-ios-sdk.md
Title: Data-driven style expressions in iOS maps description: Learn about data-driven style expressions. See how to use these expressions in the Azure Maps iOS SDK to adjust styles in maps. Last updated 11/18/2021
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md
Title: Data-driven style Expressions in the Azure Maps Web SDK | Microsoft Azure Maps description: Learn about data-driven style expressions. See how to use these expressions in the Azure Maps Web SDK to adjust styles in maps. Last updated 4/4/2019
azure-maps Display Feature Information Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-android.md
Title: Display feature information in Android maps | Microsoft Azure Maps description: Learn how to display information when users interact with map features. Use the Azure Maps Android SDK to display toast messages and other types of messages. Last updated 2/26/2021
azure-maps Display Feature Information Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-ios-sdk.md
Title: Display feature information in iOS maps | Microsoft Azure Maps description: Learn how to display information when users interact with map features. Use the Azure Maps iOS SDK to display toast messages and other types of messages. Last updated 11/23/2021
azure-maps Drawing Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-conversion-error-codes.md
Title: Azure Maps Drawing Conversion errors and warnings description: Learn about the Conversion errors and warnings you may meet while you're using the Azure Maps Conversion service. Read the recommendations on how to resolve the errors and the warnings, with some examples. Last updated 05/21/2021
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
Title: Drawing package guide for Microsoft Azure Maps Creator description: Learn how to prepare a Drawing package for the Azure Maps Conversion service. Last updated 01/31/2023
azure-maps Drawing Tools Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md
Title: Drawing tool events | Microsoft Azure Maps description: In this article, you'll learn how to add a drawing toolbar to a map using the Microsoft Azure Maps Web SDK. Last updated 12/05/2019
azure-maps Drawing Tools Interactions Keyboard Shortcuts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-interactions-keyboard-shortcuts.md
Title: Drawing tools interaction types and keyboard shortcuts on map | Microsoft Azure Maps description: How to draw and edit shapes using a mouse, touch screen, or keyboard in the Microsoft Azure Maps Web SDK. Last updated 12/05/2019
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md
Title: Geocoding coverage in Microsoft Azure Maps Search service description: See which regions Azure Maps Search covers. Geocoding categories include address points, house numbers, street level, city level, and points of interest. Last updated 11/30/2021
azure-maps Geofence Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geofence-geojson.md
Title: GeoJSON data format for geofence | Microsoft Azure Maps description: Learn about Azure Maps geofence data. See how to use the GET Geofence and POST Geofence APIs when retrieving the position of coordinates relative to a geofence. Last updated 02/14/2019
azure-maps Geographic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-coverage.md
Title: Geographic coverage information in Microsoft Azure Maps description: Details of where geographic data is available within Microsoft Azure Maps. Last updated 6/23/2021
azure-maps Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-scope.md
Title: Azure Maps service geographic scope description: Learn about Azure Maps service's geographic mappings. Last updated 04/18/2022
azure-maps Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/glossary.md
Title: Azure Maps Glossary | Microsoft Docs description: A glossary of commonly used terms associated with Azure Maps, Location-Based Services, and GIS. Last updated 09/18/2018
azure-maps How To Add Shapes To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-shapes-to-android-map.md
Title: Add a polygon layer to Android maps | Microsoft Azure Maps description: Learn how to add polygons or circles to maps. See how to use the Azure Maps Android SDK to customize geometric shapes and make them easy to update and maintain. Last updated 2/26/2021
azure-maps How To Add Symbol To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-symbol-to-android-map.md
Title: Add a symbol layer to Android maps | Microsoft Azure Maps description: Learn how to add a marker to a map. See an example that uses the Azure Maps Android SDK to add a symbol layer that contains point-based data from a data source. Last updated 2/26/2021
azure-maps How To Add Tile Layer Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-tile-layer-android-map.md
Title: Add a tile layer to Android maps | Microsoft Azure Maps description: Learn how to add a tile layer to a map. See an example that uses the Azure Maps Android SDK to add a weather radar overlay to a map. Last updated 3/25/2021
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
Title: Create custom styles for indoor maps description: Learn how to use Maputnik with Azure Maps Creator to create custom styles for your indoor maps. Last updated 9/23/2022
azure-maps How To Create Data Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md
Title: Create Data Registry (preview) description: Learn how to create a Data Registry. Last updated 2/14/2023
azure-maps How To Creator Wayfinding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md
Title: Indoor Maps wayfinding service description: How to use the wayfinding service to plot and display routes for indoor maps in Microsoft Azure Maps Creator. Last updated 10/25/2022
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
Title: How to create a dataset using a GeoJson package description: Learn how to create a dataset using a GeoJson package. Last updated 11/01/2021
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
Title: How to create Azure Maps applications using the C# REST SDK description: How to develop applications that incorporate Azure Maps using the C# SDK Developers Guide. Last updated 11/11/2021
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
Title: How to create Azure Maps applications using the Java REST SDK (preview) description: How to develop applications that incorporate Azure Maps using the Java REST SDK Developers Guide. Last updated 01/25/2023
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
Title: How to create Azure Maps applications using the JavaScript REST SDK (preview) description: How to develop applications that incorporate Azure Maps using the JavaScript SDK Developers Guide. Last updated 11/15/2021
azure-maps How To Dev Guide Py Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-py-sdk.md
Title: How to create Azure Maps applications using the Python REST SDK (preview) description: How to develop applications that incorporate Azure Maps using the Python SDK Developers Guide. Last updated 01/15/2021
azure-maps How To Manage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-account-keys.md
Title: Manage your Azure Maps account in the Azure portal | Microsoft Azure Maps description: Learn how to use the Azure portal to manage an Azure Maps account. See how to create a new account and how to delete an existing account. Last updated 04/26/2021
azure-maps How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-authentication.md
Title: Manage authentication in Microsoft Azure Maps description: Become familiar with Azure Maps authentication. See which approach works best in which scenario. Learn how to use the portal to view authentication settings. Last updated 12/3/2021
azure-maps How To Manage Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-creator.md
Title: Manage Microsoft Azure Maps Creator description: In this article, you'll learn how to manage Microsoft Azure Maps Creator. Last updated 01/20/2022
azure-maps How To Manage Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-pricing-tier.md
Title: Manage your Azure Maps account's pricing tier | Microsoft Azure Maps description: You can use the Azure portal to manage your Microsoft Azure Maps account and its pricing tier. Last updated 05/12/2020
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md
Title: Render custom data on a raster map in Microsoft Azure Maps description: Learn how to add pushpins, labels, and geometric shapes to a raster map. See how to use the static image service in Azure Maps for this purpose. Last updated 10/28/2021
azure-maps How To Request Elevation Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-elevation-data.md
Title: Request elevation data using the Azure Maps Elevation service description: Learn how to request elevation data using the Azure Maps Elevation service. Last updated 10/28/2021
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md
Title: Request real-time and forecasted weather data using Azure Maps Weather services description: Learn how to request real-time (current) and forecasted (minute, hourly, daily) weather data using Microsoft Azure Maps Weather services. Last updated 10/28/2021
azure-maps How To Search For Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md
Title: Search for a location using Azure Maps Search services description: Learn about the Azure Maps Search service. See how to use this set of APIs for geocoding, reverse geocoding, fuzzy searches, and reverse cross street searches. Last updated 10/28/2021
azure-maps How To Secure Daemon App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-daemon-app.md
Title: How to secure a daemon application in Microsoft Azure Maps description: This article describes how to host daemon applications, such as background processes, timers, and jobs in a trusted and secure environment in Microsoft Azure Maps. Last updated 10/28/2021
azure-maps How To Secure Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-device-code.md
Title: How to secure input constrained device with Azure AD and Azure Maps REST APIs description: How to configure a browser-less application which supports sign-in to Azure AD and calls Azure Maps REST APIs. Last updated 06/12/2020
azure-maps How To Secure Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-app.md
Title: How to secure a single-page web application with non-interactive sign-in in Microsoft Azure Maps description: How to configure a single-page web application with non-interactive Azure role-based access control (Azure RBAC) and Azure Maps Web SDK. Last updated 10/28/2021
azure-maps How To Secure Spa Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md
Title: How to secure a single page application with user sign-in description: How to configure a single page application which supports Azure AD single-sign-on with Azure Maps Web SDK. Last updated 06/12/2020
azure-maps How To Secure Webapp Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-webapp-users.md
Title: How to secure a web application with interactive single-sign-in description: How to configure a web application which supports Azure AD single-sign-on with Azure Maps Web SDK using OpenID Connect protocol. Last updated 06/12/2020
azure-maps How To Show Attribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-attribution.md
Title: Show the correct map copyright attribution information description: The map copyright attribution information must be displayed in any applications that use the Render V2 API, including web and mobile applications. In this article, you'll learn how to display the correct attribution every time you display or update a tile. Last updated 3/16/2022
azure-maps How To Show Traffic Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-traffic-android.md
Title: Show traffic data on Android maps | Microsoft Azure Maps description: In this article, you'll learn how to display traffic data on a map using the Microsoft Azure Maps Android SDK. Last updated 2/26/2021
azure-maps How To Use Android Map Control Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-android-map-control-library.md
Title: Get started with Android map control | Microsoft Azure Maps description: Become familiar with the Azure Maps Android SDK. See how to create a project in Android Studio, install the SDK, and create an interactive map. Last updated 2/26/2021
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md
Title: Best practices for Azure Maps Route Service in Microsoft Azure Maps description: Learn how to route vehicles by using Route Service from Microsoft Azure Maps. Last updated 10/28/2021
azure-maps How To Use Best Practices For Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-search.md
Title: Best practices for Azure Maps Search Service | Microsoft Azure Maps description: Learn how to apply the best practices when using the Search Service from Microsoft Azure Maps. Last updated 10/28/2021
azure-maps How To Use Feedback Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-feedback-tool.md
Title: Provide data feedback to Azure Maps | Microsoft Azure Maps description: Provide data feedback using Microsoft Azure Maps feedback tool. Last updated 12/07/2020
azure-maps How To Use Image Templates Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-image-templates-web-sdk.md
Title: Image templates in the Azure Maps Web SDK | Microsoft Azure Maps description: Learn how to add image icons and pattern-filled polygons to maps by using the Azure Maps Web SDK. View available image and fill pattern templates. Last updated 8/6/2019
azure-maps How To Use Indoor Module Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module-ios.md
Title: Use the Azure Maps indoor maps module to develop iOS applications with Microsoft Creator services description: Learn how to use the Microsoft Azure Maps indoor maps module for the iOS SDK to render maps by embedding the module's JavaScript libraries. Last updated 12/10/2021
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Title: Use the Azure Maps Indoor Maps module with Microsoft Creator services with custom styles (preview) description: Learn how to use the Microsoft Azure Maps Indoor Maps module to render maps by embedding the module's JavaScript libraries. Last updated 09/23/2022
azure-maps How To Use Ios Map Control Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-ios-map-control-library.md
Title: Get started with iOS map control | Microsoft Azure Maps description: Become familiar with the Azure Maps iOS SDK. See how to install the SDK and create an interactive map. Last updated 11/23/2021
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
Title: How to use the Azure Maps web map control description: Learn how to add and localize maps to web and mobile applications by using the Map Control client-side JavaScript library in Azure Maps. Last updated 11/29/2021
azure-maps How To Use Services Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-services-module.md
Title: Use the Azure Maps Services module | Microsoft Azure Maps description: Learn about the Azure Maps services module. See how to load and use this helper library to access Azure Maps REST services in web or Node.js applications. Last updated 03/25/2019
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
Title: How to use the Azure Maps spatial IO module | Microsoft Azure Maps description: Learn how to use the Spatial IO module provided by the Azure Maps Web SDK. This module provides robust features to make it easy for developers to integrate spatial data with the Azure Maps Web SDK. Last updated 02/28/2020
azure-maps How To View Api Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-view-api-usage.md
Title: View Azure Maps API usage metrics | Microsoft Azure Maps description: Learn how to view Azure Maps API usage metrics, such as total requests, total errors, and availability. See how to filter data and split results. Last updated 08/06/2018
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
Title: Implement dynamic styling for Azure Maps Creator indoor maps description: Learn how to implement dynamic styling for Creator indoor maps. Last updated 10/28/2021
azure-maps Interact Map Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/interact-map-ios-sdk.md
Title: Handle map events in iOS maps description: Learn which events are fired when users interact with maps. View a list of all supported map events. See how to use the Azure Maps iOS SDK to handle events. Last updated 11/18/2021
azure-maps Map Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md
Title: Create an accessible map application with Azure Maps | Microsoft Azure Maps description: Learn about accessibility considerations in Azure Maps. See what features are available for making map applications accessible, and view accessibility tips. Last updated 12/10/2019
azure-maps Map Add Bubble Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer-android.md
Title: Add a Bubble layer to Android maps | Microsoft Azure Maps description: Learn how to render points on maps as circles with fixed sizes. See how to use the Azure Maps Android SDK to add and customize bubble layers for this purpose. Last updated 2/26/2021
azure-maps Map Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer.md
Title: Add a Bubble layer to a map | Microsoft Azure Maps description: Learn how to render points on maps as circles with fixed sizes. See how to use the Azure Maps Web SDK to add and customize bubble layers for this purpose. Last updated 07/29/2019
azure-maps Map Add Controls Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls-android.md
Title: Add controls to an Android map | Microsoft Azure Maps description: How to add zoom control, pitch control, rotate control and a style picker to a map in Microsoft Azure Maps Android SDK. Last updated 02/26/2021
azure-maps Map Add Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls.md
Title: Add controls to a map | Microsoft Azure Maps description: How to add zoom control, pitch control, rotate control and a style picker to a map in Microsoft Azure Maps. Last updated 07/29/2019
azure-maps Map Add Custom Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-custom-html.md
Title: Add an HTML Marker to map | Microsoft Azure Maps description: Learn how to add HTML markers to maps. See how to use the Azure Maps Web SDK to customize markers and add popups and mouse events to a marker. Last updated 07/29/2019
azure-maps Map Add Drawing Toolbar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-drawing-toolbar.md
Title: Add drawing tools toolbar to map | Microsoft Azure Maps description: How to add a drawing toolbar to a map using Azure Maps Web SDK. Last updated 09/04/2019
azure-maps Map Add Heat Map Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer-android.md
Title: Add a heat map layer to Android maps | Microsoft Azure Maps description: Learn how to create a heat map. See how to use the Azure Maps Android SDK to add a heat map layer to a map. Find out how to customize heat map layers. Last updated 02/26/2021
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md
Title: Add a heat map layer to a map | Microsoft Azure Maps description: Learn how to create a heat map. See how to use the Azure Maps Web SDK to add a heat map layer to a map. Find out how to customize heat map layers. Last updated 10/06/2021
azure-maps Map Add Image Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer-android.md
Title: Add an Image layer to an Android map | Microsoft Azure Maps description: Learn how to add images to a map. See how to use the Azure Maps Android SDK to customize image layers and overlay images on fixed sets of coordinates. Last updated 02/26/2021
azure-maps Map Add Image Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer.md
Title: Add an Image layer to a map | Microsoft Azure Maps description: Learn how to add images to a map. See how to use the Azure Maps Web SDK to customize image layers and overlay images on fixed sets of coordinates. Last updated 07/29/2019
azure-maps Map Add Line Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-line-layer.md
Title: Add a line layer to a map | Microsoft Azure Maps description: Learn how to add lines to maps. See examples that use the Azure Maps Web SDK to add line layers to maps and to customize lines with symbols and color gradients. Last updated 08/08/2019
azure-maps Map Add Pin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-pin.md
Title: Add a Symbol layer to a map | Microsoft Azure Maps description: Learn how to add customized symbols, such as text or icons, to maps. See how to use data sources and symbol layers in the Azure Maps Web SDK for this purpose. Last updated 07/29/2019
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
Title: Add a popup to a point on a map | Microsoft Azure Maps description: Learn about popups, popup templates, and popup events in Azure Maps. See how to add a popup to a point on a map and how to reuse and customize popups. Last updated 02/27/2020
azure-maps Map Add Shape https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-shape.md
Title: Add a polygon layer to a map | Microsoft Azure Maps description: Learn how to add polygons or circles to maps. See how to use the Azure Maps Web SDK to customize geometric shapes and make them easy to update and maintain. Last updated 07/29/2019
azure-maps Map Add Snap Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-snap-grid.md
Title: Add snap grid to the map | Microsoft Azure Maps description: How to add a snap grid to a map using Azure Maps Web SDK. Last updated 07/20/2021
azure-maps Map Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-tile-layer.md
Title: Add a tile layer to a map | Microsoft Azure Maps description: Learn how to superimpose images on maps. See an example that uses the Azure Maps Web SDK to add a tile layer containing a weather radar overlay to a map. Last updated 3/25/2021
azure-maps Map Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md
Title: Create a map with Azure Maps | Microsoft Azure Maps description: Find out how to add maps to web pages by using the Azure Maps Web SDK. Learn about options for animation, style, the camera, services, and user interactions. Last updated 07/26/2019
azure-maps Map Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-events.md
Title: Handle map events | Microsoft Azure Maps description: Learn which events are fired when users interact with maps. View a list of all supported map events. See how to use the Azure Maps Web SDK to handle events. Last updated 09/10/2019
azure-maps Map Extruded Polygon Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon-android.md
Title: Add a polygon extrusion layer to an Android map | Microsoft Azure Maps description: How to add a polygon extrusion layer to the Microsoft Azure Maps Android SDK. Last updated 02/26/2021
azure-maps Map Extruded Polygon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon.md
Title: Add a polygon extrusion layer to a map | Microsoft Azure Maps description: How to add a polygon extrusion layer to the Microsoft Azure Maps Web SDK. Last updated 10/08/2019
azure-maps Map Get Information From Coordinate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-information-from-coordinate.md
Title: Show information about a coordinate on a map | Microsoft Azure Maps description: Learn how to display information about an address on the map when a user selects a coordinate. Last updated 07/29/2019
azure-maps Map Get Shape Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-shape-data.md
Title: Get data from shapes on a map | Microsoft Azure Maps description: In this article, you'll learn how to get shape data drawn on a map using the Microsoft Azure Maps Web SDK. Last updated 09/04/2019
azure-maps Map Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-route.md
Title: Show route directions on a map | Microsoft Azure Maps description: In this article, you'll learn how to display directions between two locations on a map using the Microsoft Azure Maps Web SDK. Last updated 07/29/2019
azure-maps Map Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-search-location.md
Title: Show search results on a map | Microsoft Azure Maps description: In this article, you'll learn how to perform a search request using Microsoft Azure Maps Web SDK and display the results on the map. Last updated 07/29/2019
azure-maps Map Show Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md
Title: Show traffic on a map | Microsoft Azure Maps description: Find out how to add traffic data to maps. Learn about flow data, and see how to use the Azure Maps Web SDK to add incident data and flow data to maps. Last updated 07/29/2019
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Title: 'Tutorial: Migrate a web app from Bing Maps | Microsoft Azure Maps' description: Tutorial on how to migrate a web app from Bing Maps to Microsoft Azure Maps. Last updated 10/28/2021
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Title: 'Tutorial: Migrate web services from Bing Maps to Microsoft Azure Maps' description: Tutorial on how to migrate web services from Bing Maps to Microsoft Azure Maps. Last updated 10/28/2021
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
Title: 'Tutorial: Migrate from Bing Maps to Azure Maps' description: A tutorial on how to migrate from Bing Maps to Microsoft Azure Maps. Guidance walks you through how to switch to Azure Maps APIs and SDKs. Last updated 12/1/2021
azure-maps Migrate From Google Maps Android App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-android-app.md
Title: Tutorial - Migrate an Android app description: 'Tutorial on how to migrate an Android app from Google Maps to Microsoft Azure Maps'. Last updated 12/1/2021
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Title: 'Tutorial - Migrate a web app from Google Maps to Microsoft Azure Maps' description: Tutorial on how to migrate a web app from Google Maps to Microsoft Azure Maps. Last updated 12/07/2020
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Title: 'Tutorial - Migrate web services from Google Maps | Microsoft Azure Maps' description: Tutorial on how to migrate web services from Google Maps to Microsoft Azure Maps. Last updated 06/23/2021
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
Title: 'Tutorial - Migrate from Google Maps to Azure Maps | Microsoft Azure Maps' description: Tutorial on how to migrate from Google Maps to Microsoft Azure Maps. Guidance walks you through how to switch to Azure Maps APIs and SDKs. Last updated 09/23/2020
azure-maps Open Source Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md
Title: Azure Maps community Open-source projects | Microsoft Azure Maps description: Open-source projects coordinated for the Microsoft Azure Maps platform. Last updated 12/07/2020
azure-maps Power Bi Visual Add Bar Chart Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bar-chart-layer.md
Title: Add a bar chart layer to an Azure Maps Power BI visual description: In this article, you will learn how to use the bar chart layer in an Azure Maps Power BI visual. Last updated 11/29/2021
azure-maps Power Bi Visual Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md
Title: Add a bubble layer to an Azure Maps Power BI visual description: In this article, you'll learn how to use the bubble layer in an Azure Maps Power BI visual. Last updated 11/14/2022
azure-maps Power Bi Visual Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-heat-map-layer.md
Title: Add a heat map layer to an Azure Maps Power BI visual description: In this article, you will learn how to use the heat map layer in an Azure Maps Power BI visual. Last updated 11/29/2021
azure-maps Power Bi Visual Add Pie Chart Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-pie-chart-layer.md
Title: Add a pie chart layer to an Azure Maps Power BI visual description: In this article, you will learn how to use the pie chart layer in an Azure Maps Power BI visual. Last updated 03/15/2022
azure-maps Power Bi Visual Add Reference Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md
Title: Add a reference layer to Azure Maps Power BI visual description: In this article, you will learn how to use the reference layer in Azure Maps Power BI visual. Last updated 11/29/2021
azure-maps Power Bi Visual Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-tile-layer.md
Title: Add a tile layer to an Azure Maps Power BI visual description: In this article, you will learn how to use the tile layer in Azure Maps Power BI visual. Last updated 11/29/2021
azure-maps Power Bi Visual Filled Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-filled-map.md
Title: Filled map in Azure Maps Power BI Visual description: In this article, you'll learn about the Filled map feature in Azure Maps Power BI Visual. Last updated 04/11/2022
azure-maps Power Bi Visual Geocode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-geocode.md
Title: Geocoding in Azure Maps Power BI visual description: In this article, you'll learn about geocoding in Azure Maps Power BI visual. Last updated 03/16/2022
azure-maps Power Bi Visual Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-get-started.md
Title: Get started with Azure Maps Power BI visual description: In this article, you'll learn how to use Azure Maps Power BI visual. Last updated 11/29/2021
azure-maps Power Bi Visual Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-manage-access.md
Title: Manage Azure Maps Power BI visual within your organization description: In this article, you will learn how to manage Azure Maps Power BI visual within your organization. Last updated 11/29/2021
azure-maps Power Bi Visual Show Real Time Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-show-real-time-traffic.md
Title: Show real-time traffic on an Azure Maps Power BI visual description: In this article, you will learn how to show real-time traffic on an Azure Maps Power BI visual. Last updated 11/29/2021
azure-maps Power Bi Visual Understanding Layers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-understanding-layers.md
Title: Layers in an Azure Maps Power BI visual description: In this article, you will learn about the different layers available in an Azure Maps Power BI visual.--++ Last updated 11/29/2021
azure-maps Quick Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md
Title: 'Quickstart: Create an Android app with Azure Maps' description: 'Quickstart: Learn how to create an Android app using the Azure Maps Android SDK.'--++ Last updated 09/22/2022
azure-maps Quick Demo Map App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-demo-map-app.md
Title: 'Quickstart: Interactive map search with Azure Maps' titleSuffix: Microsoft Azure Maps description: 'Quickstart: Learn how to create interactive, searchable maps. See how to create an Azure Maps account, get a primary key, and use the Web SDK to set up map applications'--++ Last updated 12/23/2021
azure-maps Quick Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-ios-app.md
Title: Create an iOS app description: Steps to create an Azure Maps account and your first iOS app.--++ Last updated 11/23/2021
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
Title: Release notes - Map Control description: Release notes for the Azure Maps Web SDK. --++ Last updated 1/31/2023
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md
Title: Render coverage description: Render coverage tables list the countries/regions where Azure Maps supports road tiles.--++ Last updated 03/23/2022
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Title: REST SDK Developer Guide description: Learn how to develop applications that incorporate Azure Maps by using the REST SDK developer how-to articles.--++ Last updated 10/31/2021
azure-maps Routing Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md
Title: Routing coverage description: Learn what level of coverage Azure Maps provides in various regions for routing, routing with traffic, and truck routing. --++ Last updated 10/21/2022
azure-maps Set Android Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-android-map-styles.md
Title: Set a map style in Android maps | Microsoft Azure Maps description: Learn two ways of setting the style of a map. See how to use the Azure Maps Android SDK in either the layout file or the activity class to adjust the style.--++ Last updated 02/26/2021
azure-maps Set Drawing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md
Title: Drawing tools module | Microsoft Azure Maps description: In this article, you'll learn how to set drawing options using the Microsoft Azure Maps Web SDK.--++ Last updated 01/29/2020
azure-maps Set Map Style Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-map-style-ios-sdk.md
Title: Set a map style in iOS maps | Microsoft Azure Maps description: Learn two ways of setting the style of a map by using the Azure Maps iOS SDK.--++ Last updated 10/22/2021
azure-maps Show Traffic Data Map Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/show-traffic-data-map-ios-sdk.md
Title: Show traffic data on iOS maps description: In this article, you'll learn how to display traffic data on a map using the Microsoft Azure Maps iOS SDK.--++ Last updated 11/18/2021
azure-maps Spatial Io Add Ogc Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-ogc-map-layer.md
Title: Add an Open Geospatial Consortium (OGC) map layer | Microsoft Azure Maps description: Learn how to overlay an OGC map layer on the map, and how to use the different options in the OgcMapLayer class.--++ Last updated 03/02/2020
azure-maps Spatial Io Add Simple Data Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-simple-data-layer.md
Title: Add a simple data layer | Microsoft Azure Maps description: Learn how to add a simple data layer using the Spatial IO module, provided by Azure Maps Web SDK.--++ Last updated 02/29/2020
azure-maps Spatial Io Connect Wfs Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-connect-wfs-service.md
Title: Connect to a Web Feature Service (WFS) service | Microsoft Azure Maps description: Learn how to connect to a WFS service, then query the WFS service using the Azure Maps web SDK and the Spatial IO module.--++ Last updated 03/03/2020
azure-maps Spatial Io Core Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-core-operations.md
Title: Core IO operations | Microsoft Azure Maps description: Learn how to efficiently read and write XML and delimited data using core libraries from the spatial IO module.--++ Last updated 03/03/2020
azure-maps Spatial Io Read Write Spatial Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-read-write-spatial-data.md
Title: Read and write spatial data | Microsoft Azure Maps description: Learn how to read and write data using the Spatial IO module, provided by Azure Maps Web SDK.--++ Last updated 03/01/2020
azure-maps Spatial Io Supported Data Format Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-supported-data-format-details.md
 Title: Supported data format details | Microsoft Azure Maps description: Learn how delimited spatial data is parsed in the spatial IO module.--++ Last updated 10/28/2021
azure-maps Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md
Title: Web SDK supported browsers | Microsoft Azure Maps description: Find out how to check whether the Azure Maps Web SDK supports a browser. View a list of supported browsers. Learn how to use map services with legacy browsers.--++ Last updated 03/25/2019
azure-maps Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md
Title: Localization support with Microsoft Azure Maps description: See which regions Azure Maps supports with services such as maps, search, routing, weather, and traffic incidents. Learn how to set up the View parameter.--++ Last updated 01/05/2022
azure-maps Supported Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md
Title: Supported built-in Azure Maps map styles description: Learn about the built-in map styles that Azure Maps supports, such as road, blank_accessible, satellite, satellite_road_labels, road_shaded_relief, and night.--++ Last updated 04/26/2020
azure-maps Traffic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/traffic-coverage.md
Title: Traffic coverage | Microsoft Azure Maps description: Learn about traffic coverage in Azure Maps. See whether information on traffic flow and incidents is available in various regions throughout the world.--++ Last updated 03/24/2022
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
Title: 'Tutorial: Use Microsoft Azure Maps to create store locator web applications' description: Tutorial on how to use Microsoft Azure Maps to create store locator web applications.--++ Last updated 01/03/2022
azure-maps Tutorial Creator Feature Stateset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-feature-stateset.md
Title: 'Tutorial: Create a feature stateset' description: The third tutorial on Microsoft Azure Maps Creator. How to create a feature stateset.--++ Last updated 01/28/2022
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
Title: 'Tutorial: Use Microsoft Azure Maps Creator to create indoor maps' description: Tutorial on how to use Microsoft Azure Maps Creator to create indoor maps--++ Last updated 01/28/2022
azure-maps Tutorial Creator Wfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-wfs.md
Title: 'Tutorial: Query datasets with WFS API' description: The second tutorial on Microsoft Azure Maps Creator. How to Query datasets with WFS API--++ Last updated 01/28/2022
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
Title: 'Tutorial: Route electric vehicles by using Azure Notebooks (Python) with Microsoft Azure Maps' description: Tutorial on how to route electric vehicles by using Microsoft Azure Maps routing APIs and Azure Notebooks--++ Last updated 04/26/2021
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md
Title: 'Tutorial: Create a geofence and track devices on a Microsoft Azure Map' description: Tutorial on how to set up a geofence. See how to track devices relative to the geofence by using the Azure Maps Spatial service--++ Last updated 02/28/2021
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md
Title: 'Tutorial: Implement IoT spatial analytics | Microsoft Azure Maps' description: Tutorial on how to Integrate IoT Hub with Microsoft Azure Maps service APIs--++ Last updated 10/28/2021
azure-maps Tutorial Load Geojson File Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-load-geojson-file-android.md
Title: 'Tutorial: Load GeoJSON data into Azure Maps Android SDK | Microsoft Azure Maps' description: Tutorial on how to load GeoJSON data file into the Azure Maps Android map SDK.--++ Last updated 12/10/2020
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md
Title: 'Tutorial: Find multiple routes by mode of travel' description: Tutorial on how to use Azure Maps to find routes for specific travel modes to points of interest. See how to display multiple routes on maps.--++ Last updated 12/29/2021
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md
Title: 'Tutorial: Find route to a location' description: Tutorial on how to find a route to a point of interest. See how to set address coordinates and query the Azure Maps Route service for directions to the point.--++ Last updated 12/28/2021
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
Title: 'Tutorial: Search for nearby locations on a map' description: Tutorial on how to search for points of interest on a map. See how to use the Azure Maps Web SDK to add search capabilities and interactive pop-up boxes to a map.--++ Last updated 12/23/2021
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
Title: Understanding Microsoft Azure Maps Transactions description: Learn about Microsoft Azure Maps Transactions--++ Last updated 06/23/2022
The following table summarizes the Azure Maps services that generate transaction
| Azure Maps Service | Billable | Transaction Calculation | Meter |
|--|-|-|-|
-| [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData, which are non-billable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
+| [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2)<br>[Data registry](/rest/api/maps/data-registry) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData, which are non-billable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
| [Elevation (DEM)](/rest/api/maps/elevation)| Yes| One request = 2 transactions<br> <ul><li>If requesting elevation for a single point then one request = 1 transaction</li></ul>| <ul><li>Location Insights Elevation (Gen2 pricing)</li><li>Standard S1 Elevation Service Transactions (Gen1 S1 pricing)</li></ul>|
| [Geolocation](/rest/api/maps/geolocation)| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>|
| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction, except microsoft.dem is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table](#azure-maps-creator). |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md
Title: Microsoft Azure Maps Weather services coverage description: Learn about Microsoft Azure Maps Weather services coverage--++ Last updated 11/08/2022
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md
Title: 'Tutorial: Join sensor data with weather forecast data by using Azure Notebooks(Python) with Microsoft Azure Maps' description: Tutorial on how to join sensor data with weather forecast data from Microsoft Azure Maps Weather services using Azure Notebooks(Python).--++ Last updated 10/28/2021
azure-maps Weather Services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-services-concepts.md
Title: Weather services concepts in Microsoft Azure Maps description: Learn about the concepts that apply to Microsoft Azure Maps Weather services.--++ Last updated 09/10/2020
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
Title: Azure Maps Web SDK best practices description: Learn tips & tricks to optimize your use of the Azure Maps Web SDK. --++ Last updated 11/29/2021
azure-maps Webgl Custom Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/webgl-custom-layer.md
Title: Add a custom WebGL layer to a map description: How to add a custom WebGL layer to a map using the Azure Maps Web SDK. --++ Last updated 10/17/2022
azure-maps Zoom Levels And Tile Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/zoom-levels-and-tile-grid.md
Title: Zoom levels and tile grid in Microsoft Azure Maps description: Learn how to set zoom levels in Azure Maps. See how to convert geographic coordinates into pixel coordinates, tile coordinates, and quadkeys. View code samples.--++ Last updated 07/14/2020
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 1/30/2023 Last updated : 2/21/2023
Azure Monitor Agent (AMA) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor for use by features, insights, and other services, such as [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). Azure Monitor Agent replaces all of Azure Monitor's legacy monitoring agents. This article provides an overview of Azure Monitor Agent's capabilities and supported use cases.
-Here's a short **introduction to Azure Monitor agent video**, which includes a quick demo of how to set up the agent from the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
+Here's a short **introduction to Azure Monitor agent video**, which includes a quick demo of how to set up the agent from the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
+
+## Benefits
+Using Azure Monitor Agent, you get the following immediate benefits:
+
+- **Cost savings** by [using data collection rules](data-collection-rule-azure-monitor-agent.md):
+ - Enables targeted and granular data collection for a machine or subset(s) of machines, as compared to the "all or nothing" approach of legacy agents.
+ - Allows filtering rules and data transformations to reduce the overall data volume being uploaded, thus lowering ingestion and storage costs significantly.
+- **Simpler management** including efficient troubleshooting:
+ - Supports data uploads to multiple destinations (multiple Log Analytics workspaces, that is, *multihoming*, on Windows and Linux), including cross-region and cross-tenant data collection (using Azure Lighthouse)
+ - Centralized agent configuration "in the cloud" for enterprise scale throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time.
+ - Configuration changes are rolled out to all agents automatically, without requiring a client-side deployment
+ - Greater transparency and control of more capabilities and services, such as Microsoft Sentinel, Defender for Cloud, and VM Insights.
+- **Security and Performance**
+ - Enhanced security through Managed Identity and Azure Active Directory (Azure AD) tokens (for clients)
+ - Event throughput that is 25% higher than the legacy Log Analytics (MMA/OMS) agents.
+- **A single agent** that serves all data collection needs across servers and client devices running Windows 10 or 11. A single agent is the end goal, although Azure Monitor Agent is currently converging with the Log Analytics agents.
## Consolidating legacy agents
The tables below provide a comparison of Azure Monitor Agent with the legacy the
<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
-### Supported operating systems
+## Supported operating systems
The following tables list the operating systems that Azure Monitor Agent and the legacy agents support. All operating systems are assumed to be x64. x86 isn't supported for any operating system. View [supported operating systems for Azure Arc Connected Machine agent](../../azure-arc/servers/prerequisites.md#supported-operating-systems), which is a prerequisite to run Azure Monitor agent on physical servers and virtual machines hosted outside of Azure (that is, on-premises) or in other clouds.
-#### Windows
+### Windows
| Operating system | Azure Monitor agent | Log Analytics agent (legacy) | Diagnostics extension |
|:|::|::|::|
View [supported operating systems for Azure Arc Connected Machine agent](../../a
<sup>2</sup> Using the Azure Monitor agent [client installer](./azure-monitor-agent-windows-client.md).<br> <sup>3</sup> Also supported on Arm64-based machines.
-#### Linux
+### Linux
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent (legacy) <sup>1</sup> | Diagnostics extension <sup>2</sup>|
|:|::|::|::|
| AlmaLinux 8 | X<sup>3</sup> | X | |
-| Amazon Linux 2017.09 | | X | |
-| Amazon Linux 2 | | X | |
+| Amazon Linux 2017.09 | | X | |
+| Amazon Linux 2 | X | X | |
| CentOS Linux 8 | X | X | |
| CentOS Linux 7 | X<sup>3</sup> | X | X |
| CentOS Linux 6 | | X | |
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Oracle Linux 7 | X | X | X |
| Oracle Linux 6 | | X | |
| Oracle Linux 6.4+ | | X | X |
+| Red Hat Enterprise Linux Server 9+ | X | | |
| Red Hat Enterprise Linux Server 8.6 | X<sup>3</sup> | X | |
| Red Hat Enterprise Linux Server 8+ | X | X | |
| Red Hat Enterprise Linux Server 7 | X | X | X |
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
For the [official definitions](https://github.com/Microsoft/ApplicationInsights-
```
Alternatively, *cloud role instance* can be helpful for scenarios where a cloud role name tells you the problem is somewhere in your web front end. But you might be running multiple load-balanced servers across your web front end. Being able to drill in a layer deeper via Kusto queries and knowing if the issue is affecting all web front-end servers or instances or just one can be important.
-intelligent view
-A scenario when you might want to override the value for cloud role instance could be if your app is running in a containerized environment. In this case, just knowing the individual server might not be enough information to locate a specific issue.
+
+Intelligent view
+
+A scenario when you might want to override the value for cloud role instance could be if your app is running in a containerized environment. In this case, just knowing the individual server might not be enough information to locate a specific issue.
For more information about how to override the cloud role name property with telemetry initializers, see [Add properties: ITelemetryInitializer](api-filtering-sampling.md#addmodify-properties-itelemetryinitializer).
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
dependencies
In the Log Analytics query view, `timestamp` represents the moment the TrackDependency() call was initiated, which occurs immediately after the dependency call response is received. To calculate the time when the dependency call began, you would take `timestamp` and subtract the recorded `duration` of the dependency call.
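To make that arithmetic concrete, here's a minimal Python sketch; the record shape and field names are illustrative stand-ins for a dependency row, not an actual SDK type:

```python
from datetime import datetime, timedelta, timezone

# Illustrative dependency record: 'timestamp' marks when TrackDependency() fired
# (immediately after the response was received); 'duration_ms' is the recorded
# duration of the dependency call in milliseconds.
record = {
    "timestamp": datetime(2023, 2, 28, 12, 0, 5, tzinfo=timezone.utc),
    "duration_ms": 350.0,
}

# The dependency call began at the recorded timestamp minus the duration.
start = record["timestamp"] - timedelta(milliseconds=record["duration_ms"])
print(start)  # 2023-02-28 12:00:04.650000+00:00
```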
+### Does dependency tracking in Application Insights include logging response bodies?
+
+Dependency tracking in Application Insights doesn't include logging response bodies, because doing so would generate too much telemetry for most applications.
+
## Open-source SDK
Like every Application Insights SDK, the dependency collection module is also open source. Read and contribute to the code or report issues at [the official GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet).
A list of the latest [currently supported modules](https://github.com/microsoft/
* Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md).
* [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency)
* See [data model](./data-model.md) for Application Insights types and data model.
-* Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
+* Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
For a reference, you can find the OpenCensus data model on [this GitHub page](ht
OpenCensus Python correlates W3C Trace-Context headers from incoming requests to the spans that are generated from the requests themselves. OpenCensus will correlate automatically with integrations for these popular web application frameworks: Flask, Django, and Pyramid. You just need to populate the W3C Trace-Context headers with the [correct format](https://www.w3.org/TR/trace-context/#trace-context-http-headers-format) and send them with the request.
-**Sample Flask application**
+Explore this sample Flask application. Install Flask, OpenCensus, and the extensions for Flask and Azure.
+
+```shell
+
+pip install flask opencensus opencensus-ext-flask opencensus-ext-azure
+
+```
+
+You will need to add your Application Insights connection string to an environment variable.
+
+```shell
+APPLICATIONINSIGHTS_CONNECTION_STRING=<appinsights-connection-string>
+```
+
+**Sample Flask Application**
```python
from flask import Flask
from opencensus.trace.samplers import ProbabilitySampler
app = Flask(__name__)
middleware = FlaskMiddleware(
    app,
- exporter=AzureExporter(),
+ exporter=AzureExporter(
+        connection_string='<appinsights-connection-string>',  # or set environment variable APPLICATIONINSIGHTS_CONNECTION_STRING
+ ),
    sampler=ProbabilitySampler(rate=1.0),
)
```
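To see the correlation end to end, a caller can send a request that carries a W3C Trace-Context header. A minimal sketch, assuming the Flask app above is running locally on port 5000; the endpoint and the IDs (in the `version-traceid-spanid-traceflags` format) are example values:

```python
import requests

# Example traceparent values; a real caller would propagate its own IDs.
headers = {
    "traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
}

# Hypothetical local endpoint served by the Flask app above.
response = requests.get("http://localhost:5000/", headers=headers)
print(response.status_code)
```

OpenCensus reads the incoming header and parents the request's spans under the supplied trace ID, so the caller and the Flask app share a single trace.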
You can export the log data by using `AzureLogHandler`. For more information, se
We can also pass trace information from one component to another for proper correlation. For example, consider a scenario where there are two components, `module1` and `module2`. Module1 calls functions in Module2. To get logs from both `module1` and `module2` in a single trace, we can use the following approach:
+
```python
# module1.py
import logging
from opencensus.trace import config_integration
from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer
-from module2 import function_1
+from module_2 import function_1
-config_integration.trace_integrations(['logging'])
-logging.basicConfig(format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')
+config_integration.trace_integrations(["logging"])
+logging.basicConfig(
+ format="%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s"
+)
tracer = Tracer(sampler=AlwaysOnSampler())
logger = logging.getLogger(__name__)
-logger.warning('Before the span')
-with tracer.span(name='hello'):
- logger.warning('In the span')
- function_1(tracer)
-logger.warning('After the span')
+logger.warning("Before the span")
-# module2.py
+with tracer.span(name="hello"):
+ logger.warning("In the span")
+ function_1(logger, tracer)
+logger.warning("After the span")
+```
+```python
+# module_2.py
import logging

from opencensus.trace import config_integration
from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer
-config_integration.trace_integrations(['logging'])
-logging.basicConfig(format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')
+config_integration.trace_integrations(["logging"])
+logging.basicConfig(
+ format="%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s"
+)
+logger = logging.getLogger(__name__)
tracer = Tracer(sampler=AlwaysOnSampler())
-def function_1(parent_tracer=None):
+
+def function_1(logger=logger, parent_tracer=None):
    if parent_tracer is not None:
        tracer = Tracer(
- span_context=parent_tracer.span_context,
- sampler=AlwaysOnSampler(),
- )
+ span_context=parent_tracer.span_context,
+ sampler=AlwaysOnSampler(),
+ )
    else:
        tracer = Tracer(sampler=AlwaysOnSampler())
    with tracer.span("function_1"):
        logger.info("In function_1")
+
```
## Telemetry correlation in .NET
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
You can [switch off some of the data by editing ApplicationInsights.config][conf
No. Data is read-only and can only be deleted via the purge functionality. To learn more, see [Guidance for personal data stored in Log Analytics and Application Insights](../logs/personal-data-mgmt.md#delete).
-## Credits
-
-This product includes GeoLite2 data created by [MaxMind](https://www.maxmind.com).
-
<!--Link references-->
[api]: ./api-custom-events-metrics.md
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Traces | Logs
The following articles provide language-by-language guidance for enabling and configuring Microsoft's OpenTelemetry-based offerings. Each explains the available functionality and limitations so you can determine whether OpenTelemetry is right for your project.
-- [.NET](opentelemetry-enable.md)
+- [.NET](opentelemetry-enable.md?tabs=net)
- [Java](opentelemetry-enable.md?tabs=java)
-- [JavaScript](opentelemetry-enable.md)
-- [Python](opentelemetry-enable.md)
+- [JavaScript](opentelemetry-enable.md?tabs=nodejs)
+- [Python](opentelemetry-enable.md?tabs=python)
azure-monitor Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md
To go back to the overview experience, select the **Overview** button.
Currently, there's a limit of 30 days of data displayed in a dashboard. If you select a time filter beyond 30 days, or if you select **Configure tile settings** and set a custom time range in excess of 30 days, your dashboard won't display beyond 30 days of data. This is the case even with the default data retention of 90 days. There's currently no workaround for this behavior.
-The default **Application Dashboard** is created during Application Insights resource creation. If you move or rename your Application Insights instance, queries on the dashboard will fail with "Resource not found" errors because the dashboard queries rely on the original resource URI. Delete the default dashboard. On the Application Insights **Overview** resource menu, select **Application Dashboard** again. The default dashboard will be re-created with the new resource name. Make other custom edits to the dashboard as needed.
+The default **Application Dashboard** is created on demand the first time you select the Application Dashboard button. If you move or rename your Application Insights instance, queries on the dashboard will fail with "Resource not found" errors because the dashboard queries rely on the original resource URI. Delete the default dashboard. On the Application Insights **Overview** resource menu, select **Application Dashboard** again. The default dashboard will be re-created with the new resource name. Make other custom edits to the dashboard as needed.
## Next steps
- [Funnels](./usage-funnels.md)
- [Retention](./usage-retention.md)
-- [User flows](./usage-flows.md)
+- [User flows](./usage-flows.md)
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
Statsbeat supports EU Data Boundary for Application Insights resources in the fo
| Sweden | Sweden Central |
| Switzerland | Switzerland North |
| Switzerland | Switzerland West |
-
+| United Kingdom | United Kingdom South |
+| United Kingdom | United Kingdom West |
#### [Node](#tab/eu-node)
Statsbeat supports EU Data Boundary for Application Insights resources in the fo
| Norway | Norway West |
| Sweden | Sweden Central |
| Switzerland | Switzerland North |
-| Switzerland | Switzerland West |
+| Switzerland | Switzerland West |
+| United Kingdom | United Kingdom South |
+| United Kingdom | United Kingdom West |
#### [Python](#tab/eu-python)
Statsbeat supports EU Data Boundary for Application Insights resources in the fo
| Norway | Norway West |
| Sweden | Sweden Central |
| Switzerland | Switzerland North |
-| Switzerland | Switzerland West |
+| Switzerland | Switzerland West |
+| United Kingdom | United Kingdom South |
+| United Kingdom | United Kingdom West |
azure-monitor Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/availability-zones.md
Title: Availability zones in Azure Monitor
-description: Availability zones in Azure Monitor
+description: Learn about the data and service resilience benefits Azure Monitor availability zones provide to protect against datacenter failure.
Previously updated : 08/18/2021 Last updated : 02/21/2023 --
-# Availability zones in Azure Monitor
-
-[Azure availability zones](../../availability-zones/az-overview.md) protect your applications and data from datacenter failures and can provide resilience for Azure Monitor features that rely on a Log Analytics workspace. When a workspace is linked to an availability-zone-enabled dedicated cluster, it remains active and operational even if a specific datacenter is malfunctioning or even down, by relying on the availability of other zones in the region. You don't need to do anything in order to switch to an alternative zone, or even be aware of the incident.
--
-## Regions
-Azure Monitor currently supports the following regions:
-- East US 2
-- West US 2
-- Canada Central
-- France Central
-- Japan East
-
-## Dedicated clusters
-Azure Monitor support for availability zones requires a Log Analytics workspace linked to an [Azure Monitor dedicated cluster](logs-dedicated-clusters.md). Dedicated Clusters are a deployment option that enables advanced capabilities for Azure Monitor Logs including availability zones.
-
-Not all dedicated clusters can use availability zones. Dedicated clusters created after mid-October 2020 can be set to support availability zones when they're created. New clusters created after that date default to be enabled for availability zones in regions where Azure Monitor supports them.
+#customer-intent: As an IT manager, I want to understand the data and service resilience benefits Azure Monitor availability zones provide so that I can ensure my data and services are sufficiently protected in the event of datacenter failure.
+
+# Enhance data and service resilience in Azure Monitor Logs with availability zones
-> [!NOTE]
-> Application Insights resources can use availability zones only if they're workspace-based and the workspace uses a dedicated cluster. Classic Application Insights resources can't use availability zones.
+[Azure availability zones](../../availability-zones/az-overview.md) protect applications and data from datacenter failures and can enhance the resilience of Azure Monitor features that rely on a Log Analytics workspace. This article describes the data and service resilience benefits Azure Monitor availability zones provide by default to [dedicated clusters](logs-dedicated-clusters.md) in supported regions.
+## Prerequisites
-## Determine current cluster for your workspace
-To determine the current workspace link status for your workspace, use [CLI, PowerShell or REST](logs-dedicated-clusters.md#check-workspace-link-status) to retrieve the [cluster details](logs-dedicated-clusters.md#check-cluster-provisioning-status). If the cluster uses an availability zone, then it will have a property called `isAvailabilityZonesEnabled` with a value of `true`. Once a cluster is created, this property can't be altered.
+- A Log Analytics workspace linked to a [dedicated cluster](logs-dedicated-clusters.md).
-## Create dedicated cluster with availability zone
-Move your workspace to an availability zone by [creating a new dedicated cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster) in a region that supports availability zones. The cluster will automatically be enabled for availability zones. Then [link your workspace to the new cluster](logs-dedicated-clusters.md#link-a-workspace-to-a-cluster).
+ > [!NOTE]
+ > Application Insights resources can use availability zones only if they're workspace-based and the workspace uses a dedicated cluster. Classic Application Insights resources can't use availability zones.
+
+## Data resilience - supported regions
-> [!IMPORTANT]
-> Availability zone is defined on the cluster at creation time and can't be modified.
+Availability zones protect your data from datacenter failures by relying on datacenters in different physical locations, equipped with independent power, cooling, and networking.
-Transitioning to a new cluster can be a gradual process. Don't remove the previous cluster until it has been purged of any data. For example, if your workspace retention is set 60 days, you may want to keep your old cluster running for that period before removing it. To learn more, see [Migrate Log Analytics workspaces to availability zone support](../../availability-zones/migrate-monitor-log-analytics.md).
+Azure Monitor currently supports data resilience for availability-zone-enabled dedicated clusters in these regions:
-Any queries against your workspace will query both clusters to provide you with a single, unified result set. This allows Azure Monitor experiences, such as workbooks and dashboards, to keep getting the full result set, based on data from both clusters.
+ | Americas | Europe | Middle East | Africa | Asia Pacific |
+ ||||||
+ | Brazil South | France Central | UAE North | South Africa North | Australia East |
+ | Canada Central | Germany West Central | | | Central India |
+ | Central US | North Europe | | | Japan East |
+ | East US | Norway East | | | Korea Central |
+ | East US 2 | UK South | | | Southeast Asia |
+ | South Central US | West Europe | | | East Asia |
+ | US Gov Virginia | Sweden Central | | | China North 3 |
+ | West US 2 | Switzerland North | | | |
+ | West US 3 | | | | |
-## Billing
-There's a [cost for using a dedicated cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster). It requires a daily capacity reservation of 500 GB.
+## Service resilience - supported regions
-If you already have a dedicated cluster and choose to retain it to access its data, you'll be charged for both dedicated clusters. Starting August 4, 2021, the minimum required capacity reservation for dedicated clusters is reduced from 1000 GB/Daily to 500 GB/Daily, so we'd recommend applying that minimum to your old cluster to reduce charges.
+When available in your region, Azure Monitor availability zones enhance your Azure Monitor service resilience automatically. Physical separation and independent infrastructure makes interruption of service availability in your Log Analytics workspace far less likely because the Log Analytics workspace can rely on resources from a different zone.
-The new cluster isn't billed during its first day to avoid double billing during configuration. On the date of migration, you'll be billed only for logs ingested before the migration completes.
+Azure Monitor currently supports service resilience for availability-zone-enabled dedicated clusters in these regions:
+- East US 2
+- West US 2
+- Canada Central
+- France Central
+- Japan East
## Next steps
-- See [Using queries in Azure Monitor Log Analytics](queries.md) to see how users interact with query packs in Log Analytics.
-- See [Migrate Log Analytics workspaces to availability zone support](../../availability-zones/migrate-monitor-log-analytics.md).
+Learn more about how to:
+- [Set up a dedicated cluster](logs-dedicated-clusters.md).
+- [Migrate Log Analytics workspaces to availability zone support](../../availability-zones/migrate-monitor-log-analytics.md).
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
In some scenarios, combining this data can result in cost savings. Typically, th
- [LinuxAuditLog](/azure/azure-monitor/reference/tables/linuxauditlog)
- [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent)
- [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)
-- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled. See [What data types are included in the 500-MB data daily allowance?](../../defender-for-cloud/enhanced-security-features-overview.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance).
+- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled. See [What data types are included in the 500-MB data daily allowance?](../../defender-for-cloud/plan-defender-for-servers-data-workspace.md#log-analytics-pricing-faq).
The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Last updated 01/01/2023
-# Azure Monitor Logs Dedicated Clusters
+# Create and manage a dedicated cluster in Azure Monitor Logs
-Log Analytics Dedicated clusters in Azure Monitor enable advanced capabilities, and higher query utilization, provided to linked Log Analytics workspaces. Clusters require a minimum ingestion commitment of 500 GB per day. You can link and unlink workspaces from a dedicated cluster without any data loss or service interruption.
+Linking a Log Analytics workspace to a dedicated cluster in Azure Monitor provides advanced capabilities and higher query utilization. Clusters require a minimum ingestion commitment of 500 GB per day. You can link and unlink workspaces from a dedicated cluster without any data loss or service interruption.
+## Advanced capabilities
Capabilities that require dedicated clusters:
-- **[Customer-managed Keys](../logs/customer-managed-keys.md)** - Encrypt the cluster data using keys that are provided and controlled by the customer.
-- **[Lockbox](../logs/customer-managed-keys.md#customer-lockbox-preview)** - Control Microsoft support engineers access requests to your data.
-- **[Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption)** - Protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data.
+- **[Customer-managed keys](../logs/customer-managed-keys.md)** - Encrypt cluster data using keys that you provide and control.
+- **[Lockbox](../logs/customer-managed-keys.md#customer-lockbox-preview)** - Control Microsoft support engineer access requests to your data.
+- **[Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption)** - Protect against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the extra layer of encryption continues to protect your data.
- **[Cross-query optimization](../logs/cross-workspace-query.md)** - Cross-workspace queries run faster when workspaces are on the same cluster.
-- **Cost optimization** - Link your workspaces in same region to cluster to get commitment tier discount to all workspaces, even to ones with low ingestion that aren't eligible for commitment tier discount.
-- **[Availability zones](../../availability-zones/az-overview.md)** - Protect your data from datacenter failures with zones being separated physically by locations and equipped with independent power, cooling, and networking. The physical separation in zones and independent infrastructure makes an incident far less likely since the workspace can rely on the resources from any of the zones. [Azure Monitor availability zones](./availability-zones.md) covers broader parts of the service and when available in your region, extends your Azure Monitor resiliency automatically. Dedicated clusters are created as Availability zones enabled (`isAvailabilityZonesEnabled`: 'true') by default in supported regions. This setting can't be altered once created, and can be verified in cluster's property `isAvailabilityZonesEnabled`. Availability zones clusters are created in the following regions currently, and more regions are added periodically.
+- **Cost optimization** - Link your workspaces in the same region to a cluster to get the commitment tier discount for all workspaces, even ones with low ingestion that aren't eligible for a commitment tier discount.
+- **[Availability zones](../../availability-zones/az-overview.md)** - Protect your data from datacenter failures by relying on datacenters in different physical locations, equipped with independent power, cooling, and networking. The physical separation in zones and independent infrastructure makes an incident far less likely since the workspace can rely on the resources from any of the zones. [Azure Monitor availability zones](./availability-zones.md) covers broader parts of the service and when available in your region, extends your Azure Monitor resilience automatically. Azure Monitor creates dedicated clusters as availability-zone-enabled (`isAvailabilityZonesEnabled`: 'true') by default in supported regions. You can't alter this setting after creating the cluster.
- | Americas | Europe | Middle East | Africa | Asia Pacific |
- ||||||
- | Brazil South | France Central | UAE North | South Africa North | Australia East |
- | Canada Central | Germany West Central | | | Central India |
- | Central US | North Europe | | | Japan East |
- | East US | Norway East | | | Korea Central |
- | East US 2 | UK South | | | Southeast Asia |
- | South Central US | West Europe | | | East Asia |
- | US Gov Virginia | Sweden Central | | | China North 3 |
- | West US 2 | Switzerland North | | | |
- | West US 3 | | | | |
+ Availability zones aren't currently supported in all regions. New clusters you create in supported regions have availability zones enabled by default.
+## Cluster pricing model
+Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for dedicated cluster pricing.
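Put another way, the commitment tier price covers ingestion up to the tier level, and anything above it is charged at that tier's per-GB rate. A schematic sketch of the calculation; the rates are parameters here, not real prices (see the pricing page for actual numbers):

```python
def daily_charge(ingested_gb: float, tier_gb: float,
                 tier_price: float, per_gb_rate: float) -> float:
    """Commitment tier billing sketch: the tier price covers up to tier_gb
    per day; overage above that is billed at the tier's per-GB rate."""
    overage_gb = max(0.0, ingested_gb - tier_gb)
    return tier_price + overage_gb * per_gb_rate
```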
+## Required permissions
-## Cluster management
-
-Dedicated clusters are managed with an Azure resource that represents Azure Monitor Log clusters. Operations are performed programmatically using [CLI](/cli/azure/monitor/log-analytics/cluster), [PowerShell](/powershell/module/az.operationalinsights) or the [REST](/rest/api/loganalytics/clusters).
-
-Once a cluster is created, workspaces can be linked to it, and new ingested data to them is stored on the cluster. Workspaces can be unlinked from a cluster at any time and new data then stored on shared Log Analytics clusters. The link and unlink operation doesn't affect your queries and access to data before, and after the operation. The Cluster and workspaces must be in the same region.
-
-Operations on the cluster level require Microsoft.OperationalInsights/clusters/write action permission. Linking workspaces to a cluster requires both Microsoft.OperationalInsights/clusters/write and Microsoft.OperationalInsights/workspaces/write actions. Permission could be granted by the Owner or Contributor that have `*/write` action, or by the Log Analytics Contributor role that have `Microsoft.OperationalInsights/*` action. For more information on Log Analytics permissions, see [Manage access to log data and workspaces in Azure Monitor](./manage-access.md).
+To perform cluster-related actions, you need these permissions:
+| Action | Permissions or role needed |
+|-|-|
| Create a dedicated cluster |`Microsoft.Resources/deployments/*` and `Microsoft.OperationalInsights/clusters/write`|
+| Change cluster properties |`Microsoft.OperationalInsights/clusters/write`|
+| Link workspaces to a cluster | `Microsoft.OperationalInsights/clusters/write` and `Microsoft.OperationalInsights/workspaces/write`|
+| Grant the required permissions | Owner or Contributor role that has `*/write` permissions, or a Log Analytics Contributor role that has `Microsoft.OperationalInsights/*` permissions.|
-## Cluster pricing model
-Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level will be billed at effective per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters.
+For more information on Log Analytics permissions, see [Manage access to log data and workspaces in Azure Monitor](./manage-access.md).
## Create a dedicated cluster
Provide the following properties when creating a new dedicated cluster:
- **ClusterName**: Must be unique for the resource group.
-- **ResourceGroupName**: You should use a central IT resource group because clusters are usually shared by many teams in the organization. For more design considerations, review Design a Log Analytics workspace configuration(../logs/workspace-design.md).
+- **ResourceGroupName**: Use a central IT resource group because many teams in the organization usually share clusters. For more design considerations, review [Design a Log Analytics workspace configuration](../logs/workspace-design.md).
- **Location**
-- **SkuCapacity**: The Commitment Tier (formerly called capacity reservations) can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicate clusters](./cost-logs.md#dedicated-clusters).
-- **Managed identity**: Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): System-assigned and User-assigned managed identity, while a single identity can be defined in a cluster depending on your scenario.
- - System-assigned managed identity is simpler and being generated automatically with the cluster creation when identity `type` is set to "*SystemAssigned*". This identity can be used later to grant storage access to your Key Vault for wrap and unwrap operations.
+- **SkuCapacity**: You can set the commitment tier (formerly called capacity reservations) to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicate clusters](./cost-logs.md#dedicated-clusters).
+- **Managed identity**: Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types):
+ - System-assigned managed identity - Generated automatically with the cluster creation when identity `type` is set to "*SystemAssigned*". This identity can be used later to grant storage access to your Key Vault for wrap and unwrap operations.
*Identity in Cluster's REST Call*
```json
Provide the following properties when creating new dedicated cluster:
}
}
```
- - User-assigned managed identity lets you configure Customer-managed key at cluster creation, when granting it permissions in your Key Vault before cluster creation.
+ - User-assigned managed identity - Lets you configure a customer-managed key at cluster creation, when granting it permissions in your Key Vault before cluster creation.
*Identity in Cluster's REST Call*
```json
Provide the following properties when creating new dedicated cluster:
}
```
-The user account that creates the clusters must have the standard Azure resource creation permission: `Microsoft.Resources/deployments/*` and cluster write permission `Microsoft.OperationalInsights/clusters/write` by having in their role assignments this specific action or `Microsoft.OperationalInsights/*` or `*/write`.
-
-After you create your cluster resource, you can edit additional properties such as *sku*, *keyVaultProperties, or *billingType*. See more details below.
+After you create your cluster resource, you can edit properties such as *sku*, *keyVaultProperties*, or *billingType*. See more details below.
-You can have up to five active clusters per subscription per region. If the cluster is deleted, it is still reserved for 14 days. You can have up to seven clusters per subscription and region, five active, plus two deleted in past 14 days.
+You can have up to five active clusters per subscription per region. If the cluster is deleted, it's still reserved for 14 days. You can have up to seven clusters per subscription and region, five active, plus two deleted in past 14 days.
> [!NOTE] > Cluster creation triggers resource allocation and provisioning. This operation can take a few hours to complete.
Send a GET request on the cluster resource and look at the *provisioningState* value.
} ```
-The *principalId* GUID is generated by the managed identity service at cluster creation.
+The managed identity service generates the *principalId* GUID when you create the cluster.
Authorization: Bearer <token>
## Change cluster properties
-After you create your cluster resource and it's fully provisioned, you can edit additional properties using CLI, PowerShell or REST API. The additional properties that can be set after the cluster has been provisioned include the following:
+After you create your cluster resource and it's fully provisioned, you can edit cluster properties using CLI, PowerShell or REST API. Properties you can set after the cluster is provisioned include:
- **keyVaultProperties** - Contains the key in Azure Key Vault with the following parameters: *KeyVaultUri*, *KeyName*, *KeyVersion*. See [Update cluster with Key identifier details](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details). - **Identity** - The identity used to authenticate to your Key Vault. This can be System-assigned or User-assigned.
The same as for 'clusters in a resource group', but in subscription scope.
## Update commitment tier in cluster
-When the data volume to your linked workspaces change over time and you want to update the Commitment Tier level appropriately. The tier is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day. Note that you don't have to provide the full REST request body but should include the sku.
+When the data volume to your linked workspaces changes over time, you can update the Commitment Tier level appropriately. The tier is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day. You don't have to provide the full REST request body, but you must include the sku.
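+
+For illustration only, here's a minimal Python sketch of that update as a direct REST call with the `requests` package. The subscription, resource group, cluster name, bearer token, and the `2021-06-01` API version are placeholder assumptions, not values from this article:
+
+```python
+import requests
+
+# Placeholder values: substitute your own subscription, resource group,
+# cluster name, and a valid Azure Resource Manager bearer token.
+subscription_id = "<subscription-id>"
+resource_group = "<resource-group-name>"
+cluster_name = "<cluster-name>"
+token = "<bearer-token>"
+
+url = (
+    f"https://management.azure.com/subscriptions/{subscription_id}"
+    f"/resourceGroups/{resource_group}/providers/Microsoft.OperationalInsights"
+    f"/clusters/{cluster_name}?api-version=2021-06-01"
+)
+
+# Only the sku needs to be sent in the body when updating the commitment tier.
+body = {"sku": {"name": "capacityReservation", "capacity": 2000}}
+
+response = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
+print(response.status_code, response.json())
+```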
#### [CLI](#tab/cli)
Authorization: Bearer <token>
- A maximum of five active clusters can be created in each region and subscription. -- A maximum of seven cluster allowed per subscription and region, five active, plus two deleted in past 14 days.
+- A maximum of seven clusters are allowed per subscription and region: five active, plus two deleted in the past 14 days.
- A maximum of 1,000 Log Analytics workspaces can be linked to a cluster.
Authorization: Bearer <token>
- Moving a cluster to another resource group or subscription isn't currently supported. -- Cluster update should not include both identity and key identifier details in the same operation. In case you need to update both, the update should be in two consecutive operations.
+- A cluster update shouldn't include both identity and key identifier details in the same operation. If you need to update both, perform the updates in two consecutive operations.
- Lockbox isn't currently available in China. - [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) is configured automatically for clusters created from October 2020 in supported regions. You can verify if your cluster is configured for double encryption by sending a GET request on the cluster and observing that the `isDoubleEncryptionEnabled` value is `true` for clusters with Double encryption enabled. - If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters.", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body.
- - Double encryption setting can't can not be changed after the cluster has been created.
+ - Double encryption setting can't be changed after the cluster has been created.
- Deleting a linked workspace is permitted while it's linked to a cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to its previous state and remains linked to the cluster.
Authorization: Bearer <token>
- Some operations are long and can take a while to complete. These are *cluster create*, *cluster key update* and *cluster delete*. You can check the operation status by sending GET request to cluster or workspace and observe the response. For example, unlinked workspace won't have the *clusterResourceId* under *features*. -- Workspace link to cluster will fail if it is linked to another cluster.
+- If you attempt to link a Log Analytics workspace that's already linked to another cluster, the operation will fail.
## Error messages ### Cluster Create -- 400--Cluster name is not valid. Cluster name can contain characters a-z, A-Z, 0-9 and length of 3-63.
+- 400--Cluster name isn't valid. The cluster name can contain the characters a-z, A-Z, and 0-9, and must be 3-63 characters long.
- 400--The body of the request is null or in bad format. - 400--SKU name is invalid. Set SKU name to capacityReservation.-- 400--Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.
+- 400--Capacity was provided but SKU isn't capacityReservation. Set SKU name to capacityReservation.
- 400--Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day. - 400--Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update. - 400--No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day. - 400--Identity is null or empty. Set Identity with systemAssigned type. - 400--KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.-- 400--Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
+- 400--Operation can't be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
### Cluster Update - 400--Cluster is in deleting state. Async operation is in progress. Cluster must complete its operation before any update operation is performed.-- 400--KeyVaultProperties is not empty but has a bad format. See [key identifier update](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details).
+- 400--KeyVaultProperties isn't empty but has a bad format. See [key identifier update](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details).
- 400--Failed to validate key in Key Vault. Could be due to lack of permissions or when key doesn't exist. Verify that you [set key and access policy](../logs/customer-managed-keys.md#grant-key-vault-permissions) in Key Vault.-- 400--Key is not recoverable. Key Vault must be set to Soft-delete and Purge-protection. See [Key Vault documentation](../../key-vault/general/soft-delete-overview.md)-- 400--Operation cannot be executed now. Wait for the Async operation to complete and try again.
+- 400--Key isn't recoverable. Key Vault must be set to Soft-delete and Purge-protection. See [Key Vault documentation](../../key-vault/general/soft-delete-overview.md)
+- 400--Operation can't be executed now. Wait for the Async operation to complete and try again.
- 400--Cluster is in deleting state. Wait for the Async operation to complete and try again. ### Cluster Get
+- 404--Cluster not found; the cluster might have been deleted. If you try to create a cluster with that name and get a conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it, or use another name to create a new cluster.
### Cluster Delete
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
For general Profiler troubleshooting, refer to the [Profiler Troubleshoot documentation](profiler-troubleshooting.md).
-For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](../app/snapshot-debugger-troubleshoot.md).
+For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot).
## Frequently asked questions
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-collector-release-notes.md
A point release to address user-reported bugs.
### Bug fixes - Fix [Hide the IDMS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17) - Fix [ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/19)
-<br>Snapshot Collector used via SDK is not supported when Interop feature is enabled. [See more not supported scenarios.](./snapshot-debugger-troubleshoot.md#not-supported-scenarios)
+<br>Snapshot Collector used via the SDK isn't supported when the Interop feature is enabled. [See more unsupported scenarios.](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot#not-supported-scenarios)
## [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2) A point release to address a user-reported bug.
azure-monitor Snapshot Debugger App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md
Below you can find scenarios where Snapshot Collector isn't supported:
* Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance. * See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
-* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot).
[Enablement UI]: ./media/snapshot-debugger/enablement-ui.png [snapshot-debugger-app-setting]:./media/snapshot-debugger/snapshot-debugger-app-setting.png
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-function-app.md
We recommend that you have Snapshot Debugger enabled on all your apps to ease di
* Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance. * [View snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal. * Customize Snapshot Debugger configuration based on your use-case on your Function app. For more information, see [snapshot configuration in host.json](../../azure-functions/functions-host-json.md#applicationinsightssnapshotconfiguration).
-* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot).
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot.md
- Title: Troubleshoot Azure Application Insights Snapshot Debugger
-description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Snapshot Debugger.
---
-reviewer: cweining
- Previously updated : 08/18/2022---
-# <a id="troubleshooting"></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots
-
-If you enabled Application Insights Snapshot Debugger for your application, but aren't seeing snapshots for exceptions, you can use these instructions to troubleshoot.
-
-There can be many different reasons why snapshots aren't generated. You can start by running the snapshot health check to identify some of the possible common causes.
-
-## Not Supported Scenarios
-
-Below you can find scenarios where Snapshot Collector isn't supported:
-
-|Scenario | Side Effects | Recommendation |
-||--|-|
-|When using the Snapshot Collector SDK in your application directly (*.csproj*) and you have enabled the advance option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, therefore, no Snapshots will be available. <br/> Your application could crash at startup with `System.ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor` <br/> For more information about the Application Insights feature "Interop", see the [documentation.](../app/azure-web-apps-net-core.md#troubleshooting) | If you're using the advance option "Interop", use the codeless Snapshot Collector injection (enabled thru the Azure portal UX) |
-
-## Make sure you're using the appropriate Snapshot Debugger Endpoint
-
-Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
-
-For App Service and applications using the Application Insights SDK, you have to update the connection string using the supported overrides for Snapshot Debugger as defined below:
-
-|Connection String Property | US Government Cloud | China Cloud |
-|||-|
-|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
-
-For more information about other connection overrides, see [Application Insights documentation](../app/sdk-connection-string.md?tabs=net#connection-string-with-explicit-endpoint-overrides).
-
-For Function App, you have to update the `host.json` using the supported overrides below:
-
-|Property | US Government Cloud | China Cloud |
-|||-|
-|AgentEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
-
-Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:
-
-```json
-{
- "version": "2.0",
- "logging": {
- "applicationInsights": {
- "samplingExcludedTypes": "Request",
- "samplingSettings": {
- "isEnabled": true
- },
- "snapshotConfiguration": {
- "isEnabled": true,
- "agentEndpoint": "https://snapshot.monitor.azure.us"
- }
- }
- }
-}
-```
-
-## Use the snapshot health check
-
-Several common problems result in the Open Debug Snapshot not showing up. Using an outdated Snapshot Collector, for example; reaching the daily upload limit; or perhaps the snapshot is just taking a long time to upload. Use the Snapshot Health Check to troubleshoot common problems.
-
-There's a link in the exception pane of the end-to-end trace view that takes you to the Snapshot Health Check.
--
-The interactive, chat-like interface looks for common problems and guides you to fix them.
--
-If that doesn't solve the problem, then refer to the following manual troubleshooting steps.
-
-## Verify the instrumentation key
-
-Make sure you're using the correct instrumentation key in your published application. Usually, the instrumentation key is read from the *ApplicationInsights.config* file. Verify the value is the same as the instrumentation key for the Application Insights resource that you see in the portal.
--
-## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
-
-If you have an ASP.NET application that's hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service due to a missing SSL security protocol.
-
-[The Snapshot Debugger endpoint requires TLS version 1.2](snapshot-debugger-upgrade.md?toc=/azure/azure-monitor/toc.json). The set of SSL security protocols is one of the quirks enabled by the `httpRuntime targetFramework` value in the `system.web` section of `web.config`.
-If the `httpRuntime targetFramework` is 4.5.2 or lower, then TLS 1.2 isn't included by default.
-
-> [!NOTE]
-> The `httpRuntime targetFramework` value is independent of the target framework used when building your application.
-
-To check the setting, open your *web.config* file and find the system.web section. Ensure that the `targetFramework` for `httpRuntime` is set to 4.6 or above.
-
- ```xml
- <system.web>
- ...
- <httpRuntime targetFramework="4.7.2" />
- ...
- </system.web>
- ```
-
-> [!NOTE]
-> Modifying the `httpRuntime targetFramework` value changes the runtime quirks applied to your application and can cause other, subtle behavior changes. Be sure to test your application thoroughly after making this change. For a full list of compatibility changes, see [Re-targeting changes](/dotnet/framework/migration-guide/application-compatibility#retargeting-changes).
-
-> [!NOTE]
-> If the `targetFramework` is 4.7 or above then Windows determines the available protocols. In Azure App Service, TLS 1.2 is available. However, if you're using your own virtual machine, you may need to enable TLS 1.2 in the OS.
-
-## Preview Versions of .NET Core
-
-If you're using a preview version of .NET Core or your application references Application Insights SDK, directly or indirectly via a dependent assembly, follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json).
-
-## Check the Diagnostic Services site extension' Status Page
-
-If Snapshot Debugger was enabled through the [Application Insights pane](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json) in the portal, it was enabled by the Diagnostic Services site extension.
-
-> [!NOTE]
-> Codeless installation of Application Insights Snapshot Debugger follows the .NET Core support policy.
-> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
-
-You can check the Status Page of this extension by going to the following url:
-`https://{site-name}.scm.azurewebsites.net/DiagnosticServices`
-
-> [!NOTE]
-> The domain of the Status Page link will vary depending on the cloud.
-This domain will be the same as the Kudu management site for App Service.
-
-This Status Page shows the installation state of the Profiler and Snapshot Collector agents. If there was an unexpected error, it will be displayed and show how to fix it.
-
-You can use the Kudu management site for App Service to get the base url of this Status Page:
-
-1. Open your App Service application in the Azure portal.
-1. Select **Advanced Tools**, or search for **Kudu**.
-1. Select **Go**.
-1. Once you are on the Kudu management site, in the URL, **append the following `/DiagnosticServices` and press enter**.
- It will end like this: `https://<kudu-url>/DiagnosticServices`
-
-## Upgrade to the latest version of the NuGet package
-
-Based on how Snapshot Debugger was enabled, see the following options:
-
-* If Snapshot Debugger was enabled through the [Application Insights pane in the portal](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json), then your application should already be running the latest NuGet package.
-
-* If Snapshot Debugger was enabled by including the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package, use Visual Studio's NuGet Package Manager to make sure you're using the latest version of `Microsoft.ApplicationInsights.SnapshotCollector`.
-
-For the latest updates and bug fixes [consult the release notes](./snapshot-collector-release-notes.md).
-
-## Check the uploader logs
-
-After a snapshot is created, a minidump file (*.dmp*) is created on disk. A separate uploader process creates that minidump file and uploads it, along with any associated PDBs, to Application Insights Snapshot Debugger storage. After the minidump has uploaded successfully, it's deleted from disk. The log files for the uploader process are kept on disk. In an App Service environment, you can find these logs in `D:\Home\LogFiles`. Use the Kudu management site for App Service to find these log files.
-
-1. Open your App Service application in the Azure portal.
-1. Select **Advanced Tools**, or search for **Kudu**.
-1. Select **Go**.
-1. In the **Debug console** drop-down list, select **CMD**.
-1. Select **LogFiles**.
-
-You should see at least one file with a name that begins with `Uploader_` or `SnapshotUploader_` and a `.log` extension. Select the appropriate icon to download any log files or open them in a browser.
-The file name includes a unique suffix that identifies the App Service instance. If your App Service instance is hosted on more than one machine, there are separate log files for each machine. When the uploader detects a new minidump file, it's recorded in the log file. Here's an example of a successful snapshot and upload:
-
-```
-SnapshotUploader.exe Information: 0 : Received Fork request ID 139e411a23934dc0b9ea08a626db16c5 from process 6368 (Low pri)
- DateTime=2018-03-09T01:42:41.8571711Z
-SnapshotUploader.exe Information: 0 : Creating minidump from Fork request ID 139e411a23934dc0b9ea08a626db16c5 from process 6368 (Low pri)
- DateTime=2018-03-09T01:42:41.8571711Z
-SnapshotUploader.exe Information: 0 : Dump placeholder file created: 139e411a23934dc0b9ea08a626db16c5.dm_
- DateTime=2018-03-09T01:42:41.8728496Z
-SnapshotUploader.exe Information: 0 : Dump available 139e411a23934dc0b9ea08a626db16c5.dmp
- DateTime=2018-03-09T01:42:45.7525022Z
-SnapshotUploader.exe Information: 0 : Successfully wrote minidump to D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp
- DateTime=2018-03-09T01:42:45.7681360Z
-SnapshotUploader.exe Information: 0 : Uploading D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp, 214.42 MB (uncompressed)
- DateTime=2018-03-09T01:42:45.7681360Z
-SnapshotUploader.exe Information: 0 : Upload successful. Compressed size 86.56 MB
- DateTime=2018-03-09T01:42:59.6184651Z
-SnapshotUploader.exe Information: 0 : Extracting PDB info from D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp.
- DateTime=2018-03-09T01:42:59.6184651Z
-SnapshotUploader.exe Information: 0 : Matched 2 PDB(s) with local files.
- DateTime=2018-03-09T01:42:59.6809606Z
-SnapshotUploader.exe Information: 0 : Stamp does not want any of our matched PDBs.
- DateTime=2018-03-09T01:42:59.8059929Z
-SnapshotUploader.exe Information: 0 : Deleted D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp
- DateTime=2018-03-09T01:42:59.8530649Z
-```
-
-> [!NOTE]
-> The example above is from version 1.2.0 of the `Microsoft.ApplicationInsights.SnapshotCollector` NuGet package. In earlier versions, the uploader process is called `MinidumpUploader.exe` and the log is less detailed.
-
-In the previous example, the instrumentation key is `c12a605e73c44346a984e00000000000`. This value should match the instrumentation key for your application.
-The minidump is associated with a snapshot with the ID `139e411a23934dc0b9ea08a626db16c5`. You can use this ID later to locate the associated exception record in Application Insights Analytics.
-
-The uploader scans for new PDBs about once every 15 minutes. Here's an example:
-
-```
-SnapshotUploader.exe Information: 0 : PDB rescan requested.
- DateTime=2018-03-09T01:47:19.4457768Z
-SnapshotUploader.exe Information: 0 : Scanning D:\home\site\wwwroot for local PDBs.
- DateTime=2018-03-09T01:47:19.4457768Z
-SnapshotUploader.exe Information: 0 : Local PDB scan complete. Found 2 PDB(s).
- DateTime=2018-03-09T01:47:19.4614027Z
-SnapshotUploader.exe Information: 0 : Deleted PDB scan marker : D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\6368.pdbscan
- DateTime=2018-03-09T01:47:19.4614027Z
-```
-
-For applications that *aren't* hosted in App Service, the uploader logs are in the same folder as the minidumps: `%TEMP%\Dumps\<ikey>` (where `<ikey>` is your instrumentation key).
-
-## Troubleshooting Cloud Services
-
-In Cloud Services, the default temporary folder could be too small to hold the minidump files, leading to lost snapshots.
-
-The space needed depends on the total working set of your application and the number of concurrent snapshots.
-
-The working set of a 32-bit ASP.NET web role is typically between 200 MB and 500 MB. Allow for at least two concurrent snapshots.
-
-For example, if your application uses 1 GB of total working set, you should make sure there is at least 2 GB of disk space to store snapshots.
-
-Follow these steps to configure your Cloud Service role with a dedicated local resource for snapshots.
-
-1. Add a new local resource to your Cloud Service by editing the Cloud Service definition (.csdef) file. The following example defines a resource called `SnapshotStore` with a size of 5 GB.
-
- ```xml
- <LocalResources>
- <LocalStorage name="SnapshotStore" cleanOnRoleRecycle="false" sizeInMB="5120" />
- </LocalResources>
- ```
-
-1. Modify your role's startup code to add an environment variable that points to the `SnapshotStore` local resource. For Worker Roles, the code should be added to your role's `OnStart` method:
-
- ```csharp
- public override bool OnStart()
- {
- Environment.SetEnvironmentVariable("SNAPSHOTSTORE", RoleEnvironment.GetLocalResource("SnapshotStore").RootPath);
- return base.OnStart();
- }
- ```
-
- For Web Roles (ASP.NET), the code should be added to your web application's `Application_Start` method:
-
- ```csharp
- using Microsoft.WindowsAzure.ServiceRuntime;
- using System;
-
- namespace MyWebRoleApp
- {
- public class MyMvcApplication : System.Web.HttpApplication
- {
- protected void Application_Start()
- {
- Environment.SetEnvironmentVariable("SNAPSHOTSTORE", RoleEnvironment.GetLocalResource("SnapshotStore").RootPath);
- // TODO: The rest of your application startup code
- }
- }
- }
- ```
-
-1. Update your role's *ApplicationInsights.config* file to override the temporary folder location used by `SnapshotCollector`
-
- ```xml
- <TelemetryProcessors>
- <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
- <!-- Use the SnapshotStore local resource for snapshots -->
- <TempFolder>%SNAPSHOTSTORE%</TempFolder>
- <!-- Other SnapshotCollector configuration options -->
- </Add>
- </TelemetryProcessors>
- ```
-
-## Overriding the Shadow Copy folder
-
-When the Snapshot Collector starts up, it tries to find a folder on disk that is suitable for running the Snapshot Uploader process. The chosen folder is known as the Shadow Copy folder.
-
-The Snapshot Collector checks a few well-known locations, making sure it has permissions to copy the Snapshot Uploader binaries. The following environment variables are used:
-
-* Fabric_Folder_App_Temp
-* LOCALAPPDATA
-* APPDATA
-* TEMP
-
-If a suitable folder can't be found, Snapshot Collector reports an error saying *"Couldn't find a suitable shadow copy folder."*
-
-If the copy fails, Snapshot Collector reports a `ShadowCopyFailed` error.
-
-If the uploader can't be launched, Snapshot Collector reports an `UploaderCannotStartFromShadowCopy` error. The body of the message often contains `System.UnauthorizedAccessException`. This error usually occurs because the application is running under an account with reduced permissions. The account has permission to write to the shadow copy folder, but it doesn't have permission to execute code.
-
-Since these errors usually happen during startup, they'll usually be followed by an `ExceptionDuringConnect` error saying *Uploader failed to start*."
-
-To work around these errors, you can specify the shadow copy folder manually via the `ShadowCopyFolder` configuration option. For example, using *ApplicationInsights.config*:
-
- ```xml
- <TelemetryProcessors>
- <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
- <!-- Override the default shadow copy folder. -->
- <ShadowCopyFolder>D:\SnapshotUploader</ShadowCopyFolder>
- <!-- Other SnapshotCollector configuration options -->
- </Add>
- </TelemetryProcessors>
- ```
-
-Or, if you're using *appsettings.json* with a .NET Core application:
-
- ```json
- {
- "ApplicationInsights": {
- "InstrumentationKey": "<your instrumentation key>"
- },
- "SnapshotCollectorConfiguration": {
- "ShadowCopyFolder": "D:\\SnapshotUploader"
- }
- }
- ```
-
-## Use Application Insights search to find exceptions with snapshots
-
-When a snapshot is created, the throwing exception is tagged with a snapshot ID. That snapshot ID is included as a custom property when the exception is reported to Application Insights. Using **Search** in Application Insights, you can find all records with the `ai.snapshot.id` custom property.
-
-1. Browse to your Application Insights resource in the Azure portal.
-1. Select **Search**.
-1. Type `ai.snapshot.id` in the Search text box and press Enter.
--
-If this search returns no results, then, no snapshots were reported to Application Insights in the selected time range.
-
-To search for a specific snapshot ID from the Uploader logs, type that ID in the Search box. If you can't find records for a snapshot that you know was uploaded, follow these steps:
-
-1. Double-check that you're looking at the right Application Insights resource by verifying the instrumentation key.
-
-1. Using the timestamp from the Uploader log, adjust the Time Range filter of the search to cover that time range.
-
-If you still don't see an exception with that snapshot ID, then the exception record wasn't reported to Application Insights. This situation can happen if your application crashed after it took the snapshot but before it reported the exception record. In this case, check the App Service logs under `Diagnose and solve problems` to see if there were unexpected restarts or unhandled exceptions.
-
-## Edit network proxy or firewall rules
-
-If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Snapshot Debugger service.
-
-The IPs used by Application Insights Snapshot Debugger are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
If your application runs in Azure Service Fabric, Cloud Service, Virtual Machine
- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance. - See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.-- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot).
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
The following environments are supported:
> [!NOTE] > Client applications (for example, WPF, Windows Forms or UWP) aren't supported.
-If you've enabled Snapshot Debugger but aren't seeing snapshots, check our [Troubleshooting guide](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+If you've enabled Snapshot Debugger but aren't seeing snapshots, check our [Troubleshooting guide](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot).
## Grant permissions
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 02/23/2023 Last updated : 02/27/2023 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
Azure NetApp Files now supports a lower limit of 2 TiB for capacity pool sizing with Standard network features.
- You can now choose a minimum size of 2 TiB when creating a capacity pool. Capacity pools smaller than 4 TiB in size can only be used with volumes using standard network features. This enhancement provides a more cost effective solution for running workloads such as SAP-shared files and VDI that require lower capacity pool sizes for their capacity and performance needs. When you have less than 2-4 TiB capacity with proportional performance, this enhancement allows you to start with 2 TiB as a minimum pool size and increase with 1-TiB increments. For capacities less than 3 TiB, this enhancement saves cost by allowing you to re-evaluate volume planning to take advantage of savings of smaller capacity pools. This feature is supported in all [regions with Standard network features](azure-netapp-files-network-topologies.md#supported-regions).
+ You can now choose a minimum size of 2 TiB when creating a capacity pool. Capacity pools smaller than 4 TiB in size can only be used with volumes using [standard network features](configure-network-features.md#options-for-network-features). This enhancement provides a more cost-effective solution for running workloads such as SAP-shared files and VDI that require lower capacity pool sizes for their capacity and performance needs. When you have less than 2-4 TiB capacity with proportional performance, this enhancement allows you to start with 2 TiB as a minimum pool size and increase in 1-TiB increments. For capacities less than 3 TiB, this enhancement saves cost by allowing you to re-evaluate volume planning to take advantage of savings of smaller capacity pools. This feature is supported in all [regions with Standard network features](azure-netapp-files-network-topologies.md#supported-regions).
## December 2022
azure-resource-manager Manage Resource Groups Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-python.md
+
+ Title: Manage resource groups - Python
+description: Use Python to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups.
++ Last updated : 02/27/2023++
+# Manage Azure resource groups by using Python
+
+Learn how to use Python with [Azure Resource Manager](overview.md) to manage your Azure resource groups.
++
+## Prerequisites
+
+* Python 3.7 or later installed. To install the latest version, see [Python.org](https://www.python.org/downloads/).
+
+* The following Azure library packages for Python installed in your virtual environment. To install any of the packages, use `pip install {package-name}`.
+ * azure-identity
+ * azure-mgmt-resource
+ * azure-mgmt-storage
+
+* The examples in this article use CLI-based authentication (`AzureCliCredential`). Depending on your environment, you may need to run `az login` first to authenticate.
+
+* An environment variable with your Azure subscription ID. To get your Azure subscription ID, use:
+
+ ```azurecli-interactive
+ az account show --name 'your subscription name' --query id -o tsv
+ ```
+
+ To set the value, use the option for your environment.
+
+ #### [Windows](#tab/windows)
+
+ ```console
+ setx AZURE_SUBSCRIPTION_ID your-subscription-id
+ ```
+
+ > [!NOTE]
+ > If you only need to access the environment variable in the current running console, you can set the environment variable with `set` instead of `setx`.
+
+ After you add the environment variables, you may need to restart any running programs that will need to read the environment variable, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before running the example.
+
+ #### [Linux](#tab/linux)
+
+ Edit your ~/.bashrc, and add the environment variables:
+
+ ```bash
+ export AZURE_SUBSCRIPTION_ID=your-subscription-id
+ ```
+
+ After you add the environment variables, run `source ~/.bashrc` from your console window to make the changes effective.
+
+ #### [macOS](#tab/macos)
+
+ ##### Bash
+
+ Edit your .bash_profile, and add the environment variables:
+
+ ```bash
+ export AZURE_SUBSCRIPTION_ID=your-subscription-id
+ ```
+
+ After you add the environment variables, run `source ~/.bash_profile` from your console window to make the changes effective.
+
+## What is a resource group?
+
+A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to add resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.
+
+The resource group stores metadata about the resources. When you specify a location for the resource group, you're specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region.
+
+## Create resource groups
+
+To create a resource group, use [ResourceManagementClient.resource_groups.create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.resourcegroupsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-resourcegroupsoperations-create-or-update).
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+rg_result = resource_client.resource_groups.create_or_update(
+ "exampleGroup",
+ {
+ "location": "westus"
+ }
+)
+
+print(f"Provisioned resource group with ID: {rg_result.id}")
+```
+
+## List resource groups
+
+To list the resource groups in your subscription, use [ResourceManagementClient.resource_groups.list](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.resourcegroupsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-resourcegroupsoperations-list).
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+rg_list = resource_client.resource_groups.list()
+
+for rg in rg_list:
+ print(rg.name)
+```
+
+To get one resource group, provide the name of the resource group.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+rg_result = resource_client.resource_groups.get("exampleGroup")
+
+print(f"Retrieved resource group {rg_result.name} in the {rg_result.location} region with resource ID {rg_result.id}")
+```
+
+## Delete resource groups
+
+To delete a resource group, use [ResourceManagementClient.resource_groups.begin_delete](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.resourcegroupsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-resourcegroupsoperations-begin-delete).
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+rg_result = resource_client.resource_groups.begin_delete("exampleGroup")
+```
+
+For more information about how Azure Resource Manager orders the deletion of resources, see [Azure Resource Manager resource group deletion](delete-resource-group.md).
+
+## Deploy resources
+
+You can deploy Azure resources by using Python classes or by deploying an Azure Resource Manager (ARM) template.
+
+The following example creates a storage account. The name you provide for the storage account must be unique across Azure.
+
+```python
+import os
+import random
+from azure.identity import AzureCliCredential
+from azure.mgmt.storage import StorageManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+random_postfix = ''.join(random.choices('abcdefghijklmnopqrstuvwxyz1234567890', k=13))
+storage_account_name = "demostore" + random_postfix
+
+storage_client = StorageManagementClient(credential, subscription_id)
+
+storage_account_result = storage_client.storage_accounts.begin_create(
+ "exampleGroup",
+ storage_account_name,
+ {
+ "location": "westus",
+ "sku": {
+ "name": "Standard_LRS"
+ }
+ }
+)
+```
+
+To deploy an ARM template, use [ResourceManagementClient.deployments.begin_create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update).
+
+```python
+import os
+import json
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import DeploymentMode
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+with open("storage.json", "r") as template_file:
+ template_body = json.load(template_file)
+
+rg_deployment_result = resource_client.deployments.begin_create_or_update(
+ "exampleGroup",
+ "exampleDeployment",
+ {
+ "properties": {
+ "template": template_body,
+ "parameters": {
+ "storagePrefix": {
+ "value": "demostore"
+ },
+ },
+ "mode": DeploymentMode.incremental
+ }
+ }
+)
+```
+
+The following example shows the ARM template you're deploying:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storagePrefix": {
+ "type": "string",
+ "minLength": 3,
+ "maxLength": 11
+ }
+ },
+ "variables": {
+ "uniqueStorageName": "[concat(parameters('storagePrefix'), uniqueString(resourceGroup().id))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-09-01",
+ "name": "[variables('uniqueStorageName')]",
+ "location": "eastus",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2",
+ "properties": {
+ "supportsHttpsTrafficOnly": true
+ }
+ }
+ ]
+}
+```
+
+For more information about deploying an ARM template, see [Deploy resources with ARM templates and Azure CLI](../templates/deploy-cli.md).
+
+## Lock resource groups
+
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources.
+
+To prevent a resource group and its resources from being deleted, use [ManagementLockClient.management_locks.create_or_update_at_resource_group_level](/python/api/azure-mgmt-resource/azure.mgmt.resource.locks.v2016_09_01.operations.managementlocksoperations#azure-mgmt-resource-locks-v2016-09-01-operations-managementlocksoperations-create-or-update-at-resource-group-level).
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_result = lock_client.management_locks.create_or_update_at_resource_group_level(
+ "exampleGroup",
+ "lockGroup",
+ {
+ "level": "CanNotDelete"
+ }
+)
+```
+
+To get a lock for a resource group, use [ManagementLockClient.management_locks.get_at_resource_group_level](/python/api/azure-mgmt-resource/azure.mgmt.resource.locks.v2016_09_01.operations.managementlocksoperations#azure-mgmt-resource-locks-v2016-09-01-operations-managementlocksoperations-get-at-resource-group-level).
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_result = lock_client.management_locks.get_at_resource_group_level("exampleGroup", "lockGroup")
+
+print(f"Lock {lock_result.name} applies {lock_result.level} lock")
+```
+
+To delete a lock on a resource group, use [ManagementLockClient.management_locks.delete_at_resource_group_level](/python/api/azure-mgmt-resource/azure.mgmt.resource.locks.v2016_09_01.operations.managementlocksoperations#azure-mgmt-resource-locks-v2016-09-01-operations-managementlocksoperations-delete-at-resource-group-level).
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_client.management_locks.delete_at_resource_group_level("exampleGroup", "lockGroup")
+```
+
+For more information, see [Lock resources with Azure Resource Manager](lock-resources.md).
+
+## Tag resource groups
+
+You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources.md).
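+
+As a minimal sketch, assuming the same `exampleGroup` resource group and CLI-based authentication as the earlier examples, you can set tags with `ResourceManagementClient.resource_groups.update`. Note that this call replaces the resource group's existing tag set:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+# Patch the resource group with a tag set. Existing tags are replaced.
+rg_result = resource_client.resource_groups.update(
+    "exampleGroup",
+    {
+        "tags": {
+            "environment": "testing",
+            "department": "finance"
+        }
+    }
+)
+
+print(f"Tags on {rg_result.name}: {rg_result.tags}")
+```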
+
+## Export resource groups to templates
+
+To assist with creating ARM templates, you can export a template from existing resources. For more information, see [Use Azure portal to export a template](../templates/export-template-portal.md).
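+
+As a sketch under the same assumptions as the earlier examples (the `exampleGroup` resource group and CLI-based authentication), `ResourceManagementClient.resource_groups.begin_export_template` exports the group's resources as an ARM template:
+
+```python
+import os
+import json
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+# Export every resource in the group; "*" selects all resources.
+export_result = resource_client.resource_groups.begin_export_template(
+    "exampleGroup",
+    {"resources": ["*"]}
+).result()
+
+print(json.dumps(export_result.template, indent=2))
+```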
+
+## Manage access to resource groups
+
+[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) is the way that you manage access to resources in Azure. For more information, see [Add or remove Azure role assignments using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
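+
+As an illustrative sketch only, assuming the `azure-mgmt-authorization` package (which isn't among the prerequisites listed earlier), you can list the role assignments that apply at the resource group scope:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.authorization import AuthorizationManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+auth_client = AuthorizationManagementClient(credential, subscription_id)
+scope = f"/subscriptions/{subscription_id}/resourceGroups/exampleGroup"
+
+# List role assignments scoped directly to the resource group.
+for assignment in auth_client.role_assignments.list_for_scope(scope, filter="atScope()"):
+    print(assignment.principal_id, assignment.role_definition_id)
+```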
+
+## Next steps
+
+- To learn Azure Resource Manager, see [Azure Resource Manager overview](overview.md).
+- For more information about authentication options, see [Authenticate Python apps to Azure services by using the Azure SDK for Python](/azure/developer/python/sdk/authentication-overview).
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 02/24/2023 Last updated : 02/27/2023
Before you begin the prerequisites, review the [Performance best practices](#per
## Supported regions
-Azure VMware Solution are currently supported in the following regions:
-**Asia**: East Asia, Japan East, Japan West, Southeast Asia.
-**Australia**: Australia East, Australia Southeast.
-**Brazil**: Brazil South.
-**Europe**: France Central, Germany West Central, North Europe, Sweden Central, Sweden North, Switzerland West, UK South, UK West, West Europe.
-**North America**: Canada Central, Canada East, Central US, East US, East US 2, North Central US, South Central US, West US, West US 2.
-
+Azure VMware Solution is currently supported in the following regions:
+
+* Australia East
+* Australia Southeast
+* Brazil South
+* Canada Central
+* Canada East
+* Central US
+* East Asia
+* East US
+* East US 2
+* France Central
+* Germany West Central
+* Japan East
+* Japan West
+* North Central US
+* North Europe
+* South Africa North
+* South Central US
+* Southeast Asia
+* Sweden Central
+* Sweden North
+* Switzerland West
+* UK South
+* UK West
+* West Europe
+* West US
+* West US 2
## Performance best practices
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
description: Learn about the platform updates to Azure VMware Solution.
Previously updated : 2/03/2023 Last updated : 2/27/2023 # What's new in Azure VMware Solution Microsoft will regularly apply important updates to the Azure VMware Solution for new features and software lifecycle management. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management). + ## February 2023
+All new Azure VMware Solution private clouds are being deployed with NSX-T Data Center version 3.2.2. NSX-T Data Center versions in existing private clouds will be upgraded to NSX-T Data Center version 3.2.2 through April 2023.
+ VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. HCX Enterprise brings valuable [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html) like Replication Assisted vMotion (RAV) and Mobility Optimized Networking (MON). HCX Enterprise is now automatically installed for all new HCX add-on requests, and existing HCX Advanced customers can upgrade to HCX Enterprise using the Azure portal. Learn more about how to [Install and activate VMware HCX in Azure VMware Solution](https://learn.microsoft.com/azure/azure-vmware/install-vmware-hcx). **Log analytics - monitor Azure VMware Solution**
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md
The diagram below shows the on-premises to private cloud interconnectivity, whic
For full interconnectivity to your private cloud, you need to enable ExpressRoute Global Reach and then request an authorization key and private peering ID for Global Reach in the Azure portal. The authorization key and peering ID are used to establish Global Reach between an ExpressRoute circuit in your subscription and the ExpressRoute circuit for your private cloud. Once linked, the two ExpressRoute circuits route network traffic between your on-premises environments to your private cloud. For more information on the procedures, see the [tutorial for creating an ExpressRoute Global Reach peering to a private cloud](tutorial-expressroute-global-reach-private-cloud.md).
+> [!IMPORTANT]
+> Customers should not advertise bogon routes over ExpressRoute from on-premises or their Azure VNET. Examples of bogon routes include 0.0.0.0/5 or 192.0.0.0/3.
+ ## Limitations [!INCLUDE [azure-vmware-solutions-limits](includes/azure-vmware-solutions-limits.md)]
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
You'll run the `Remove-ExternalIdentitySources` cmdlet to remove all existing ex
1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
+## Rotate an existing external identity source account's username and/or password
+
+1. Use the [Get-ExternalIdentitySources](configure-identity-source-vcenter.md#list-external-identity) run command to retrieve the currently configured values.
+
+1. Run [Remove-ExternalIdentitySources](configure-identity-source-vcenter.md#remove-existing-external-identity-sources) and provide the DomainName of the external identity source you'd like to rotate.
+> [!IMPORTANT]
+> If you do not provide a DomainName, all external identity sources will be removed.
+
+1. Run [New-LDAPSIdentitySource](configure-identity-source-vcenter.md#add-active-directory-over-ldap-with-ssl) or [New-LDAPIdentitySource](configure-identity-source-vcenter.md#add-active-directory-over-ldap) depending on your configuration.
+
+>[!NOTE]
+>Work is in progress to make this an easier process with a new run command; see the [PR with VMware](https://github.com/vmware/PowerCLI-Example-Scripts/pull/604).
+ ## Next steps Now that you've learned about how to configure LDAP and LDAPS, you can learn more about:
batch Batch Custom Image Pools To Azure Compute Gallery Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-custom-image-pools-to-azure-compute-gallery-migration-guide.md
- Title: Migrate Azure Batch Custom Image Pools to Azure Compute Gallery
-description: Learn how to migrate Azure Batch custom image pools to Azure compute gallery and plan for feature end of support.
---- Previously updated : 02/23/2023--
-# Migrate Azure Batch custom image pools to Azure Compute Gallery
-
-To improve reliability, scale, and align with modern Azure offerings, Azure Batch will retire custom image Batch pools specified from virtual hard disk (VHD) blobs in Azure Storage and Azure Managed Images on *March 31, 2024*. Learn how to migrate your Azure Batch custom image pools using Azure Compute Gallery.
--
-## Feature end of support
-
-When you create an Azure Batch pool using the Virtual Machine Configuration, you specify an image reference that provides the operating system for each compute node in the pool. You can create a pool of virtual machines either with a supported Azure Marketplace image or with a custom image. Custom images from VHD blobs and managed Images are either legacy offerings or non-scalable solutions for Azure Batch. To ensure reliable infrastructure provisioning at scale, all custom image sources other than Azure Compute Gallery will be retired on *March 31, 2024*.
-
-## Alternative: Use Azure Compute Gallery references for Batch custom image pools
-
-When you use the Azure Compute Gallery (formerly known as Shared Image Gallery) for your custom image, you have control over the operating system type and configuration, as well as the type of data disks. Your shared image can include applications and reference data that become available on all the Batch pool nodes as soon as they're provisioned. You can also have multiple versions of an image as needed for your environment. When you use an image version to create a VM, the image version is used to create new disks for the VM.
-
-Using a shared image saves time in preparing your pool's compute nodes to run your Batch workload. It's possible to use an Azure Marketplace image and install software on each compute node after provisioning, but using a shared image can lead to more efficiencies, in faster compute node to ready state and reproducible workloads. Additionally, you can specify multiple replicas for the shared image so when you create pools with many compute nodes, provisioning latencies can be lower.
-
-## Migrate Your Eligible Pools
-
-To migrate your Batch custom image pools from managed image to shared image, review the Azure Batch guide on using [Azure Compute Gallery to create a custom image pool](batch-sig-images.md).
-
-If you have either a VHD blob or a managed image, you can convert them directly to a Compute Gallery image that can be used with Azure Batch custom image pools. When you're creating a VM image definition for a Compute Gallery, on the Version tab, there is an option to select the source for image types to migrate that're being retired for Batch custom image pools:
-
-| Source | Other fields |
-|||
-| Managed image | Select the **Source image** from the drop-down. The managed image must be in the same region that you chose in **Instance details.** |
-| VHD in a storage account | Select **Browse** to choose the storage account for the VHD. |
-
-For more information about this process, see [creating an image definition and version for Compute Gallery](../virtual-machines/image-version.md#create-an-image).
-
-## FAQs
--- How can I create an Azure Compute Gallery?-
- See the [guide](../virtual-machines/create-gallery.md#create-a-private-gallery) for Compute Gallery creation.
--- How do I create a Pool with a Compute Gallery image?-
- See the [guide](batch-sig-images.md) for creating a Pool with a Compute Gallery image.
--- What considerations are there for Compute Gallery image based Pools?-
- See the [guide](batch-sig-images.md#considerations-for-large-pools) for more information.
--- Can I use Azure Compute Gallery images in different subscriptions or in different Azure AD tenants?
-
- If the Shared Image is not in the same subscription as the Batch account, you must register the Microsoft.Batch resource provider for that subscription. The two subscriptions must be in the same Azure AD tenant. The image can be in a different region as long as it has replicas in the same region as your Batch account.
--
-## Next steps
-
-For more information, see [Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md).
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
These images are only supported for use in Azure Batch pools and are geared for
You can also create custom images from VMs running Docker on one of the Linux distributions that is compatible with Batch. If you choose to provide your own custom Linux image, see the instructions in [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md).
-For Docker support on a custom image, install [Docker Community Edition (CE)](https://www.docker.com/community-edition) or [Docker Enterprise Edition (EE)](https://docker-docs.netlify.app/ee/).
+For Docker support on a custom image, install [Docker Community Edition (CE)](https://www.docker.com/community-edition) or [Docker Enterprise Edition (EE)](https://docs.docker.com/).
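As a minimal sketch, Docker's documented convenience script installs Docker Engine on the image-build VM (review the script before relying on it for production images):

```bash
# Download and run Docker's convenience install script.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```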
Additional considerations for using a custom Linux image:
batch Batch Pools To Simplified Compute Node Communication Model Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pools-to-simplified-compute-node-communication-model-migration-guide.md
- Title: Migrate Azure Batch pools to the Simplified compute node communication model
-description: Learn how to migrate Azure Batch pools to the simplified compute node communication model and plan for feature end of support.
---- Previously updated : 02/23/2023--
-# Migrate Azure Batch pools to the Simplified compute node communication model
-
-To improve security, simplify the user experience, and enable key future improvements, Azure Batch will retire the classic compute node communication model on *March 31, 2026*. Learn how to migrate your Batch pools to using the simplified compute node communication model.
--
-## About the feature
-
-An Azure Batch pool contains one or more compute nodes, which execute user-specified workloads in the form of Batch tasks. To enable Batch functionality and Batch pool infrastructure management, compute nodes must communicate with the Azure Batch service. In the Classic compute node communication model, the Batch service initiates communication to the compute nodes and compute nodes must be able to communicate with Azure Storage for baseline operations. In the Simplified compute node communication model, Batch pools only require outbound access to the Batch service for baseline operations.
-
-## Feature end of support
-
-The simplified compute node communication model will replace the classic compute node communication model after *March 31, 2026*. The change is introduced in two phases. From now until *September 30, 2024*, the default node communication mode for newly created [Batch pools with virtual networks](./batch-virtual-network.md) will remain as classic. After *September 30, 2024*, the default node communication mode for newly created Batch pools with virtual networks will switch to the simplified. After *March 31, 2026*, the option to use classic compute node communication mode will no longer be honored. Batch pools without user-specified virtual networks are unaffected by this change and the default communication mode is controlled by the Batch service.
-
-## Alternative: Use Simplified Compute Node Communication Model
-
-The simplified compute node communication mode streamlines the way Batch pool infrastructure is managed on behalf of users. This communication mode reduces the complexity and scope of inbound and outbound networking connections required in the baseline operations.
-
-The simplified model also provides more fine-grained data exfiltration control, since outbound communication to *Storage.region* is no longer required. You can explicitly lock down outbound communication to Azure Storage if necessary for your workflow (such as AppPackage storage accounts, other storage accounts for resource files or output files, or other similar scenarios).
-
-## Migrate Your Eligible Pools
-
-To migrate your Batch pools from classic to the simplified compute node communication model, please follow this document from the section entitled [potential impact between classic and simplified communication modes](simplified-compute-node-communication.md#potential-impact-between-classic-and-simplified-communication-modes) to either create new pools or update existing pools with simplified compute node communication.
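As a hedged sketch (assuming a recent Azure CLI version where the `--target-communication-mode` parameter is available; the pool ID is a placeholder), an existing pool can request the simplified mode like this:

```azurecli-interactive
# Ask Batch to move the pool to simplified compute node communication.
# Existing nodes may need a reboot or pool resize for the change to apply.
az batch pool set \
    --pool-id mypool \
    --target-communication-mode simplified
```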
-
-## FAQs
--- Will I still require a public IP address for my nodes?-
- The public IP address is still needed to initiate the outbound connection to Azure Batch. If you want to eliminate the need for public IP addresses entirely, see the guide to [create a simplified node communication pool without public IP addresses](./simplified-node-communication-pool-no-public-ip.md)
--- How can I connect to my nodes for diagnostic purposes?-
- RDP or SSH connectivity to the node is unaffected – load balancer(s) will still be created, which can route those requests through to the node when provisioned with a public IP address.
--- What differences will I see in billing?-
- There should be no cost or billing implications for the new model.
--- Are there any changes to agents on the compute node?
-
- An extra agent will be running on compute nodes for both Windows and Linux, azbatch-cluster-agent.
--- Will there be any change to how my linked resources from Azure Storage in Batch pools and tasks are downloaded?-
- This behavior is unaffected – all user-specified resources that require Azure Storage, such as resource files, output files, or application packages, are still transferred directly between the compute node and Azure Storage. You'll need to ensure your networking configuration allows these flows.
--
-## Next steps
-
-For more information, see [Simplified compute node communication](./simplified-compute-node-communication.md).
-
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
Previously updated : 01/24/2023 Last updated : 02/27/2023 zone_pivot_groups: programming-languages-speech-services-nomore-variant
For more information, see [supported languages](language-support.md?tabs=languag
Speech supports both at-start and continuous language identification (LID). > [!NOTE]
-> Continuous language identification is only supported with Speech SDKs in C#, C++, Java ([for speech to text only](#speech-to-text)), and Python.
+> Continuous language identification is only supported with Speech SDKs in C#, C++, Java ([for speech to text only](#speech-to-text)), JavaScript ([for speech to text only](#speech-to-text)), and Python.
- At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio won't change. With at-start LID, a single language is detected and returned in less than 5 seconds. - Continuous LID can identify multiple languages for the duration of the audio. Use continuous LID if the language in the audio could change. Continuous LID doesn't support changing languages within the same sentence. For example, if you're primarily speaking Spanish and insert some English words, it will not detect the language change per word.
recognizer.stop_continuous_recognition()
You use Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see [Speech-to-text overview](speech-to-text.md). > [!NOTE]
-> Speech-to-text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech-to-text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, Java, and Python.
+> Speech-to-text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech-to-text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, Java, JavaScript, and Python.
> > Currently for speech-to-text recognition with continuous language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it.
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Previously updated : 03/28/2022 Last updated : 02/27/2023
Below is a sample command to set file/directory ownership.
sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ... ``` ++ ## Usage records When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST endpoint to generate a report about service usage.
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
scopes:
Once you've created a Dapr secret store using one of the above approaches, you can reference that secret store from other Dapr components in the same environment. In the following example, the `secretStoreComponent` field is populated with the name of the secret store specified above, where the `sb-root-connectionstring` is stored. ```yaml
-componentType: pubsub.azure.servicebus
+componentType: pubsub.azure.servicebus.queue
version: v1 secretStoreComponent: "my-secret-store" metadata:
az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group
```yaml # pubsub.yaml for Azure Service Bus component
-componentType: pubsub.azure.servicebus
+componentType: pubsub.azure.servicebus.queue
version: v1 secretStoreComponent: "my-secret-store" metadata:
This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr
resource daprComponent 'daprComponents@2022-03-01' = { name: 'dapr-pubsub' properties: {
- componentType: 'pubsub.azure.servicebus'
+ componentType: 'pubsub.azure.servicebus.queue'
version: 'v1' secretStoreComponent: 'my-secret-store' metadata: [
This resource defines a Dapr component called `dapr-pubsub` via ARM.
"type": "daprComponents", "name": "dapr-pubsub", "properties": {
- "componentType": "pubsub.azure.servicebus",
+ "componentType": "pubsub.azure.servicebus.queue",
"version": "v1", "secretScoreComponent": "my-secret-store", "metadata": [
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
az role assignment create --assignee $PRINCIPAL_ID \
```azurepowershell Install-Module Az.Resources
-New-AzRoleAssignment -ObjectId $PrincipalId -RoleDefinitionName 'Storage Blob Data Contributor' -Scope '/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Storage/storageAccounts/$StorageAcctName'
+New-AzRoleAssignment -ObjectId $PrincipalId -RoleDefinitionName 'Storage Blob Data Contributor' -Scope "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Storage/storageAccounts/$StorageAcctName"
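# Note: the scope string must use double quotes so PowerShell expands
# $SubscriptionId, $ResourceGroupName, and $StorageAcctName; single-quoted
# strings would pass the variable names through literally.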
```
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md
az network vnet delete --resource-group $RES_GROUP --name aci-vnet
## Next steps
-To deploy a new virtual network, subnet, network profile, and container group using a Resource Manager template, see [Create an Azure container group with VNet](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet
+* To deploy a new virtual network, subnet, network profile, and container group using a Resource Manager template, see [Create an Azure container group with VNet](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet
).
+* To deploy Azure Container Instances that can pull images from an Azure Container Registry through a private endpoint, see [Deploy to Azure Container Instances from Azure Container Registry using a managed identity](../container-instances/using-azure-container-registry-mi.md).
+ <!-- IMAGES --> [aci-vnet-01]: ./media/container-instances-vnet/aci-vnet-01.png
container-instances Using Azure Container Registry Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/using-azure-container-registry-mi.md
[Azure Container Registry][acr-overview] (ACR) is an Azure-based, managed container registry service used to store private Docker container images. This article describes how to pull container images stored in an Azure container registry when deploying to container groups with Azure Container Instances. One way to configure registry access is to create an Azure Active Directory managed identity.
+When access to an Azure Container Registry (ACR) is [restricted using a private endpoint](../container-registry/container-registry-private-link.md), using a managed identity allows Azure Container Instances [deployed into a virtual network](container-instances-vnet.md) to access the container registry through the private endpoint.
+ ## Prerequisites **Azure container registry**: You need a premium SKU Azure container registry with at least one image. If you need to create a registry, see [Create a container registry using the Azure CLI][acr-get-started]. Be sure to take note of the registry's `id` and `loginServer`
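If you still need to create the registry, a minimal Azure CLI sketch (resource names are placeholders):

```azurecli-interactive
# Create a Premium SKU registry; Premium is required for this scenario.
az acr create \
    --resource-group myResourceGroup \
    --name myregistry \
    --sku Premium
```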
To deploy a container group using managed identity to authenticate image pulls v
az container create --name my-containergroup --resource-group myResourceGroup --image <loginServer>/hello-world:v1 --acr-identity $userID --assign-identity $userID --ports 80 --dns-name-label <dns-label> ```
+## Deploy in a virtual network using the Azure CLI
+
+To deploy a container group to a virtual network using managed identity to authenticate image pulls from an ACR that runs behind a private endpoint via the Azure CLI, use the following command:
+
+```azurecli-interactive
+az container create --name my-containergroup --resource-group myResourceGroup --image <loginServer>/hello-world:v1 --acr-identity $userID --assign-identity $userID --vnet "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/myVNetResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNetName" --subnet mySubnetName
+```
+
+For more information about deploying to a virtual network, see [Deploy container instances into an Azure virtual network](./container-instances-vnet.md).
+
+## Deploy a multi-container group in a virtual network using YAML and the Azure CLI
+
+To deploy a multi-container group to a virtual network using managed identity to authenticate image pulls from an ACR that runs behind a private endpoint via the Azure CLI, you can specify the container group configuration in a YAML file. Then pass the YAML file as a parameter to the command.
+
+```yaml
+apiVersion: '2021-10-01'
+location: eastus
+type: Microsoft.ContainerInstance/containerGroups
+identity:
+ type: UserAssigned
+ userAssignedIdentities: {
+ '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACRId': {}
+ }
+properties:
+ osType: Linux
+ imageRegistryCredentials:
+ - server: myacr.azurecr.io
+ identity: '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACRId'
+ subnetIds:
+ - id: '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/myVNetResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNetName/subnets/mySubnetName'
+ name: mySubnetName
+ containers:
+ - name: myContainer-1
+ properties:
+ resources:
+ requests:
+ cpu: '.4'
+ memoryInGb: '1'
+ environmentVariables:
+ - name: CONTAINER
+ value: 1
+ image: 'myacr.azurecr.io/myimage:latest'
+ - name: myContainer-2
+ properties:
+ resources:
+ requests:
+ cpu: '.4'
+ memoryInGb: '1'
+ environmentVariables:
+ - name: CONTAINER
+ value: 2
+ image: 'myacr.azurecr.io/myimage:latest'
+```
+
+```azurecli-interactive
+az container create --name my-containergroup --resource-group myResourceGroup --file my-YAML-file.yaml
+```
+
+For more information about deploying a multi-container group, see [Deploy a multi-container group](./container-instances-multi-container-yaml.md).
+ ## Clean up resources To remove all resources from your Azure subscription, delete the resource group:
container-registry Container Registry Auto Purge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auto-purge.md
At a minimum, specify the following when you run `acr purge`:
`acr purge` supports several optional parameters. The following two are used in examples in this article:
-* `--untagged` - Specifies that all manifests that don't have associated tags (*untagged manifests*) are deleted.
+* `--untagged` - Specifies that all manifests that don't have associated tags (*untagged manifests*) are deleted, in addition to the tags that are already being deleted.
* `--dry-run` - Specifies that no data is deleted, but the output is the same as if the command is run without this flag. This parameter is useful for testing a purge command to make sure it does not inadvertently delete data you intend to preserve. * `--keep` - Specifies that the latest x number of to-be-deleted tags are retained. * `--concurrency` - Specifies a number of purge tasks to process concurrently. A default value is used if this parameter is not provided.
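For example, `acr purge` runs as a command inside an ACR task. The following sketch (registry and repository names are placeholders) previews a purge of tags older than 30 days plus any manifests left untagged, using `--dry-run` so nothing is deleted:

```azurecli-interactive
# Preview the purge on demand; drop --dry-run to actually delete.
az acr run \
    --registry myregistry \
    --cmd "acr purge --filter 'hello-world:.*' --ago 30d --untagged --dry-run" \
    /dev/null
```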
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md
This article shows how to configure a private endpoint for your registry using t
[!INCLUDE [container-registry-scanning-limitation](../../includes/container-registry-scanning-limitation.md)] > [!NOTE]
-> Starting from October 2021, new container registries allow a maximum of 200 private endpoints. Registries created earlier allow a maximum of 10 private endpoints. Use the [az acr show-usage](/cli/azure/acr#az-acr-show-usage) command to see the limit for your registry. Please open a support ticket if the maximum limit of private endpoints increases to 200.
+> Starting from October 2021, new container registries allow a maximum of 200 private endpoints. Registries created earlier allow a maximum of 10 private endpoints. Use the [az acr show-usage](/cli/azure/acr#az-acr-show-usage) command to see the limit for your registry. Please open a support ticket to increase the limit to 200 private endpoints.
## Prerequisites
GEO_REPLICA_DATA_ENDPOINT_FQDN=$(az network nic show \
--query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry_data_$REPLICA_LOCATION'].privateLinkConnectionProperties.fqdns" \ --output tsv) ```+
+Once a new geo-replica is added, its private endpoint connection is set to pending. To approve a manually configured private endpoint connection, run the [az acr private-endpoint-connection approve][az-acr-private-endpoint-connection-approve] command.
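As a sketch (the registry and connection names are placeholders), list the connections to find the pending one, then approve it by name:

```azurecli-interactive
# List private endpoint connections and their approval state.
az acr private-endpoint-connection list \
    --registry-name myregistry \
    --output table

# Approve the pending connection by name.
az acr private-endpoint-connection approve \
    --registry-name myregistry \
    --name myPendingConnection
```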
+ ### Create DNS records in the private zone The following commands create DNS records in the private zone for the registry endpoint and its data endpoint. For example, if you have a registry named *myregistry* in the *westeurope* region, the endpoint names are `myregistry.azurecr.io` and `myregistry.westeurope.data.azurecr.io`.
az group delete --name $RESOURCE_GROUP
* To learn more about Private Link, see the [Azure Private Link](../private-link/private-link-overview.md) documentation.
-* To verify DNS settings in the virtual network that route to a private endpoint, run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command with the `--vnet` parameter. For more information, see [Check the health of an Azure container registry](container-registry-check-health.md)
+* To verify DNS settings in the virtual network that route to a private endpoint, run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command with the `--vnet` parameter. For more information, see [Check the health of an Azure container registry](container-registry-check-health.md).
* If you need to set up registry access rules from behind a client firewall, see [Configure rules to access an Azure container registry behind a firewall](container-registry-firewall-access-rules.md).
-* [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md)
+* [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md).
+
+* If you need to deploy Azure Container Instances that can pull images from an ACR through a private endpoint, see [Deploy to Azure Container Instances from Azure Container Registry using a managed identity](../container-instances/using-azure-container-registry-mi.md).
<!-- LINKS - external --> [docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms
container-registry Monitor Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/monitor-service.md
When you have critical applications and business processes relying on Azure reso
## Monitor overview
-The **Overview** page in the Azure portal for each registry includes a brief view of recent resource usage and activity, such as push and pull operations. This high-level information is useful, but only a small amount of data is shown there.
+The **Overview** page in the Azure portal for each registry includes a brief view of recent resource usage and activity, such as push and pull operations. This high-level information is useful, but only a small amount of data is shown there.
## What is Azure Monitor?
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Container Registry and providing examples for configuring data collection and analyzing this data with Azure tools.
-## Monitoring data
+## Monitoring data
-Azure Container Registry collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
+Azure Container Registry collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
See [Monitoring Azure Container Registry data reference](monitor-service-reference.md) for detailed information on the metrics and logs created by Azure Container Registry.
The following image shows the options when you enable diagnostic settings for a
The metrics and logs you can collect are discussed in the following sections.
-## Analyzing metrics (preview)
+## Analyzing metrics
-You can analyze metrics for an Azure container registry with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for an Azure container registry with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
> [!TIP]
-> You can also go to the metrics explorer by navigating to your registry in the portal. In the menu, select **Metrics (preview)** under **Monitoring**.
+> You can also go to the metrics explorer by navigating to your registry in the portal. In the menu, select **Metrics** under **Monitoring**.
For a list of the platform metrics collected for Azure Container Registry, see [Monitoring Azure Container Registry data reference metrics](monitor-service-reference.md#metrics)
The following Azure CLI commands can be used to get information about the Azure
* [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions) - List metric definitions and dimensions * [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) - Retrieve metric values
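For instance, a sketch (the registry name is a placeholder) that resolves the registry's resource ID and retrieves recent values of its `StorageUsed` metric:

```azurecli-interactive
# Look up the registry's resource ID, then list StorageUsed metric values.
REGISTRY_ID=$(az acr show --name myregistry --query id --output tsv)
az monitor metrics list \
    --resource $REGISTRY_ID \
    --metric StorageUsed \
    --output table
```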
-### REST API
+### REST API
You can use the Azure Monitor REST API to get information programmatically about the Azure Container Registry metrics.
The following image shows sample output:
:::image type="content" source="media/monitor-service/azure-monitor-query.png" alt-text="Query log data":::
-Following are queries that you can use to help you monitor your registry resource.
+Following are queries that you can use to help you monitor your registry resource.
### Error events from the last hour
ContainerRegistryLoginEvents
| project TimeGenerated, Identity, CallerIpAddress, ResultDescription ``` - ## Alerts Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks. - <!-- only include next line if applications run on your service and work with App Insights. If you are creating or running an application which run on <*service*> [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer additional types of alerts.
The following table lists common and recommended alert rules for Azure Container
### Example: Send email alert when registry storage used exceeds a value 1. In the Azure portal, navigate to your registry.
-1. Select **Metrics (preview)** under **Monitoring**.
+1. Select **Metrics** under **Monitoring**.
1. In the metrics explorer, in **Metric**, select **Storage used**. 1. Select **New alert rule**. 1. In **Scope**, confirm the registry resource for which you want to create an alert rule.
The following table lists common and recommended alert rules for Azure Container
- See [Monitoring Azure Container Registry data reference](monitor-service-reference.md) for a reference of the metrics, logs, and other important values created by Azure Container Registry. - See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.-- See [Show registry usage](container-registry-skus.md#show-registry-usage) for information about how to get a snapshot of storage usage and other resource consumption in your registry.
+- See [Show registry usage](container-registry-skus.md#show-registry-usage) for information about how to get a snapshot of storage usage and other resource consumption in your registry.
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/change-feed.md
Title: Working with the change feed support in Azure Cosmos DB
-description: Use Azure Cosmos DB change feed support to track changes in documents, event-based processing like triggers, and keep caches and analytic systems up-to-date
+ Title: Working with the change feed
+
+description: Use Azure Cosmos DB change feed to track changes, process events, and keep other systems up-to-date.
Previously updated : 06/07/2021 Last updated : 02/27/2023 + # Change feed in Azure Cosmos DB+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin](includes/appliesto-nosql-mongodb-cassandra-gremlin.md)] Change feed in Azure Cosmos DB is a persistent record of changes to a container in the order they occur. Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos DB container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified. The persisted changes can be processed asynchronously and incrementally, and the output can be distributed across one or more consumers for parallel processing.
Learn more about [change feed design patterns](change-feed-design-patterns.md).
## Supported APIs and client SDKs
-This feature is currently supported by the following Azure Cosmos DB APIs and client SDKs.
+The change feed feature is currently supported in the following Azure Cosmos DB SDKs.
-| **Client drivers** | **NoSQL** | **Apache Cassandra** | **MongoDB** | **Apache Gremlin** | **Table** |
+| **Client drivers** | **NoSQL** | **Apache Cassandra** | **MongoDB** | **Apache Gremlin** | **Table** | **PostgreSQL** |
| | | | | | | |
-| .NET | Yes | Yes | Yes | Yes | No |
-|Java|Yes|Yes|Yes|Yes|No|
-|Python|Yes|Yes|Yes|Yes|No|
-|Node/JS|Yes|Yes|Yes|Yes|No|
+| .NET | ![Icon indicating that this feature is supported in the .NET SDK for the API for NoSQL.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the .NET SDK for the API for Apache Cassandra.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the .NET SDK for the API for MongoDB.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the .NET SDK for the API for Apache Gremlin.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is not supported in the .NET SDK for the API for Table.](media/change-feed/no-icon.svg) | ![Icon indicating that this feature is not supported in the .NET SDK for the API for PostgreSQL.](media/change-feed/no-icon.svg) |
+| Java | ![Icon indicating that this feature is supported in the Java SDK for the API for NoSQL.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the Java SDK for the API for Apache Cassandra.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the Java SDK for the API for MongoDB.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the Java SDK for the API for Apache Gremlin.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is not supported in the Java SDK for the API for Table.](media/change-feed/no-icon.svg) | ![Icon indicating that this feature is not supported in the Java SDK for the API for PostgreSQL.](media/change-feed/no-icon.svg) |
+| Python | ![Icon indicating that this feature is supported in the Python SDK for the API for NoSQL.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the Python SDK for the API for Apache Cassandra.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the Python SDK for the API for MongoDB.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the Python SDK for the API for Apache Gremlin.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is not supported in the Python SDK for the API for Table.](media/change-feed/no-icon.svg) | ![Icon indicating that this feature is not supported in the Python SDK for the API for PostgreSQL.](media/change-feed/no-icon.svg) |
+| Node/JavaScript | ![Icon indicating that this feature is supported in the JavaScript SDK for the API for NoSQL.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the JavaScript SDK for the API for Apache Cassandra.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the JavaScript SDK for the API for MongoDB.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is supported in the JavaScript SDK for the API for Apache Gremlin.](media/change-feed/yes-icon.svg) | ![Icon indicating that this feature is not supported in the JavaScript SDK for the API for Table.](media/change-feed/no-icon.svg) | ![Icon indicating that this feature is not supported in the JavaScript SDK for the API for PostgreSQL.](media/change-feed/no-icon.svg) |
## Change feed and different operations
-Today, you see all inserts and updates in the change feed. You can't filter the change feed for a specific type of operation. One possible alternative, is to add a "soft marker" on the item for updates and filter based on that when processing items in the change feed.
+Today, you see all inserts and updates in the change feed. You can't filter the change feed for a specific type of operation.
+
+> [!TIP]
+> One possible alternative is to add a "soft marker" on the item for updates and filter based on that marker when processing items in the change feed.
+
+Currently change feed doesn't log delete operations. As a workaround, you can add a soft marker on the items that are being deleted. For example, you can add an attribute in the item called "deleted," set its value to "true," and then set a time-to-live (TTL) value on the item. Setting the TTL ensures that the item is automatically deleted. For more information, see [Time to Live (TTL)](nosql/time-to-live.md).
-Currently change feed doesn't log deletes. Similar to the previous example, you can add a soft marker on the items that are being deleted. For example, you can add an attribute in the item called "deleted" and set it to "true" and set a TTL on the item, so that it can be automatically deleted. You can read the change feed for historic items (the most recent change corresponding to the item, it doesn't include the intermediate changes), for example, items that were added five years ago. You can read the change feed as far back as the origin of your container but if an item is deleted, it will be removed from the change feed.
+You can read the change feed for historic items. This historical data includes the most recent change corresponding to the item. The historical data doesn't include the intermediate changes. For example, you can use the change feed to read items that were added five years ago. However, you can't see the intermediate changes since then. You can read the change feed as far back as the origin of your container. If an item is deleted, it's removed from the change feed entirely.
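One practical note on the TTL-based soft-delete workaround above: an item-level `ttl` only takes effect when TTL is enabled on the container. A minimal Azure CLI sketch enabling it (account, database, and container names are placeholders; `--ttl -1` turns TTL on with no default expiry, so only items carrying their own `ttl`, such as the soft-delete marker, are removed automatically):

```azurecli-interactive
az cosmosdb sql container update \
    --account-name myCosmosAccount \
    --resource-group myResourceGroup \
    --database-name myDatabase \
    --name myContainer \
    --ttl -1
```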
### Sort order of items in change feed
-Change feed items come in the order of their modification time. This sort order is guaranteed per logical partition key.
+Change feed items arrive in the order of their modification time. This sort order is guaranteed per logical partition key.
### Consistency level
-While consuming the change feed in an Eventual consistency level, there could be duplicate events in-between subsequent change feed read operations (the last event of one read operation appears as the first of the next).
+Consuming the change feed at the Eventual consistency level can result in duplicate events between subsequent change feed read operations. For example, the last event of one read operation could appear as the first event of the next operation.
### Change feed in multi-region Azure Cosmos DB accounts
-In a multi-region Azure Cosmos DB account, if a write-region fails over, change feed will work across the manual failover operation and it will be contiguous.
+In a multi-region Azure Cosmos DB account, if a write-region fails over, change feed works across the manual failover operation and the feed remains contiguous.
### Change feed and Time to Live (TTL)
-If a TTL (Time to Live) property is set on an item to -1, change feed will persist forever. If the data is not deleted, it will remain in the change feed.
+If the TTL property on an item is set to `-1`, the change feed persists that item forever. If the data isn't deleted, it remains in the change feed.
-### Change feed and _etag, _lsn or _ts
+### Change feed and \_etag, \_lsn, or \_ts
-The _etag format is internal and you should not take dependency on it, because it can change anytime. _ts is a modification or a creation timestamp. You can use _ts for chronological comparison. _lsn is a batch ID that is added for change feed only; it represents the transaction ID. Many items may have same _lsn. ETag on FeedResponse is different from the _etag you see on the item. _etag is an internal identifier and it is used for concurrency control. The _etag property tells about the version of the item, whereas the ETag property is used for sequencing the feed.
+Azure Cosmos DB includes multiple internal fields that are automatically assigned to a new item. These fields are important to understand in the context of the change feed.
+
+The `_etag` field is internal and you shouldn't take a dependency on it, because it can change at any time. Typically, the `_etag` field changes whenever the item is modified. For more information, see [optimistic concurrency control](nosql/database-transactions-optimistic-concurrency.md#optimistic-concurrency-control). The `_etag` value from the change feed item is different from the `_etag` value on the original source item. In the context of the change feed, the `_etag` value is used to sequence the feed.
+
+`_ts` is a modification or a creation timestamp. You can use `_ts` for chronological comparison.
+
+`_lsn` is a batch ID that is added to the item within the context of the change feed only. The `_lsn` field doesn't exist on the original source item. In the change feed, the field represents the transaction ID. Many items may have the same `_lsn`.
## Working with change feed You can work with change feed using the following options:
-* [Using change feed with Azure Functions](change-feed-functions.md)
-* [Using change feed with change feed processor](change-feed-processor.md)
+* [Use change feed with Azure Functions](change-feed-functions.md)
+* [Use change feed with the change feed processor](change-feed-processor.md)
-Change feed is available for each logical partition key within the container, and it can be distributed across one or more consumers for parallel processing as shown in the image below.
+Change feed is available for each logical partition key within the container, and it can be distributed across one or more consumers for parallel processing.
:::image type="content" source="./media/change-feed/changefeedvisual.png" alt-text="Distributed processing of Azure Cosmos DB change feed" border="false":::
Change feed is available for each logical partition key within the container, an
* Each change included in the change log appears exactly once in the change feed, and the clients must manage the checkpointing logic. If you want to avoid the complexity of managing checkpoints, the change feed processor provides automatic checkpointing and "at least once" semantics. For more information, see [Use change feed with the change feed processor](change-feed-processor.md).
-* The change feed is sorted by the order of modification within each logical partition key value. There is no guaranteed order across the partition key values.
+* The change feed is sorted by the order of modification within each logical partition key value. There's no guaranteed order across the partition key values.
-* Changes can be synchronized from any point-in-time, that is there is no fixed data retention period for which changes are available.
+* Changes can be synchronized from any point-in-time. There's no fixed data retention period for which changes are available.
-* Changes are available in parallel for all logical partition keys of an Azure Cosmos DB container. This capability allows changes from large containers to be processed in parallel by multiple consumers.
+* Changes are available in parallel for all logical partition keys of an Azure Cosmos DB container. This capability allows multiple consumers to process changes from large containers in parallel.
* Applications can request multiple change feeds on the same container simultaneously. ChangeFeedOptions.StartTime can be used to provide an initial starting point, for example, to find the continuation token corresponding to a given clock time. The ContinuationToken, if specified, takes precedence over the StartTime and StartFromBeginning values. The precision of ChangeFeedOptions.StartTime is ~5 seconds.
Change feed is available for each logical partition key within the container, an
Change feed functionality is surfaced as change stream in API for MongoDB and Query with predicate in API for Cassandra. To learn more about the implementation details for API for MongoDB, see the [Change streams in the Azure Cosmos DB API for MongoDB](mongodb/change-streams.md).
-Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB for Apache Cassandra enhances the ability to query the changes with predicate via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB for Apache Cassandra](cassandr).
+Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival and to reject writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB for Apache Cassandra enhances the ability to query the changes with predicate via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB for Apache Cassandra](cassandr).
## Measuring change feed request unit consumption
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
Title: Consistency levels in Azure Cosmos DB
+ Title: Consistency level choices
+ description: Azure Cosmos DB has five consistency levels to help balance eventual consistency, availability, and latency trade-offs. Previously updated : 09/26/2022 Last updated : 02/27/2023 + # Consistency levels in Azure Cosmos DB+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Distributed databases that rely on replication for high availability, low latency, or both, must make a fundamental tradeoff between the read consistency, availability, latency, and throughput as defined by the [PACELC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). The linearizability of the strong consistency model is the gold standard of data programmability. But it adds a steep price from higher write latencies due to data having to replicate and commit across large distances. Strong consistency may also suffer from reduced availability (during failures) because data cannot replicate and commit in every region. Eventual consistency offers higher availability and better performance, but it's more difficult to program applications because data may not be completely consistent across all regions.
+Distributed databases that rely on replication for high availability, low latency, or both, must make a fundamental tradeoff between the read consistency, availability, latency, and throughput as defined by the [PACELC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). The linearizability of the strong consistency model is the gold standard of data programmability. But it adds a steep price from higher write latencies due to data having to replicate and commit across large distances. Strong consistency may also suffer from reduced availability (during failures) because data can't replicate and commit in every region. Eventual consistency offers higher availability and better performance, but it's more difficult to program applications because data may not be consistent across all regions.
->
-> [!VIDEO https://aka.ms/docs.consistency-levels]
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4RZ0h]
Most commercially available distributed NoSQL databases available in the market today provide only strong and eventual consistency. Azure Cosmos DB offers five well-defined levels. From strongest to weakest, the levels are:
For more information on the default consistency level, see [configuring the defa
Each level provides availability and performance tradeoffs. The following image shows the different consistency levels as a spectrum. ## Consistency levels and Azure Cosmos DB APIs
-Azure Cosmos DB provides native support for wire protocol-compatible APIs for popular databases. These include MongoDB, Apache Cassandra, Apache Gremlin, and Azure Table Storage. When using API for Gremlin or Table, the default consistency level configured on the Azure Cosmos DB account is used. For details on consistency level mapping between Apache Cassandra and Azure Cosmos DB, see [API for Cassandra consistency mapping](cassandr).
+Azure Cosmos DB provides native support for wire protocol-compatible APIs for popular databases. These include MongoDB, Apache Cassandra, Apache Gremlin, and Azure Table Storage. In API for Gremlin or Table, the default consistency level configured on the Azure Cosmos DB account is used. For details on consistency level mapping between Apache Cassandra and Azure Cosmos DB, see [API for Cassandra consistency mapping](cassandr).
## Scope of the read consistency
-Read consistency applies to a single read operation scoped within a logical partition. The read operation can be issued by a remote client or a stored procedure.
+Read consistency applies to a single read operation scoped within a logical partition. A remote client or a stored procedure can issue the read operation.
## Configure the default consistency level
The semantics of the five consistency levels are described in the following sect
Strong consistency offers a linearizability guarantee. Linearizability refers to serving requests concurrently. The reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial write. Users are always guaranteed to read the latest committed write.
- The following graphic illustrates the strong consistency with musical notes. After the data is written to the "West US 2" region, when you read the data from other regions, you get the most recent value:
+The following graphic illustrates the strong consistency with musical notes. After the data is written to the "West US 2" region, when you read the data from other regions, you get the most recent value:
- :::image type="content" source="media/consistency-levels/strong-consistency.gif" alt-text="Illustration of strong consistency level":::
### Bounded staleness consistency For single-region write accounts with two or more regions, data is replicated from the primary region to all secondary (read-only) regions. For multi-region write accounts with two or more regions, data is replicated from the region it was originally written in to all other writable regions. In both scenarios, while not common, there may occasionally be a replication lag from one region to another.
-In bounded staleness consistency, data between any two regions will not lag by more than "K" versions (that is, "updates") of an item or by "T" time intervals, whichever is reached first. In other words, when you choose bounded staleness, the maximum "staleness" of the data in any region can be configured in two ways:
+In bounded staleness consistency, the lag of data between any two regions is always less than a specified amount. The amount can be "K" versions (that is, "updates") of an item or "T" time intervals, whichever is reached first. In other words, when you choose bounded staleness, the maximum "staleness" of the data in any region can be configured in two ways:
- The number of versions (*K*) of the item - The time interval (*T*) reads might lag behind the writes
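As a sketch of configuring both bounds at the account level with the Azure CLI (account and resource group names are placeholders; `--max-staleness-prefix` corresponds to *K* and `--max-interval` to *T*, in seconds):

```azurecli-interactive
# Reads may lag writes by at most 100,000 versions or 300 seconds,
# whichever bound is reached first.
az cosmosdb update \
    --name myCosmosAccount \
    --resource-group myResourceGroup \
    --default-consistency-level BoundedStaleness \
    --max-staleness-prefix 100000 \
    --max-interval 300
```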
-Bounded Staleness is beneficial primarily to single-region write accounts with two or more regions. If the data lag in a region (determined per physical partition) exceeds the configured staleness value, writes for that partition will be throttled until staleness is back within the configured upper bound.
+Bounded Staleness is beneficial primarily to single-region write accounts with two or more regions. If the data lag in a region (determined per physical partition) exceeds the configured staleness value, writes for that partition are throttled until staleness is back within the configured upper bound.
-For a single-region account, Bounded Staleness provides the same write consistency guarantees as Session and Eventual Consistency with data being replicated to a local majority (three replicas in a four replica set) in the single region.
+For a single-region account, Bounded Staleness provides the same write consistency guarantees as Session and Eventual Consistency. With Bounded Staleness, data is replicated to a local majority (three replicas in a four replica set) in the single region.
> [!IMPORTANT] > With Bounded Staleness consistency, staleness checks are made only across regions and not within a region. Within a given region, data is always replicated to a local majority (three replicas in a four replica set) regardless of the consistency level.
-Reads when using Bounded Staleness will return the latest data available in that region by reading from two available replicas in that region. Since writes within a region always replicate to a local majority (3 out of 4 replicas), consulting two replicas will return the most up to date data available in that region.
+Reads when using Bounded Staleness return the latest data available in that region by reading from two available replicas in that region. Since writes within a region always replicate to a local majority (three out of four replicas), consulting two replicas returns the most up-to-date data available in that region.
> [!IMPORTANT] > With Bounded Staleness consistency, reads issued against a non-primary region may not necessarily return the most recent version of the data globally, but are guaranteed to return the most recent version of the data in that region, which will be within the maximum staleness boundary globally.
-Bounded Staleness works best for globally distributed applications using a single-region write accounts with two or more regions, where near strong consistency across regions is desired. For multi-region write accounts with two or more regions, application servers should direct reads and writes to the same region in which the application servers are hosted. Thus, Bounded Staleness in a multi-write account is an anti-pattern as it would require a dependency on replication lag between regions, which should not matter if data is read from the same region it was written to.
+Bounded Staleness works best for globally distributed applications using a single-region write account with two or more regions, where near strong consistency across regions is desired. For multi-region write accounts with two or more regions, application servers should direct reads and writes to the same region in which the application servers are hosted. Bounded Staleness in a multi-write account is an anti-pattern. This level would require a dependency on replication lag between regions, which shouldn't matter if data is read from the same region it was written to.
The following graphic illustrates the bounded staleness consistency with musical notes. After the data is written to the "West US 2" region, the "East US 2" and "Australia East" regions read the written value based on the configured maximum lag time or the maximum operations:
- :::image type="content" source="media/consistency-levels/bounded-staleness-consistency.gif" alt-text="Illustration of bounded staleness consistency level":::
### Session consistency
-In session consistency, within a single client session, reads are guaranteed to honor the read-your-writes, and write-follows-reads guarantees. This assumes a single ΓÇ£writerΓÇ¥ session or sharing the session token for multiple writers.
-In session consistency, within a single client session, reads are guaranteed to honor the read-your-writes, and write-follows-reads guarantees. This assumes a single "writer" session or sharing the session token for multiple writers.
+In session consistency, within a single client session, reads are guaranteed to honor the read-your-writes, and write-follows-reads guarantees. This guarantee assumes a single "writer" session or sharing the session token for multiple writers.
-After every write operation, the client receives an updated Session Token from the server. These tokens are cached by the client and sent to the server for read operations in a specified region. If the replica against which the read operation is issued contains data for the specified token (or a more recent token), the requested data is returned. If the replica does not contain data for that session, the client will retry the request against another replica in the region. If necessary, the client will retry the read against additional available regions until data for the specified session token is retrieved.
+After every write operation, the client receives an updated Session Token from the server. The client caches the tokens and sends them to the server for read operations in a specified region. If the replica against which the read operation is issued contains data for the specified token (or a more recent token), the requested data is returned. If the replica doesn't contain data for that session, the client retries the request against another replica in the region. If necessary, the client retries the read against other available regions until data for the specified session token is retrieved.
> [!IMPORTANT] > In Session Consistency, the client's usage of a session token guarantees that data corresponding to an older session will never be read. However, if the client is using an older session token and more recent updates have been made to the database, the more recent version of the data will be returned despite an older session token being used. The Session Token is used as a minimum version barrier but not as a specific (possibly historical) version of the data to be retrieved from the database.
-If the client did not initiate a write to a physical partition, it will not contain a session token in its cache and reads to that physical partition will behave as reads with Eventual Consistency. Similarly, if the client is re-created, its cache of session tokens will also be re-created. Here too, read operations will follow the same behavior as Eventual Consistency until subsequent write operations rebuild the client's cache of session tokens.
+If the client didn't initiate a write to a physical partition, the client doesn't contain a session token in its cache and reads to that physical partition behave as reads with Eventual Consistency. Similarly, if the client is re-created, its cache of session tokens is also re-created. Here too, read operations follow the same behavior as Eventual Consistency until subsequent write operations rebuild the client's cache of session tokens.
> [!IMPORTANT] > If Session Tokens are being passed from one client instance to another, the contents of the token should not be modified.
- Session consistency is the most widely used consistency level for both single region as well as globally distributed applications. It provides write latencies, availability, and read throughput comparable to that of eventual consistency but also provides the consistency guarantees that suit the needs of applications written to operate in the context of a user. The following graphic illustrates the session consistency with musical notes. The "West US 2 writer" and the "East US 2 reader" are using the same session (Session A) so they both read the same data at the same time. Whereas the "Australia East" region is using "Session B" so, it receives data later but in the same order as the writes.
+Session consistency is the most widely used consistency level for both single region and globally distributed applications. It provides write latencies, availability, and read throughput comparable to that of eventual consistency. Session consistency also provides the consistency guarantees that suit the needs of applications written to operate in the context of a user. The following graphic illustrates session consistency with musical notes. The "West US 2 writer" and the "East US 2 reader" are using the same session (Session A), so they both read the same data at the same time. In contrast, the "Australia East" region is using "Session B", so it receives data later but in the same order as the writes.
- :::image type="content" source="media/consistency-levels/session-consistency.gif" alt-text="Illustration of session consistency level":::
### Consistent prefix consistency

Like all consistency levels weaker than Strong, writes are replicated to a minimum of three replicas (in a four-replica set) in the local region, with asynchronous replication to all other regions.
-In consistent prefix, updates made as single document writes see eventual consistency.
+In consistent prefix, updates made as single document writes see eventual consistency.
+ Updates made as a batch within a transaction are returned consistent to the transaction in which they were committed. Write operations within a transaction of multiple documents are always visible together.
-Assume two write operations are performed transactionally (all or nothing operations) on document Doc1 followed by document Doc2, within transactions T1 and T2. When client does a read in any replica, the user will see either "Doc1 v1 and Doc2 v1" or "Doc1 v2 and Doc2 v2" or neither document if the replica is lagging, but never "Doc1 v1 and Doc2 v2" or "Doc1 v2 and Doc2 v1" for the same read or query operation.
+Assume two write operations are performed transactionally (all or nothing operations) on document Doc1 followed by document Doc2, within transactions T1 and T2. When the client does a read in any replica, the user sees either "Doc1 v1 and Doc2 v1" or "Doc1 v2 and Doc2 v2" or neither document if the replica is lagging, but never "Doc1 v1 and Doc2 v2" or "Doc1 v2 and Doc2 v1" for the same read or query operation.
The following graphic illustrates consistent prefix consistency with musical notes. In all the regions, the reads never see out-of-order writes for a transactional batch of writes:
- :::image type="content" source="media/consistency-levels/consistent-prefix.gif" alt-text="Illustration of consistent prefix":::
### Eventual consistency

Like all consistency levels weaker than Strong, writes are replicated to a minimum of three replicas (in a four-replica set) in the local region, with asynchronous replication to all other regions.
-In Eventual consistency, the client will issue read requests against any one of the four replicas in the specified region. This replica may be lagging and could return stale or no data.
+In Eventual consistency, the client issues read requests against any one of the four replicas in the specified region. This replica may be lagging and could return stale or no data.
-Eventual consistency is the weakest form of consistency because a client may read the values that are older than the ones it had read before. Eventual consistency is ideal where the application does not require any ordering guarantees. Examples include count of Retweets, Likes, or non-threaded comments. The following graphic illustrates the eventual consistency with musical notes.
+Eventual consistency is the weakest form of consistency because a client may read the values that are older than the ones it read in the past. Eventual consistency is ideal where the application doesn't require any ordering guarantees. Examples include count of Retweets, Likes, or non-threaded comments. The following graphic illustrates the eventual consistency with musical notes.
- :::image type="content" source="media/consistency-levels/eventual-consistency.gif" alt-text="viIllustration of eventual consistency":::
## Consistency guarantees in practice
In practice, you may often get stronger consistency guarantees. Consistency guar
If there are no write operations on the database, a read operation with **eventual**, **session**, or **consistent prefix** consistency levels is likely to yield the same results as a read operation with strong consistency level.
-If your Azure Cosmos DB account is configured with a consistency level other than the strong consistency, you can find out the probability that your clients may get strong and consistent reads for your workloads by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal, to learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
+If your account is configured with a consistency level other than strong consistency, you can find out the probability that your clients may get strong and consistent reads for your workloads. You can figure out this probability by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal. To learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
-Probabilistic bounded staleness shows how eventual is your eventual consistency. This metric provides an insight into how often you can get a stronger consistency than the consistency level that you have currently configured on your Azure Cosmos DB account. In other words, you can see the probability (measured in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
+Probabilistically bounded staleness shows how eventual your eventual consistency is. This metric provides insight into how often you can get a stronger consistency than the consistency level that you've currently configured on your Azure Cosmos DB account. In other words, you can see the probability (measured in milliseconds) of getting consistent reads for a combination of write and read regions.
## Consistency levels and latency
The write latency for all consistency levels is always guaranteed to be less tha
For Azure Cosmos DB accounts configured with strong consistency with more than one region, the write latency is equal to two times round-trip time (RTT) between any of the two farthest regions, plus 10 milliseconds at the 99th percentile. High network RTT between the regions will translate to higher latency for Azure Cosmos DB requests since strong consistency completes an operation only after ensuring that it has been committed to all regions within an account.
-The exact RTT latency is a function of speed-of-light distance and the Azure networking topology. Azure networking doesn't provide any latency SLAs for the RTT between any two Azure regions, however it does publish [Azure network round-trip latency statistics](../networking/azure-network-latency.md). For your Azure Cosmos DB account, replication latencies are displayed in the Azure portal. You can use the Azure portal (go to the Metrics blade, select Consistency tab) to monitor the replication latencies between various regions that are associated with your Azure Cosmos DB account.
+The exact RTT latency is a function of speed-of-light distance and the Azure networking topology. Azure networking doesn't provide any latency SLAs for the RTT between any two Azure regions; however, it does publish [Azure network round-trip latency statistics](../networking/azure-network-latency.md). For your Azure Cosmos DB account, replication latencies are displayed in the Azure portal. In the portal, go to the Metrics section and select the Consistency option to monitor the replication latencies between the regions associated with your Azure Cosmos DB account.
> [!IMPORTANT] > Strong consistency for accounts with regions spanning more than 5,000 miles (8,000 kilometers) is blocked by default due to high write latency. To enable this capability, contact support.
The exact RTT latency is a function of speed-of-light distance and the Azure net
- For strong and bounded staleness, reads are done against two replicas in a four replica set (minority quorum) to provide consistency guarantees. Session, consistent prefix and eventual do single replica reads. The result is that, for the same number of request units, read throughput for strong and bounded staleness is half of the other consistency levels. -- For a given type of write operation, such as insert, replace, upsert, and delete, the write throughput for request units is identical for all consistency levels. For strong consistency, changes need to be committed in every region (global majority) while for all other consistency levels, local majority (three replicas in a four replica set) is being used.
+- For a given type of write operation, such as insert, replace, upsert, and delete, the write throughput for request units is identical for all consistency levels. For strong consistency, changes need to be committed in every region (global majority) while for all other consistency levels, local majority (three replicas in a four replica set) is being used.
|**Consistency Level**|**Quorum Reads**|**Quorum Writes**|
|--|--|--|
The exact RTT latency is a function of speed-of-light distance and the Azure net
## <a id="rto"></a>Consistency levels and data durability
-Within a globally distributed database environment there is a direct relationship between the consistency level and data durability in the presence of a region-wide outage. As you develop your business continuity plan, you need to understand the maximum period of recent data updates the application can tolerate losing when recovering after a disruptive event. The time period of updates that you might afford to lose is known as **recovery point objective** (**RPO**).
+Within a globally distributed database environment, there's a direct relationship between the consistency level and data durability in the presence of a region-wide outage. As you develop your business continuity plan, you need to understand the maximum period of recent data updates the application can tolerate losing when recovering after a disruptive event. The time period of updates that you might afford to lose is known as **recovery point objective** (**RPO**).
-The table below defines the relationship between consistency model and data durability in the presence of a region wide outage.
+This table defines the relationship between the consistency model and data durability in the presence of a region-wide outage.
|**Region(s)**|**Replication mode**|**Consistency level**|**RPO**|
|||||
The table below defines the relationship between consistency model and data dura
|>1|Multiple write regions|Session, Consistent Prefix, Eventual|< 15 minutes|
|>1|Multiple write regions|Bounded Staleness|*K* & *T*|
-*K* = The number of *"K"* versions (i.e., updates) of an item.
+*K* = The number of *"K"* versions (that is, updates) of an item.
*T* = The time interval *"T"* since the last update.
-For a single region account, the minimum value of *K* and *T* is 10 write operations or 5 seconds. For multi-region accounts the minimum value of *K* and *T* is 100,000 write operations or 300 seconds. This defines the minimum RPO for data when using Bounded Staleness.
+For a single region account, the minimum value of *K* and *T* is 10 write operations or 5 seconds. For multi-region accounts, the minimum value of *K* and *T* is 100,000 write operations or 300 seconds. This value defines the minimum RPO for data when using Bounded Staleness.
## Strong consistency and multiple write regions
-Azure Cosmos DB accounts configured with multiple write regions cannot be configured for strong consistency as it is not possible for a distributed system to provide an RPO of zero and an RTO of zero. Additionally, there are no write latency benefits on using strong consistency with multiple write regions because a write to any region must be replicated and committed to all configured regions within the account. This results in the same write latency as a single write region account.
+Azure Cosmos DB accounts configured with multiple write regions can't be configured for strong consistency as it isn't possible for a distributed system to provide an RPO of zero and an RTO of zero. Additionally, there are no write latency benefits on using strong consistency with multiple write regions because a write to any region must be replicated and committed to all configured regions within the account. This scenario results in the same write latency as a single write region account.
-## Additional reading
+## More reading
To learn more about consistency concepts, read the following articles:
cosmos-db Custom Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/custom-commands.md
Title: MongoDB extension commands to manage data in Azure Cosmos DB's API for MongoDB
-description: This article describes how to use MongoDB extension commands to manage data stored in Azure Cosmos DB's API for MongoDB.
+ Title: MongoDB extension commands
+
+description: This article describes how to use MongoDB extension commands to manage data stored in Azure Cosmos DB for MongoDB.
- Previously updated : 07/30/2021+ Last updated : 02/27/2023 ms.devlang: javascript
-# Use MongoDB extension commands to manage data stored in Azure Cosmos DB's API for MongoDB
+# Use MongoDB extension commands to manage data stored in Azure Cosmos DB for MongoDB
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-The following document contains the custom action commands that are specific to Azure Cosmos DB's API for MongoDB. These commands can be used to create and obtain database resources that are specific to the [Azure Cosmos DB capacity model](../account-databases-containers-items.md).
+The following document contains the custom action commands that are specific to Azure Cosmos DB for MongoDB. These commands can be used to create and obtain database resources that are specific to the [Azure Cosmos DB capacity model](../account-databases-containers-items.md).
+
+By using Azure Cosmos DB for MongoDB, you can enjoy the shared benefits of Azure Cosmos DB. These benefits include, but aren't limited to:
+
+* Global distribution
+* Automatic sharding
+* High availability
+* Latency guarantees
+* Encryption at rest
+* Backups
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits Azure Cosmos DB such as global distribution, automatic sharding, high availability, latency guarantees, automatic, encryption at rest, backups, and many more, while preserving your investments in your MongoDB app. You can communicate with the Azure Cosmos DB's API for MongoDB by using any of the open-source [MongoDB client drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the [MongoDB wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+You can enjoy these benefits while preserving your investments in your existing MongoDB applications. You can communicate with Azure Cosmos DB for MongoDB by using any of the open-source [MongoDB client drivers](https://docs.mongodb.org/ecosystem/drivers). Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the [MongoDB wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
## MongoDB protocol support
-Azure Cosmos DB's API for MongoDB is compatible with MongoDB server version 4.0, 3.6, and 3.2. See supported features and syntax in [4.0](feature-support-40.md), [3.6](feature-support-36.md), and [3.2](feature-support-32.md) articles for more details.
+Azure Cosmos DB for MongoDB is compatible with MongoDB server version 4.0, 3.6, and 3.2. For more information, see supported features and syntax in versions [4.0](feature-support-40.md), [3.6](feature-support-36.md), and [3.2](feature-support-32.md).
-The following extension commands provide the ability to create and modify Azure Cosmos DB-specific resources via database requests:
+The following extension commands create and modify Azure Cosmos DB-specific resources via database requests:
* [Create database](#create-database)
* [Update database](#update-database)
The following extension commands provide the ability to create and modify Azure
The create database extension command creates a new MongoDB database. The database name can be used from the database context set by the `use database` command. The following table describes the parameters within the command:
-|**Field**|**Type** |**Description** |
-||||
-| `customAction` | `string` | Name of the custom command, it must be "CreateDatabase". |
-| `offerThroughput` | `int` | Provisioned throughput that you set on the database. This parameter is optional. |
-| `autoScaleSettings` | `Object` | Required for [Autoscale mode](../provision-throughput-autoscale.md). This object contains the settings associated with the Autoscale capacity mode. You can set up the `maxThroughput` value, which describes the highest amount of Request Units that the collection will be increased to dynamically. |
+| Field | Type | Description |
+| | | |
+| `customAction` | `string` | Name of the custom command. The value must be `CreateDatabase`. |
+| `offerThroughput` | `int` | Provisioned throughput that you set on the database. This parameter is optional. |
+| `autoScaleSettings` | `Object` | Required for [Autoscale mode](../provision-throughput-autoscale.md). This object contains the settings associated with the Autoscale capacity mode. You can set up the `maxThroughput` value, which describes the highest number of Request Units that the collection can increase to dynamically. |
### Output
-If the command is successful, it will return the following response:
+If the command is successful, it returns the following response:
```javascript { "ok" : 1 }
If the command is successful, it will return the following response:
See the [default output](#default-output) of custom command for the parameters in the output.
-### Examples
-
-#### Create a database
+### Example: Create a database
To create a database named `"test"` that uses all the default values, use the following command:
use test
db.runCommand({customAction: "CreateDatabase"}); ```
-This command will create a database without database-level throughput. This means that the collections within this database will need to specify the amount of throughput that you need to use.
+This command creates a database without database-level throughput. This means that the collections within this database need to specify their own provisioned throughput.
-#### Create a database with throughput
+### Example: Create a database with throughput
To create a database named `"test"` and to specify a [database-level](../set-throughput.md#set-throughput-on-a-database) provisioned throughput of 1000 RUs, use the following command:
use test
db.runCommand({customAction: "CreateDatabase", offerThroughput: 1000 }); ```
-This will create a database and set a throughput to it. All collections within this database will share the set throughput, unless the collections are created with [a specific throughput level](../set-throughput.md#set-throughput-on-a-database-and-a-container).
+This command creates a database and sets a throughput to it. All collections within this database share the set throughput, unless the collections are created with [a specific throughput level](../set-throughput.md#set-throughput-on-a-database-and-a-container).
-#### Create a database with Autoscale throughput
+### Example: Create a database with Autoscale throughput
To create a database named `"test"` and to specify an Autoscale max throughput of 20,000 RU/s at [database-level](../set-throughput.md#set-throughput-on-a-database), use the following command:
db.runCommand({customAction: "CreateDatabase", autoScaleSettings: { maxThroughpu
## <a id="update-database"></a> Update database
-The update database extension command updates the properties associated with the specified database. Changing your database from provisioned throughput to autoscale and vice-versa is only supported in the Azure Portal. The following table describes the parameters within the command:
+The update database extension command updates the properties associated with the specified database. Changing your database from provisioned throughput to autoscale and vice-versa is only supported in the Azure portal. The following table describes the parameters within the command:
-|**Field**|**Type** |**Description** |
-||||
-| `customAction` | `string` | Name of the custom command. Must be "UpdateDatabase". |
-| `offerThroughput` | `int` | New provisioned throughput that you want to set on the database if the database uses [database-level throughput](../set-throughput.md#set-throughput-on-a-database) |
-| `autoScaleSettings` | `Object` | Required for [Autoscale mode](../provision-throughput-autoscale.md). This object contains the settings associated with the Autoscale capacity mode. You can set up the `maxThroughput` value, which describes the highest amount of Request Units that the database will be increased to dynamically. |
+| Field | Type | Description |
+| | | |
+| `customAction` | `string` | Name of the custom command. The value must be `UpdateDatabase`. |
+| `offerThroughput` | `int` | New provisioned throughput that you want to set on the database if the database uses [database-level throughput](../set-throughput.md#set-throughput-on-a-database) |
+| `autoScaleSettings` | `Object` | Required for [Autoscale mode](../provision-throughput-autoscale.md). This object contains the settings associated with the Autoscale capacity mode. You can set up the `maxThroughput` value, which describes the highest number of Request Units that the database can be increased to dynamically. |
-This command uses the database specified in the context of the session. This is the database you used in the `use <database>` command. At the moment, the database name can not be changed using this command.
+This command uses the database specified in the context of the session. This database is the same one you used in the `use <database>` command. At the moment, the database name can't be changed using this command.
### Output
-If the command is successful, it will return the following response:
+If the command is successful, it returns the following response:
```javascript { "ok" : 1 }
If the command is successful, it will return the following response:
See the [default output](#default-output) of custom command for the parameters in the output.
-### Examples
-
-#### Update the provisioned throughput associated with a database
+### Example: Update the provisioned throughput associated with a database
To update the provisioned throughput of a database with name `"test"` to 1200 RUs, use the following command:
use test
db.runCommand({customAction: "UpdateDatabase", offerThroughput: 1200 }); ```
-#### Update the Autoscale throughput associated with a database
+### Example: Update the Autoscale throughput associated with a database
To update the provisioned throughput of a database with name `"test"` to 20,000 RUs, or to transform it to an [Autoscale throughput level](../provision-throughput-autoscale.md), use the following command:
use test
db.runCommand({customAction: "UpdateDatabase", autoScaleSettings: { maxThroughput: 20000 } }); ``` - ## <a id="get-database"></a> Get database The get database extension command returns the database object. The database name is used from the database context against which the command is executed.
The get database extension command returns the database object. The database nam
The following table describes the parameters within the command:
+| Field | Type | Description |
+| | | |
+| `customAction` | `string` | Name of the custom command. The value must be `GetDatabase`. |
-|**Field**|**Type** |**Description** |
-||||
-| `customAction` | `string` | Name of the custom command. Must be "GetDatabase"|
-
### Output

If the command succeeds, the response contains a document with the following fields:
-|**Field**|**Type** |**Description** |
-||||
-| `ok` | `int` | Status of response. 1 == success. 0 == failure. |
-| `database` | `string` | Name of the database. |
-| `provisionedThroughput` | `int` | Provisioned throughput that is set on the database if the database is using [manual database-level throughput](../set-throughput.md#set-throughput-on-a-database) |
-| `autoScaleSettings` | `Object` | This object contains the capacity parameters associated with the database if it is using the [Autoscale mode](../provision-throughput-autoscale.md). The `maxThroughput` value describes the highest amount of Request Units that the database will be increased to dynamically. |
+| Field | Type | Description |
+| | | |
+| `ok` | `int` | Status of response. 1 == success. 0 == failure. |
+| `database` | `string` | Name of the database. |
+| `provisionedThroughput` | `int` | Provisioned throughput that is set on the database if the database is using [manual database-level throughput](../set-throughput.md#set-throughput-on-a-database) |
+| `autoScaleSettings` | `Object` | This object contains the capacity parameters associated with the database if it's using the [Autoscale mode](../provision-throughput-autoscale.md). The `maxThroughput` value describes the highest number of Request Units that the database can be increased to dynamically. |
If the command fails, a default custom command response is returned. See the [default output](#default-output) of custom command for the parameters in the output.
-### Examples
-
-#### Get the database
+### Example: Get the database
To get the database object for a database named `"test"`, use the following command:
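Following the pattern of the other examples in this article, the command presumably looks like this sketch:

```javascript
use test
db.runCommand({customAction: "GetDatabase"});
```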
The create collection extension command creates a new MongoDB collection. The da
The following table describes the parameters within the command:
-| **Field** | **Type** | **Required** | **Description** |
-|||||
-| `customAction` | `string` | Required | Name of the custom command. Must be "CreateCollection".|
-| `collection` | `string` | Required | Name of the collection. No special characters or spaces are allowed.|
-| `offerThroughput` | `int` | Optional | Provisioned throughput to set on the database. If this parameter is not provided, it will default to the minimum, 400 RU/s. * To specify throughput beyond 10,000 RU/s, the `shardKey` parameter is required.|
-| `shardKey` | `string` | Required for collections with large throughput | The path to the Shard Key for the sharded collection. This parameter is required if you set more than 10,000 RU/s in `offerThroughput`. If it is specified, all documents inserted will require this key and value. |
-| `autoScaleSettings` | `Object` | Required for [Autoscale mode](../provision-throughput-autoscale.md) | This object contains the settings associated with the Autoscale capacity mode. You can set up the `maxThroughput` value, which describes the highest amount of Request Units that the collection will be increased to dynamically. |
-| `indexes` | `Array` | Optionally configure indexes. This parameter is supported for 3.6+ accounts only. | When present, an index on _id is required. Each entry in the array must include a key of one or more fields, a name, and may contain index options. For example, to create a compound unique index on the fields `a` and `b` use this entry: `{key: {a: 1, b: 1}, name:"a_1_b_1", unique: true}`.
+| Field | Type | Required | Description |
+| | | | |
+| `customAction` | `string` | Required | Name of the custom command. The value must be `CreateCollection`. |
+| `collection` | `string` | Required | Name of the collection. No special characters or spaces are allowed. |
+| `offerThroughput` | `int` | Optional | Provisioned throughput to set on the database. If this parameter isn't provided, it defaults to the minimum, 400 RU/s. * To specify throughput beyond 10,000 RU/s, the `shardKey` parameter is required. |
+| `shardKey` | `string` | Required for collections with large throughput | The path to the Shard Key for the sharded collection. This parameter is required if you set more than 10,000 RU/s in `offerThroughput`. If it's specified, all documents inserted require this key and value. |
+| `autoScaleSettings` | `Object` | Required for [Autoscale mode](../provision-throughput-autoscale.md) | This object contains the settings associated with the Autoscale capacity mode. You can set up the `maxThroughput` value, which describes the highest number of Request Units that the collection can be increased to dynamically. |
+| `indexes` | `Array` | Optionally configure indexes. This parameter is supported for 3.6+ accounts only. | When present, an index on _id is required. Each entry in the array must include a key of one or more fields, a name, and may contain index options. For example, to create a compound unique index on the fields `a` and `b` use this entry: `{key: {a: 1, b: 1}, name:"a_1_b_1", unique: true}`. |
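As an illustration of the `indexes` parameter in the preceding table, here's a minimal sketch (collection and index names are illustrative) that creates a collection with the required `_id` index plus a unique compound index:

```javascript
use test
db.runCommand({
    customAction: "CreateCollection",
    collection: "indexedCollection",
    indexes: [
        {key: {_id: 1}, name: "_id_1"},
        {key: {a: 1, b: 1}, name: "a_1_b_1", unique: true}
    ]
});
```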
### Output

Returns a default custom command response. See the [default output](#default-output) of custom command for the parameters in the output.
-### Examples
+### Example: Create a collection with the minimum configuration
-#### Create a collection with the minimum configuration
-
-To create a new collection with name `"testCollection"` and the default values, use the following command:
+To create a new collection with name `"testCollection"` and the default values, use the following command:
```javascript
use test
db.runCommand({customAction: "CreateCollection", collection: "testCollection"});
```
-This will result in a new fixed, unsharded, collection with 400RU/s and an index on the `_id` field automatically created. This type of configuration will also apply when creating new collections via the `insert()` function. For example:
+This operation results in a new fixed, unsharded collection with 400 RU/s and an index on the `_id` field automatically created. This type of configuration also applies when creating new collections via the `insert()` function. For example:
```javascript
use test
db.newCollection.insert({});
```
-#### Create a unsharded collection
+### Example: Create an unsharded collection
-To create a unsharded collection with name `"testCollection"` and provisioned throughput of 1000 RUs, use the following command:
+To create an unsharded collection with name `"testCollection"` and provisioned throughput of 1000 RUs, use the following command:
```javascript
use test
db.runCommand({customAction: "CreateCollection", collection: "testCollection", offerThroughput: 1000});
-```
+```
You can create a collection with up to 10,000 RU/s as the `offerThroughput` without needing to specify a shard key. For collections with larger throughput, check out the next section.
-#### Create a sharded collection
+### Example: Create a sharded collection
To create a sharded collection with name `"testCollection"` and provisioned throughput of 11,000 RUs, and a `shardkey` property "a.b", use the following command:
db.runCommand({customAction: "CreateCollection", collection: "testCollection", o
This command now requires the `shardKey` parameter, since more than 10,000 RU/s is specified in `offerThroughput`.
-#### Create an unsharded Autoscale collection
+### Example: Create an unsharded Autoscale collection
To create an unsharded collection named `'testCollection'` that uses [Autoscale throughput capacity](../provision-throughput-autoscale.md) set to 4,000 RU/s, use the following command:
db.runCommand({
}); ```
-For the `autoScaleSettings.maxThroughput` value you can specify a range from 4,000 RU/s to 10,000 RU/s without a shard key. For higher autoscale throughput, you need to specify the `shardKey` parameter.
+For the `autoScaleSettings.maxThroughput` value, you can specify a range from 4,000 RU/s to 10,000 RU/s without a shard key. For higher autoscale throughput, you need to specify the `shardKey` parameter.
-#### Create a sharded Autoscale collection
+### Example: Create a sharded Autoscale collection
To create a sharded collection named `'testCollection'` with a shard key called `'a.b'`, and that uses [Autoscale throughput capacity](../provision-throughput-autoscale.md) set to 20,000 RU/s, use the following command:
db.runCommand({customAction: "CreateCollection", collection: "testCollection", s
## <a id="update-collection"></a> Update collection
-The update collection extension command updates the properties associated with the specified collection. Changing your collection from provisioned throughput to autoscale and vice-versa is only supported in the Azure Portal.
+The update collection extension command updates the properties associated with the specified collection. Changing your collection from provisioned throughput to autoscale and vice-versa is only supported in the Azure portal.
```javascript {
The update collection extension command updates the properties associated with t
The following table describes the parameters within the command:
-|**Field**|**Type** |**Description** |
-||||
-| `customAction` | `string` | Name of the custom command. Must be "UpdateCollection". |
-| `collection` | `string` | Name of the collection. |
-| `offerThroughput` | `int` | Provisioned throughput to set on the collection.|
-| `autoScaleSettings` | `Object` | Required for [Autoscale mode](../provision-throughput-autoscale.md). This object contains the settings associated with the Autoscale capacity mode. The `maxThroughput` value describes the highest amount of Request Units that the collection will be increased to dynamically. |
-| `indexes` | `Array` | Optionally configure indexes. This parameter is supported for 3.6+ accounts only. When present, the existing indexes of the collection are replaced by the set of indexes specified (including dropping indexes). An index on _id is required. Each entry in the array must include a key of one or more fields, a name, and may contain index options. For example, to create a compound unique index on the fields a and b use this entry: `{key: {a: 1, b: 1}, name: "a_1_b_1", unique: true}`.
+| Field | Type | Description |
+| | | |
+| `customAction` | `string` | Name of the custom command. The value must be `UpdateCollection`. |
+| `collection` | `string` | Name of the collection. |
+| `offerThroughput` | `int` | Provisioned throughput to set on the collection. |
+| `autoScaleSettings` | `Object` | Required for [Autoscale mode](../provision-throughput-autoscale.md). This object contains the settings associated with the Autoscale capacity mode. The `maxThroughput` value describes the highest number of Request Units that the collection can be increased to dynamically. |
+| `indexes` | `Array` | Optionally configure indexes. This parameter is supported for 3.6+ accounts only. When present, the set of indexes specified (including dropping indexes) replaces the existing indexes of the collection. An index on _id is required. Each entry in the array must include a key of one or more fields, a name, and may contain index options. For example, to create a compound unique index on the fields a and b use this entry: `{key: {a: 1, b: 1}, name: "a_1_b_1", unique: true}`. |
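To illustrate the replacement semantics of the `indexes` parameter in the preceding table (the supplied set replaces all existing indexes), here's a minimal sketch with illustrative names:

```javascript
use test
db.runCommand({
    customAction: "UpdateCollection",
    collection: "testCollection",
    indexes: [
        {key: {_id: 1}, name: "_id_1"},
        {key: {b: 1}, name: "b_1"}
    ]
});
```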
### Output

Returns a default custom command response. See the [default output](#default-output) of custom command for the parameters in the output.
-### Examples
-
-#### Update the provisioned throughput associated with a collection
+### Example: Update the provisioned throughput associated with a collection
To update the provisioned throughput of a collection with name `"testCollection"` to 1200 RUs, use the following command:
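Following the pattern of the other update examples, the command presumably looks like:

```javascript
use test
db.runCommand({customAction: "UpdateCollection", collection: "testCollection", offerThroughput: 1200});
```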
The get collection custom command returns the collection object.
The following table describes the parameters within the command:
-|**Field**|**Type** |**Description** |
-||||
-| `customAction` | `string` | Name of the custom command. Must be "GetCollection". |
-| `collection` | `string` | Name of the collection. |
+| Field | Type | Description |
+| | | |
+| `customAction` | `string` | Name of the custom command. The value must be `GetCollection`. |
+| `collection` | `string` | Name of the collection. |
### Output

If the command succeeds, the response contains a document with the following fields:
-|**Field**|**Type** |**Description** |
-||||
-| `ok` | `int` | Status of response. 1 == success. 0 == failure. |
-| `database` | `string` | Name of the database. |
-| `collection` | `string` | Name of the collection. |
-| `shardKeyDefinition` | `document` | Index specification document used as a shard key. This is an optional response parameter. |
-| `provisionedThroughput` | `int` | Provisioned Throughput to set on the collection. This is an optional response parameter. |
-| `autoScaleSettings` | `Object` | This object contains the capacity parameters associated with the database if it is using the [Autoscale mode](../provision-throughput-autoscale.md). The `maxThroughput` value describes the highest amount of Request Units that the collection will be increased to dynamically. |
+| Field | Type | Description |
+| | | |
+| `ok` | `int` | Status of response. 1 == success. 0 == failure. |
+| `database` | `string` | Name of the database. |
+| `collection` | `string` | Name of the collection. |
+| `shardKeyDefinition` | `document` | Index specification document used as a shard key. This field is an optional response parameter. |
+| `provisionedThroughput` | `int` | Provisioned Throughput to set on the collection. This field is an optional response parameter. |
+| `autoScaleSettings` | `Object` | This object contains the capacity parameters associated with the database if it's using the [Autoscale mode](../provision-throughput-autoscale.md). The `maxThroughput` value describes the highest number of Request Units that the collection can be increased to dynamically. |
If the command fails, a default custom command response is returned. See the [default output](#default-output) of custom command for the parameters in the output.
-### Examples
-
-#### Get the collection
+### Example: Get the collection
To get the collection object for a collection named `"testCollection"`, use the following command:
use test
db.runCommand({customAction: "GetCollection", collection: "testCollection"}); ```
-If the collection has an associated throughput capacity to it, it will include the `provisionedThroughput` value, and the output would be:
+If the collection has throughput capacity associated with it, the output includes the `provisionedThroughput` value:
```javascript {
If the collection has an associated throughput capacity to it, it will include t
} ```
-If the collection has an associated Autoscale throughput, it will include the `autoScaleSettings` object with the `maxThroughput` parameter, which defines the maximum throughput the collection will increase to dynamically. Additionally, it will also include the `provisionedThroughput` value, which defines the minimum throughput this collection will reduce to if there are no requests in the collection:
+If the collection has an associated Autoscale throughput, it includes the `autoScaleSettings` object with the `maxThroughput` parameter, which defines the maximum throughput the collection increases to dynamically. It also includes the `provisionedThroughput` value, which defines the minimum throughput this collection reduces to if there are no requests in the collection:
```javascript {
If the collection is sharing [database-level throughput](../set-throughput.md#se
} ```
-## <a id="parallel-change-stream"></a> Parallelizing change streams
-When using [change streams](change-streams.md) at scale, it is best to evenly spread the load. The following command will return one or more change stream resume tokens - each one corresponding to data from a single physical shard/partition (multiple logical shards/partitions can exist on one physical partition). Each resume token will cause watch() to only return data from that physical shard/partition.
+## <a id="parallel-change-stream"></a> Parallelizing change streams
+
+When using [change streams](change-streams.md) at scale, it's best to evenly spread the load. The following command returns one or more change stream resume tokens - each one corresponding to data from a single physical shard/partition (multiple logical shards/partitions can exist on one physical partition). Each resume token causes watch() to only return data from that physical shard/partition.
-Calling db.collection.watch() on each resume token (one thread per token), will scale change streams efficiently.
+Use `db.collection.watch()` on each resume token (one thread per token) to scale change streams efficiently.
```javascript {
Calling db.collection.watch() on each resume token (one thread per token), will
} ```
-### Example
+### Example: Get the stream token
+ Run the custom command to get a resume token for each physical shard/partition. ```javascript
use test
db.runCommand({customAction: "GetChangeStreamTokens", collection: "<Name of the collection>"}) ```
-Run a watch() thread/process for each resume token returned from the GetChangeStreamTokens custom command. Below is an example for one thread.
+Run a watch() thread/process for each resume token returned from the GetChangeStreamTokens custom command. Here's an example for one thread.
```javascript db.test_coll.watch([{ $match: { "operationType": { $in: ["insert", "update", "replace"] } } }, { $project: { "_id": 1, "fullDocument": 1, "ns": 1, "documentKey": 1 } }],
db.test_coll.watch([{ $match: { "operationType": { $in: ["insert", "update", "re
resumeAfter: { "_data" : BinData(0,"eyJWIjoyLCJSaWQiOiJQeFVhQUxuMFNLRT0iLCJDb250aW51YXRpb24iOlt7IkZlZWRSYW5nZSI6eyJ0eXBlIjoiRWZmZWN0aXZlIFBhcnRpdGlvbiBLZXkgUmFuZ2UiLCJ2YWx1ZSI6eyJtaW4iOiIiLCJtYXgiOiJGRiJ9fSwiU3RhdGUiOnsidHlwZSI6ImNvbnRpbndkFLbiIsInZhbHVlIjoiXCIxODQ0XCIifX1dfQ=="), "_kind" : NumberInt(1)}}) ```
-The document (value) in the resumeAfter field represents the resume token. watch() will return a curser for all documents that were inserted, updated, or replaced from that physical partition since the GetChangeStreamTokens custom command was run. A sample of the data returned is below.
+The document (value) in the resumeAfter field represents the resume token. The command `watch()` returns a cursor for all documents that were inserted, updated, or replaced from that physical partition since the GetChangeStreamTokens custom command was run. A sample of the data returned is included here.
```javascript
-{ "_id" : { "_data" : BinData(0,"eyJWIjoyLCJSaWQiOiJQeFVhQUxuMFNLRT0iLCJDfdsfdsfdsft7IkZlZWRSYW5nZSI6eyJ0eXBlIjoiRWZmZWN0aXZlIFBhcnRpdGlvbiBLZXkgUmFuZ2UiLCJ2YWx1ZSI6eyJtaW4iOiIiLCJtYXgiOiJGRiJ9fSwiU3RhdGUiOnsidHlwZSI6ImNvbnRpbnVhdGlvbiIsInZhbHVlIjoiXCIxOTgwXCIifX1dfQ=="), "_kind" : 1 },
- "fullDocument" : { "_id" : ObjectId("60da41ec9d1065b9f3b238fc"), "name" : John, "age" : 6 }, "ns" : { "db" : "test-db", "coll" : "test_coll" }, "documentKey" : { "_id" : ObjectId("60da41ec9d1065b9f3b238fc") }}
+{
+ "_id": {
+ "_data": BinData(0,
+ "eyJWIjoyLCJSaWQiOiJQeFVhQUxuMFNLRT0iLCJDfdsfdsfdsft7IkZlZWRSYW5nZSI6eyJ0eXBlIjoiRWZmZWN0aXZlIFBhcnRpdGlvbiBLZXkgUmFuZ2UiLCJ2YWx1ZSI6eyJtaW4iOiIiLCJtYXgiOiJGRiJ9fSwiU3RhdGUiOnsidHlwZSI6ImNvbnRpbnVhdGlvbiIsInZhbHVlIjoiXCIxOTgwXCIifX1dfQ=="),
+ "_kind": 1
+ },
+ "fullDocument": {
+ "_id": ObjectId("60da41ec9d1065b9f3b238fc"),
+ "name": John,
+ "age": 6
+ },
+ "ns": {
+ "db": "test-db",
+ "coll": "test_coll"
+ },
+ "documentKey": {
+ "_id": ObjectId("60da41ec9d1065b9f3b238fc")
+ }
+}
```
-Note that each document returned includes a resume token (they are all the same for each page). This resume token should be stored and reused if the thread/process dies. This resume token will pick up from where you left off, and receive data only from that physical partition.
-
+Each document returned includes a resume token (they're all the same for each page). This resume token should be stored and reused if the thread/process dies. This resume token picks up from where you left off, and receives data only from that physical partition.
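As a hedged illustration of storing and reusing a resume token, here's a minimal sketch assuming the Node.js MongoDB driver; the file-based persistence and the worker function name are hypothetical, and the pipeline mirrors the watch() example shown earlier.

```javascript
const fs = require("fs");
const { MongoClient } = require("mongodb");

// Hypothetical worker: watches one physical partition and persists the latest
// resume token after each event, so a restarted worker can pick up where it left off.
async function watchPartition(connectionString, resumeToken) {
  const client = await MongoClient.connect(connectionString);
  const changeStream = client
    .db("test")
    .collection("test_coll")
    .watch(
      [
        { $match: { operationType: { $in: ["insert", "update", "replace"] } } },
        { $project: { _id: 1, fullDocument: 1, ns: 1, documentKey: 1 } }
      ],
      { resumeAfter: resumeToken, fullDocument: "updateLookup" }
    );
  for await (const event of changeStream) {
    // Save the token before processing, so a crash doesn't lose the position.
    fs.writeFileSync("resume-token.json", JSON.stringify(event._id));
    console.log(event.documentKey); // application-specific handling goes here
  }
}
```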
## <a id="default-output"></a> Default output of a custom command If not specified, a custom response contains a document with the following fields:
-|**Field**|**Type** |**Description** |
-||||
-| `ok` | `int` | Status of response. 1 == success. 0 == failure. |
-| `code` | `int` | Only returned when the command failed (i.e. ok == 0). Contains the MongoDB error code. This is an optional response parameter. |
-| `errMsg` | `string` | Only returned when the command failed (i.e. ok == 0). Contains a user-friendly error message. This is an optional response parameter. |
+| Field | Type | Description |
+| | | |
+| `ok` | `int` | Status of response. 1 == success. 0 == failure. |
+| `code` | `int` | Only returned when the command failed (that is, ok == 0). Contains the MongoDB error code. This field is an optional response parameter. |
+| `errMsg` | `string` | Only returned when the command failed (that is, ok == 0). Contains a user-friendly error message. This field is an optional response parameter. |
For example:
For example:
## Next steps
-Next you can proceed to learn the following Azure Cosmos DB concepts:
+Next you can proceed to learn the following Azure Cosmos DB concepts:
* [Indexing in Azure Cosmos DB](../index-policy.md)
* [Expire data in Azure Cosmos DB automatically with time to live](../time-to-live.md)
cosmos-db Pre Migration Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/pre-migration-steps.md
Title: Pre-migration steps for data migration to Azure Cosmos DB's API for MongoDB
+ Title: Pre-data migration steps
+ description: This doc provides an overview of the prerequisites for a data migration from MongoDB to Azure Cosmos DB.++ - Previously updated : 04/05/2022--+ Last updated : 02/27/2023
-# Pre-migration steps for data migrations from MongoDB to Azure Cosmos DB's API for MongoDB
+# Pre-migration steps for data migrations from MongoDB to Azure Cosmos DB for MongoDB
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)] > [!IMPORTANT] > Please read this entire guide before carrying out your pre-migration steps. >
-This MongoDB pre-migration guide is part of series on MongoDB migration. The critical MongoDB migration steps are pre-migration, migration, and [post-migration](post-migration-optimization.md), as shown below.
+This MongoDB pre-migration guide is part of a series on MongoDB migration. The critical MongoDB migration steps are pre-migration, migration, and [post-migration](post-migration-optimization.md), as shown in this guide.
-![Diagram of migration steps.](./media/pre-migration-steps/overall-migration-steps.png)
+![Diagram of the migration steps from pre to post migration.](./media/pre-migration-steps/overall-migration-steps.png)
## Overview of pre-migration
It's critical to carry out certain up-front planning and decision-making about y
Your goal in pre-migration is to: 1. Ensure that you set up Azure Cosmos DB to fulfill your application's requirements, and
-2. Plan out how you will execute the migration.
+2. Plan out how to execute the migration.
Follow these steps to perform a thorough pre-migration
All of the above steps are critical for ensuring a successful migration.
When you plan a migration, we recommend that whenever possible you plan at the per-resource level.
-The [Database Migration Assistant](https://aka.ms/mongodma)(DMA) assists you with the [Discovery](#programmatic-discovery-using-the-database-migration-assistant) and [Assessment](#programmatic-assessment-using-the-database-migration-assistant) stages of the planning.
+The [Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB)(DMA) assists you with the [Discovery](#programmatic-discovery-using-the-database-migration-assistant) and [Assessment](#programmatic-assessment-using-the-database-migration-assistant) stages of the planning.
## Pre-migration discovery
-The first pre-migration step is resource discovery.
-In this step, you need to create a **data estate migration spreadsheet**.
+The first pre-migration step is resource discovery. In this step, you need to create a **data estate migration spreadsheet**.
* This sheet contains a comprehensive list of the existing resources (databases or collections) in your MongoDB data estate. * The purpose of this spreadsheet is to enhance your productivity and help you to plan migration from end-to-end.
In this step, you need to create a **data estate migration spreadsheet**.
### Programmatic discovery using the Database Migration Assistant
-You may use the [Database Migration Assistant](https://aka.ms/mongodma) (DMA) to assist you with the discovery stage and create the data estate migration sheet programmatically.
+You may use the [Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB) (DMA) to assist you with the discovery stage and create the data estate migration sheet programmatically.
-It's easy to [setup and run DMA](https://aka.ms/mongodma#how-to-run-the-dma) through an Azure Data Studio client. It can be run from any machine connected to your source MongoDB environment.
+It's easy to [set up and run DMA](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB#how-to-run-the-dma) through an Azure Data Studio client. It can be run from any machine connected to your source MongoDB environment.
You can use either one of the following DMA output files as the data estate migration spreadsheet:
You can use either one of the following DMA output files as the data estate migr
* `workload_collection_details.csv` - Gives a collection-level view of the source workload. Columns in the file are: Database Name, Collection Name, Doc Count, Average Document Size, Data size, Index Count, Index Size, and Index definitions. Here's a sample database-level migration spreadsheet created by DMA:
-![Data estate spreadsheet example](./media/pre-migration-steps/data-estate-spreadsheet.png)
+
+| DB Name | Collection Count | Doc Count | Avg Doc Size | Data Size | Index Count | Index Size |
+| | | | | | | |
+| `bookstoretest` | 2 | 192200 | 4144 | 796572532 | 7 | 260636672 |
+| `cosmosbookstore` | 1 | 96604 | 4145 | 400497620 | 1 | 1814528 |
+| `geo` | 2 | 25554 | 252 | 6446542 | 2 | 266240 |
+| `kagglemeta` | 2 | 87934912 | 190 | 16725184704 | 2 | 891363328 |
+| `pe_orig` | 2 | 57703820 | 668 | 38561434711 | 2 | 861605888 |
+| `portugeseelection` | 2 | 30230038 | 687 | 20782985862 | 1 | 450932736 |
+| `sample_mflix` | 5 | 75583 | 691 | 52300763 | 5 | 798720 |
+| `test` | 1 | 22 | 545 | 12003 | 0 | 0 |
+| `testcol` | 26 | 46 | 88 | 4082 | 32 | 589824 |
+| `testhav` | 3 | 2 | 528 | 1057 | 3 | 36864 |
+| **TOTAL:** | **46** | **176258781** | | **72.01 GB** | | **2.3 GB** |
### Manual discovery
-Alternately, you may refer to the sample spreadsheet above and create a similar document yourself.
+Alternatively, you may refer to the sample spreadsheet in this guide and create a similar document yourself.
* The spreadsheet should be structured as a record of your data estate resources, in list form. * Each row corresponds to a resource (database or collection). * Each column corresponds to a property of the resource; start with at least *name* and *data size (GB)* as columns.
-* As you progress through this guide, you'll build this spreadsheet into a tracking document for your end-to-end migration planning, adding columns as needed.
+* As you progress through this guide, you build this spreadsheet into a tracking document for your end-to-end migration planning, adding columns as needed.
Here are some tools you can use for discovering resources:
Assessment involves finding out whether you're using the [features and syntax th
### Programmatic assessment using the Database Migration Assistant
-[Database Migration Assistant](https://aka.ms/mongodma) (DMA) also assists you with the assessment stage of pre-migration planning.
+[Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB) (DMA) also assists you with the assessment stage of pre-migration planning.
-Refer to the section [Programmatic discovery using the Database Migration Assistant](#programmatic-discovery-using-the-database-migration-assistant) to know how to setup and run DMA.
+Refer to the section [Programmatic discovery using the Database Migration Assistant](#programmatic-discovery-using-the-database-migration-assistant) to learn how to set up and run DMA.
-The DMA notebook runs a few assessment rules against the resource list it gathers from source MongoDB. The assessment result lists the required and recommended changes needed to proceed with the migration.
+The DMA notebook runs a few assessment rules against the resource list it gathers from source MongoDB. The assessment result lists the required and recommended changes needed to proceed with the migration.
The results are printed as an output in the DMA notebook and saved to a CSV file - `assessment_result.csv`.
The results are printed as an output in the DMA notebook and saved to a CSV file
## Pre-migration mapping
-With the discovery and assessment steps complete, you are done with the MongoDB side of the equation. Now it is time to plan the Azure Cosmos DB side of the equation. How will you set up and configure your production Azure Cosmos DB resources? Do your planning at a *per-resource* level ΓÇô that means you should add the following columns to your planning spreadsheet:
-* Azure Cosmos DB mapping
-* Shard key
+With the discovery and assessment steps complete, you're done with the MongoDB side of the equation. Now it's time to plan the Azure Cosmos DB side of the equation. How are you planning to set up and configure your production Azure Cosmos DB resources? Do your planning at a *per-resource* level, which means you should add the following columns to your planning spreadsheet:
+
+* Azure Cosmos DB mapping
+* Shard key
* Data model * Dedicated vs shared throughput
More detail is provided in the following sections.
### Capacity planning Trying to do capacity planning for a migration to Azure Cosmos DB?
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md) ### Considerations when using Azure Cosmos DB's API for MongoDB Before you plan your Azure Cosmos DB data estate, make sure you understand the following Azure Cosmos DB concepts: -- **Capacity model**: Database capacity on Azure Cosmos DB is based on a throughput-based model. This model is based on [Request Units per second](../request-units.md), which is a unit that represents the number of database operations that can be executed against a collection on a per-second basis. This capacity can be allocated at [a database or collection level](../set-throughput.md), and it can be provisioned on an allocation model, or using the [autoscale provisioned throughput](../provision-throughput-autoscale.md).--- **Request Units**: Every database operation has an associated Request Units (RUs) cost in Azure Cosmos DB. When executed, this is subtracted from the available request units level on a given second. If a request requires more RUs than the currently allocated RU/s there are two options to solve the issue - increase the amount of RUs, or wait until the next second starts and then retry the operation.--- **Elastic capacity**: The capacity for a given collection or database can change at any time. This allows for the database to elastically adapt to the throughput requirements of your workload.--- **Automatic sharding**: Azure Cosmos DB provides an automatic partitioning system that only requires a shard (or a partition key). The [automatic partitioning mechanism](../partitioning-overview.md) is shared across all the Azure Cosmos DB APIs and it allows for seamless data and throughout scaling through horizontal distribution.
+* **Capacity model**: Database capacity on Azure Cosmos DB is based on a throughput-based model. This model is based on [Request Units per second](../request-units.md), which is a unit that represents the number of database operations that can be executed against a collection on a per-second basis. This capacity can be allocated at [a database or collection level](../set-throughput.md), and it can be provisioned on an allocation model, or using the [autoscale provisioned throughput](../provision-throughput-autoscale.md).
+* **Request Units**: Every database operation has an associated Request Units (RUs) cost in Azure Cosmos DB. When executed, the request units are subtracted from the available request units level on a given second. If a request requires more RUs than the currently allocated RU/s, there are two options to solve the issue: increase the number of RUs, or wait until the next second starts, and then retry the operation (see the sketch after this list).
+* **Elastic capacity**: The capacity for a given collection or database can change at any time. This flexibility allows for the database to elastically adapt to the throughput requirements of your workload.
+* **Automatic sharding**: Azure Cosmos DB provides an automatic partitioning system that only requires a shard key (or a partition key). The [automatic partitioning mechanism](../partitioning-overview.md) is shared across all the Azure Cosmos DB APIs and it allows for seamless data and throughput scaling through horizontal distribution.
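To make the wait-and-retry behavior concrete, here's a minimal sketch using the Node.js MongoDB driver against Azure Cosmos DB's API for MongoDB. The connection string, database, and collection names are placeholders and the backoff policy is illustrative only; error code 16500 is the code the API for MongoDB returns when a request is rate limited.

```typescript
import { MongoClient } from "mongodb";

// Illustrative sketch only: retry a write that was throttled because it
// needed more RUs than were available in that second.
async function insertWithRetry(uri: string, doc: Record<string, unknown>): Promise<void> {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const collection = client.db("retail").collection("orders");
    for (let attempt = 1; attempt <= 5; attempt++) {
      try {
        await collection.insertOne(doc);
        return;
      } catch (err: any) {
        // 16500 = request rate is large (throttled); rethrow anything else.
        if (err.code !== 16500) throw err;
        // Wait for a later second's RU budget, then retry the operation.
        await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
      }
    }
    throw new Error("Insert still throttled after 5 attempts");
  } finally {
    await client.close();
  }
}
```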
### Plan the Azure Cosmos DB data estate
-Figure out what Azure Cosmos DB resources you'll create. This means stepping through your data estate migration spreadsheet and mapping each existing MongoDB resource to a new Azure Cosmos DB resource.
+Figure out which Azure Cosmos DB resources you need to create. This process requires stepping through your data estate migration spreadsheet and mapping each existing MongoDB resource to a new Azure Cosmos DB resource.
-* Anticipate that each MongoDB database will become an Azure Cosmos DB database.
-* Anticipate that each MongoDB collection will become an Azure Cosmos DB collection.
-* Choose a naming convention for your Azure Cosmos DB resources. Barring any change in the structure of databases and collections, keeping the same resource names is usually a fine choice.
-* Determine whether you'll be using sharded or unsharded collections in Azure Cosmos DB. The unsharded collection limit is 20 GB. Sharding, on the other hand, helps achieve horizontal scale that is critical to the performance of many workloads.
-* If using sharded collections, *do not assume that your MongoDB collection shard key becomes your Azure Cosmos DB collection shard key. Do not assume that your existing MongoDB data model/document structure is what you'll employ on Azure Cosmos DB.*
- * Shard key is the single most important setting for optimizing the scalability and performance of Azure Cosmos DB, and data modeling is the second most important. Both of these settings are immutable and cannot be changed once they are set; therefore it is highly important to optimize them in the planning phase. Follow the guidance in the [Immutable decisions](#immutable-decisions) section for more information.
-* Azure Cosmos DB does not recognize certain MongoDB collection types such as capped collections. For these resources, just create normal Azure Cosmos DB collections.
-* Azure Cosmos DB has two collection types of its own – shared and dedicated throughput. Shared vs dedicated throughput is another critical, immutable decision which it is vital to make in the planning phase. Follow the guidance in the [Immutable decisions](#immutable-decisions) section for more information.
+* Anticipate that each MongoDB database becomes an Azure Cosmos DB database.
+* Anticipate that each MongoDB collection becomes an Azure Cosmos DB collection.
+* Choose a naming convention for your Azure Cosmos DB resources. Keeping the same resource names is usually a fine choice, unless there are any changes in the structure of databases and collections.
+* Determine whether to use sharded or unsharded collections in Azure Cosmos DB. The unsharded collection limit is 20 GB. Sharding, on the other hand, helps achieve horizontal scale that is critical to the performance of many workloads (see the sketch after this list).
+* If using sharded collections, *don't assume that your MongoDB collection shard key becomes your Azure Cosmos DB container partition key*. Don't assume that your existing MongoDB data model document structure should be the same model you employ on Azure Cosmos DB.
+ * Shard key is the single most important setting for optimizing the scalability and performance of Azure Cosmos DB, and data modeling is the second most important. Both of these settings are immutable and can't be changed once they're set; therefore it's highly important to optimize them in the planning phase. Follow the guidance in the [Immutable decisions](#immutable-decisions) section for more information.
+* Azure Cosmos DB doesn't recognize certain MongoDB collection types such as capped collections. For these resources, just create normal Azure Cosmos DB collections.
+* Azure Cosmos DB has two collection types of its own: shared and dedicated throughput. Shared vs dedicated throughput is another critical, immutable decision that is vital to make in the planning phase. Follow the guidance in the [Immutable decisions](#immutable-decisions) section for more information.
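For sharded collections, one way to create the collection with its shard key up front is the `shardCollection` command, sketched here with the Node.js MongoDB driver. The database, collection, and shard key names are hypothetical.

```typescript
import { MongoClient } from "mongodb";

// Sketch: create a sharded (partitioned) collection by running the
// shardCollection command against Azure Cosmos DB's API for MongoDB.
async function createShardedCollection(uri: string): Promise<void> {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const db = client.db("retail");
    // The shard key is immutable once set, so choose it during planning.
    await db.command({
      shardCollection: "retail.orders",
      key: { userId: "hashed" },
    });
  } finally {
    await client.close();
  }
}
```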
### Immutable decisions
-The following Azure Cosmos DB configuration choices cannot be modified or undone once you have created an Azure Cosmos DB resource; therefore it is important to get these right during pre-migration planning, before you kick off any migrations:
-* Refer to [Partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md) to choose the best shard key. Partitioning, also known as Sharding, is a key point of consideration before migrating data. Azure Cosmos DB uses fully-managed partitioning to increase the capacity in a database to meet the storage and throughput requirements. This feature doesn't need the hosting or configuration of routing servers.
- * In a similar way, the partitioning capability automatically adds capacity and re-balances the data accordingly. For details and recommendations on choosing the right partition key for your data, please see the [Choosing a Partition Key article](../partitioning-overview.md#choose-partitionkey).
+The following Azure Cosmos DB configuration choices can't be modified or undone once you've created an Azure Cosmos DB resource; therefore it's important to get these configuration choices right during pre-migration planning, before you kick off any migrations:
+
+* Refer to [Partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md) to choose the best shard key. Partitioning, also known as sharding, is a key point of consideration before migrating data. Azure Cosmos DB uses fully managed partitioning to increase the capacity in a database to meet the storage and throughput requirements. This feature doesn't need the hosting or configuration of routing servers.
+ * In a similar way, the partitioning capability automatically adds capacity and rebalances the data accordingly. For more information on choosing the right partition key for your data, see [choosing a partition key](../partitioning-overview.md#choose-partitionkey).
* Follow the guide for [Data modeling in Azure Cosmos DB](../modeling-data.md) to choose a data model.
-* Follow [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md#optimize-by-provisioning-throughput-at-different-levels) to choose between dedicated and shared throughput for each resource that you will migrate
+* Follow [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md#optimize-by-provisioning-throughput-at-different-levels) to choose between dedicated and shared throughput for each resource that you migrate.
* [How to model and partition data on Azure Cosmos DB using a real-world example](../how-to-model-partition-example.md) is a real-world example of sharding and data modeling to aid you in your decision-making process.

### Cost of ownership
* In Azure Cosmos DB, the throughput is provisioned in advance and is measured in Request Units (RUs) per second. Unlike VMs or on-premises servers, RUs are easy to scale up and down at any time. You can change the number of provisioned RUs instantly. For more information, see [Request units in Azure Cosmos DB](../request-units.md).
-* You can use the [Azure Cosmos DB Capacity Calculator](https://cosmos.azure.com/capacitycalculator/) to determine the amount of Request Units based on your database account configuration, amount of data, document size, and required reads and writes per second.
+* You can use the [Azure Cosmos DB capacity calculator](https://cosmos.azure.com/capacitycalculator/) to determine the number of Request Units you should use. This number is based on your database account configuration, amount of data, document size, and required reads and writes per second.
* The following are key factors that affect the number of required RUs:
- * **Document size**: As the size of an item/document increases, the number of RUs consumed to read or write the item/document also increases.
+ * **Document size**: As the size of an item/document increases, the number of RUs consumed to read or write the item/document also increases.
- * **Document property count**:The number of RUs consumed to create or update a document is related to the number, complexity and length of its properties. You can reduce the request unit consumption for write operations by [limiting the number of indexed properties](indexing.md).
+ * **Document property count**: The number of RUs consumed to create or update a document is related to the number, complexity, and length of its properties. You can reduce the request unit consumption for write operations by [limiting the number of indexed properties](indexing.md).
- * **Query patterns**: The complexity of a query affects how many request units are consumed by the query.
+ * **Query patterns**: The complexity of a query affects how many request units the query consumes.
-* The best way to understand the cost of queries is to use sample data in Azure Cosmos DB, [and run sample queries from the MongoDB Shell](connect-account.md) using the `getLastRequestStastistics` command to get the request charge, which will output the number of RUs consumed:
+* The best way to understand the cost of queries is to use sample data in Azure Cosmos DB, [and run sample queries from the MongoDB Shell](connect-account.md) using the `getLastRequestStatistics` command to get the request charge, which outputs the number of RUs consumed (a driver-based sketch follows this list):
- `db.runCommand({getLastRequestStatistics: 1})`
+ ```bash
+ db.runCommand({getLastRequestStatistics: 1})
+ ```
- This command will output a JSON document similar to the following:
+ This command outputs a JSON document similar to the following example:
- `{ "_t": "GetRequestStatisticsResponse", "ok": 1, "CommandName": "find", "RequestCharge": 10.1, "RequestDurationInMilliSeconds": 7.2}`
+ ```json
+ {
+ "_t": "GetRequestStatisticsResponse",
+ "ok": 1,
+ "CommandName": "find",
+ "RequestCharge": 10.1,
+ "RequestDurationInMilliSeconds": 7.2
+ }
+ ```
-* You can also use [the diagnostic settings](../monitor-resource-logs.md) to understand the frequency and patterns of the queries executed against Azure Cosmos DB. The results from the diagnostic logs can be sent to a storage account, an EventHub instance or [Azure Log Analytics](../../azure-monitor/logs/log-analytics-tutorial.md).
+* You can also use [the diagnostic settings](../monitor-resource-logs.md) to understand the frequency and patterns of the queries executed against Azure Cosmos DB. The results from the diagnostic logs can be sent to a storage account, an Event Hubs instance, or [Azure Log Analytics](../../azure-monitor/logs/log-analytics-tutorial.md).
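From application code, the same request charge can be read by issuing the command through the driver. The following is a minimal sketch, assuming the Node.js MongoDB driver and placeholder database and collection names:

```typescript
import { MongoClient } from "mongodb";

// Sketch: run a sample operation, then ask Azure Cosmos DB for the RU
// charge of the last request on this connection.
async function measureRequestCharge(uri: string): Promise<number> {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const db = client.db("retail");
    await db.collection("orders").findOne({ userId: "Andrew" });
    const stats = await db.command({ getLastRequestStatistics: 1 });
    return stats.RequestCharge as number; // RUs consumed by the findOne above
  } finally {
    await client.close();
  }
}
```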
## Pre-migration logistics planning
-Finally, now that you have a view of your existing data estate and a design for your new Azure Cosmos DB data estate, you are ready to plan how to execute your migration process end-to-end. Once again, do your planning at a *per-resource* level, adding columns to your spreadsheet to capture the logistic dimensions below.
+Finally, now that you have a view of your existing data estate and a design for your new Azure Cosmos DB data estate, you're ready to plan how to execute your migration process end-to-end. Once again, do your planning at a *per-resource* level, adding columns to your spreadsheet to capture the logistic dimensions included in this section.
### Execution logistics
-* Assign responsibility for migrating each existing resource from MongoDB to Azure Cosmos DB. How you leverage your team resources in order to shepherd your migration to completion is up to you. For small migrations, you can have one team kick off the entire migration and monitor its progress. For larger migrations, you could assign responsibility to team-members on a per-resource basis for migrating and monitoring that resource.
+
+* Assign responsibility for migrating each existing resource from MongoDB to Azure Cosmos DB. How you apply your team resources to shepherd your migration to completion is up to you. For small migrations, you can have one team kick off the entire migration and monitor its progress. For larger migrations, you could assign responsibility to team members on a per-resource basis for migrating and monitoring that resource.
* Once you have assigned responsibility for migrating your resources, you should choose the right migration tool or tools. For small migrations, you might be able to use one migration tool such as a MongoDB native tool or Azure DMS to migrate all of your resources in one shot. For larger migrations or migrations with special requirements, you may want to choose migration tooling at a per-resource granularity.
- * Before you plan which migration tools to use, we recommend acquainting yourself with the options that are available. The [Azure Database Migration Service for Azure Cosmos DB's API for MongoDB](../../dms/tutorial-mongodb-cosmos-db.md) provides a mechanism that simplifies data migration by providing a fully managed hosting platform, migration monitoring options and automatic throttling handling. The full list of options are the following:
+ * Before you plan which migration tools to use, we recommend acquainting yourself with the options that are available. The [Azure Database Migration Service for Azure Cosmos DB's API for MongoDB](../../dms/tutorial-mongodb-cosmos-db.md) provides a mechanism that simplifies data migration by providing a fully managed hosting platform, migration monitoring options and automatic throttling handling. Here's a full list of options:
+
+ |**Migration type**|**Solution**|**Considerations**|
+ ||||
+ |Online|[Azure Database Migration Service](../../dms/tutorial-mongodb-cosmos-db-online.md)|&bull; Uses the [bulk executor library](../bulk-executor-overview.md) for Azure Cosmos DB <br />&bull; Suitable for large datasets and takes care of replicating live changes <br/>&bull; Works only with other MongoDB sources|
+ |Offline|[Azure Database Migration Service](../../dms/tutorial-mongodb-cosmos-db.md)|&bull; Uses the [bulk executor library](../bulk-executor-overview.md) for Azure Cosmos DB <br/>&bull; Suitable for large datasets and takes care of replicating live changes <br/>&bull; Works only with other MongoDB sources|
+ |Offline|[Azure Data Factory](../../data-factory/connector-azure-cosmos-db.md)|&bull; Uses the [bulk executor library](../bulk-executor-overview.md) for Azure Cosmos DB <br/>&bull; Suitable for large datasets <br/> &bull; Easy to set up and supports multiple sources <br/> &bull; Lack of checkpointing means that any issue during migration would require a restart of the whole migration process<br/>&bull; Lack of a dead letter queue would mean that a few erroneous files could stop the entire migration process <br/>&bull; Needs custom code to increase read throughput for certain data sources|
+ |Offline|[Existing Mongo Tools (mongodump, mongorestore, Studio3T)](tutorial-mongotools-cosmos-db.md)|&bull; Easy to set up and integration <br/>&bull; Needs custom handling for throttles|
+ |Offline/online|[Azure Databricks and Spark](migrate-databricks.md)|&bull; Full control of migration rate and data transformation <br/>&bull; Requires custom coding|
+
+ * If your resource can tolerate an offline migration, use this diagram to choose the appropriate migration tool:
- |**Migration type**|**Solution**|**Considerations**|
- ||||
- |Online|[Azure Database Migration Service](../../dms/tutorial-mongodb-cosmos-db-online.md)|&bull; Makes use of the Azure Cosmos DB bulk executor library <br/>&bull; Suitable for large datasets and takes care of replicating live changes <br/>&bull; Works only with other MongoDB sources|
- |Offline|[Azure Database Migration Service](../../dms/tutorial-mongodb-cosmos-db-online.md)|&bull; Makes use of the Azure Cosmos DB bulk executor library <br/>&bull; Suitable for large datasets and takes care of replicating live changes <br/>&bull; Works only with other MongoDB sources|
- |Offline|[Azure Data Factory](../../data-factory/connector-azure-cosmos-db.md)|&bull; Easy to set up and supports multiple sources <br/>&bull; Makes use of the Azure Cosmos DB bulk executor library <br/>&bull; Suitable for large datasets <br/>&bull; Lack of checkpointing means that any issue during the course of migration would require a restart of the whole migration process<br/>&bull; Lack of a dead letter queue would mean that a few erroneous files could stop the entire migration process <br/>&bull; Needs custom code to increase read throughput for certain data sources|
- |Offline|[Existing Mongo Tools (mongodump, mongorestore, Studio3T)](tutorial-mongotools-cosmos-db.md)|&bull; Easy to set up and integration <br/>&bull; Needs custom handling for throttles|
- |Offline/online|[Azure Databricks and Spark](migrate-databricks.md)|&bull; Full control of migration rate and data transformation <br/>&bull; Requires custom coding|
-
- * If your resource can tolerate an offline migration, use the diagram below to choose the appropriate migration tool:
+ ![Diagram of choosing an offline migration tool based on the size of the data.](./media/pre-migration-steps/offline-tools.png)
- ![Offline migration tools.](./media/pre-migration-steps/offline-tools.png)
+ * If your resource requires an online migration, use this diagram to choose the appropriate migration tool:
- * If your resource requires an online migration, use the diagram below to choose the appropriate migration tool:
+ ![Diagram of using online migration tools based on preference for turnkey or custom solutions.](./media/pre-migration-steps/online-tools.png)
- ![Online migration tools.](./media/pre-migration-steps/online-tools.png)
-
- Watch this video for an [overview and demo of the migration solutions](https://www.youtube.com/watch?v=WN9h80P4QJM) mentioned above.
+ * Watch the video for an [overview and demo of the migration solutions](https://www.youtube.com/watch?v=WN9h80P4QJM) mentioned in this section.
-* Once you have chosen migration tools for each resource, the next step is to prioritize the resources you will migrate. Good prioritization can help keep your migration on schedule. A good practice is to prioritize migrating those resources which need the most time to be moved; migrating these resources first will bring the greatest progress toward completion. Furthermore, since these time-consuming migrations typically involve more data, they are usually more resource-intensive for the migration tool and therefore are more likely to expose any problems with your migration pipeline early on. This minimizes the chance that your schedule will slip due to any difficulties with your migration pipeline.
-* Plan how you will monitor the progress of migration once it has started. If you are coordinating your data migration effort among a team, plan a regular cadence of team syncs too, so that you have a comprehensive view of how the high-priority migrations are going.
-
+* Once you have chosen migration tools for each resource, the next step is to prioritize the resources you'll migrate. Good prioritization can help keep your migration on schedule. A good practice is to prioritize migrating those resources that need the most time to move; migrating these resources first brings the greatest progress toward completion. Furthermore, since these time-consuming migrations typically involve more data, they're more resource-intensive for the migration tool and therefore are more likely to expose any problems with your migration pipeline early on. This practice minimizes the chance that your schedule slips due to any difficulties with your migration pipeline.
+* Plan how to monitor the progress of migration once it starts. If you're coordinating your data migration effort among a team, plan a regular cadence of team syncs too, so that you have a comprehensive view of how the high-priority migrations are going.
### Supported migration scenarios
-The best choice of MongoDB migration tool depends on your migration scenario.
+The best choice of MongoDB migration tool depends on your migration scenario.
#### Types of migrations
-The compatible tools for each migration scenario are shown below:
+Here's a list of compatible tools for each migration scenario:
-![Supported migration scenarios.](./media/pre-migration-steps/migration-tools-use-case-table.png)
+| Source | Destination | Process recommendation |
+| | | |
+| &bull; MongoDB on-premises cluster <br /> &bull; MongoDB on IaaS VM cluster <br /> &bull; MongoDB Atlas cluster - **Offline** | Azure Cosmos DB Mongo API | &bull; <10-GB data: MongoDB native tools <br /> &bull; <1-TB data: Azure DMS <br /> &bull; >1-TB data: Spark |
+| &bull; MongoDB on-premises cluster <br /> &bull; MongoDB on IaaS VM cluster <br /> &bull; MongoDB Atlas cluster - **Online** | Azure Cosmos DB Mongo API | &bull; <1-TB data: Azure DMS <br /> &bull; >1-TB data: Spark + Mongo Changestream |
+| &bull; Need to change schema during migration <br /> &bull; Need more flexibility than the aforementioned tools | Azure Cosmos DB Mongo API | &bull; ADF is more flexible than DMS; it supports schema modifications during migration and supports the most source/destination combinations <br /> &bull; DMS is better in terms of scale (for example, faster migration) |
+| &bull; JSON file | Azure Cosmos DB Mongo API | &bull; MongoDB native tools specifically **mongoimport** |
+| &bull; CSV file | Azure Cosmos DB Mongo API | &bull; MongoDB native tools specifically **mongoimport** |
+| &bull; BSON file | Azure Cosmos DB Mongo API | &bull; MongoDB native tools specifically **mongorestore** |
#### Tooling support for MongoDB versions
-Given that you are migrating from a particular MongoDB version, the supported tools are shown below:
+Given that you're migrating from a particular MongoDB version, the supported tools for each version are included here:
-![MongoDB versions supported by migration tools.](./media/pre-migration-steps/migration-tool-compatibility.png)
+| MongoDB source version | Azure Cosmos DB for MongoDB destination version | Supported tools | Unsupported tools |
+| | | | |
+| <2.x, >4.0 | 3.2, 3.6, 4.0 | MongoDB native tools, Spark | DMS, ADF |
+| 3.2, 3.6, 4.0 | 3.2, 3.6, 4.0 | MongoDB native tools, DMS, ADF, Spark | None |
### Post-migration
-In the pre-migration phase, spend some time to plan what steps you will take toward app migration and optimization post-migration.
-* In the post-migration phase, you will execute a cutover of your application to use Azure Cosmos DB instead of your existing MongoDB data estate.
-* Make your best effort to plan out indexing, global distribution, consistency, and other *mutable* Azure Cosmos DB properties at a per resource level - however, these Azure Cosmos DB configuration settings *can* be modified later, so expect to make adjustments to these settings down the road. Don't let these aspects be a cause of analysis paralysis. You will apply these mutable configurations post-migration.
+In the pre-migration phase, spend some time planning the steps to take toward app migration and optimization post-migration.
+
+* In the post-migration phase, you execute a cutover of your application to use Azure Cosmos DB instead of your existing MongoDB data estate.
+* Make your best effort to plan out indexing, global distribution, consistency, and other *mutable* Azure Cosmos DB properties at a per resource level. However, these Azure Cosmos DB configuration settings *can* be modified later, so expect to make adjustments to these settings later. You apply these mutable configurations post-migration.
* For a post-migration guide, see [Post-migration optimization steps when using Azure Cosmos DB's API for MongoDB](post-migration-optimization.md).

## Next steps
-* Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
* Migrate to Azure Cosmos DB for MongoDB
- * [Offline migration using MongoDB native tools](tutorial-mongotools-cosmos-db.md)
- * [Offline migration using Azure database migration service (DMS)](../../dms/tutorial-mongodb-cosmos-db.md)
- * [Online migration using Azure database migration service (DMS)](../../dms/tutorial-mongodb-cosmos-db-online.md)
- * [Offline/online migration using Azure Databricks and Spark](migrate-databricks.md)
-* [Post-migration guide](post-migration-optimization.md) - optimize steps once you have migrated to Azure Cosmos DB for MongoDB
-* [Provision throughput on Azure Cosmos DB containers and databases](../set-throughput.md)
-* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
-* [Global Distribution in Azure Cosmos DB](../distribute-data-globally.md)
-* [Indexing in Azure Cosmos DB](../index-overview.md)
-* [Request Units in Azure Cosmos DB](../request-units.md)
+ * [Offline migration using MongoDB native tools](tutorial-mongotools-cosmos-db.md)
+ * [Offline migration using Azure database migration service (DMS)](../../dms/tutorial-mongodb-cosmos-db.md)
+ * [Online migration using Azure database migration service (DMS)](../../dms/tutorial-mongodb-cosmos-db-online.md)
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
ms.devlang: javascript
Last updated 02/21/2023

# Quickstart - Azure Cosmos DB for NoSQL client library for Node.js
cosmos-db Troubleshoot Dotnet Sdk Request Header Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk-request-header-too-large.md
Title: Troubleshoot a "Request header too large" message or 400 bad request in Azure Cosmos DB
-description: Learn how to diagnose and fix the request header too large exception.
+ Title: Troubleshoot "request header too large" or "bad request"
+
+description: Learn how to diagnose and fix either the HTTP request header too large or bad request (400) exceptions.
Previously updated : 09/29/2021
Last updated : 02/27/2023
-# Diagnose and troubleshoot Azure Cosmos DB "Request header too large" message
+# Diagnose and troubleshoot "request header too large" or "bad request" messages in Azure Cosmos DB SDK for .NET
+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-The "Request header too large" message is thrown with an HTTP error code 400. This error occurs if the size of the request header has grown so large that it exceeds the maximum-allowed size. We recommend that you use the latest version of the SDK. Use at least version 3.x or 2.x, because these versions add header size tracing to the exception message.
+The "Request header too large" message is thrown with an HTTP error code 400. This error occurs if the size of the request header has grown so large that it exceeds the maximum-allowed size. We recommend that you use the latest version of the Azure Cosmos DB SDK for .NET. We recommend that you use version 3.x because this major version adds header size tracing to the exception message.
## Troubleshooting steps

The "Request header too large" message occurs if the session or the continuation token is too large. The following sections describe the cause of the issue and its solution in each category.

### Session token is too large
-#### Cause:
+This section reviews scenarios where the session token is too large.
+
+#### Cause
+A 400 bad request most likely occurs because the session token is too large. If the following statements are true, the session token is too large:
+
+* The error occurs on point operations like create, read, and update where there isn't a continuation token.
+* The exception started without making any changes to the application.
+
+The session token grows as the number of partitions increases in the container. The number of partitions increases as the amount of data increases or if the throughput is increased.
-#### Temporary mitigation:
-Restart your client application to reset all the session tokens. Eventually, the session token will grow back to the previous size that caused the issue. To avoid this issue completely, use the solution in the next section.
+#### Temporary mitigation
+
+Restart your client application to reset all the session tokens. Eventually, the session token grows back to the previous size that caused the issue. To avoid this issue completely, use the solution in the next section.
+
+#### Solution
-#### Solution:
> [!IMPORTANT]
> Upgrade to at least .NET [v3.20.1](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md) or [v2.16.1](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md). These minor versions contain optimizations to reduce the session token size to prevent the header from growing and hitting the size limit.

1. Follow the guidance in the [.NET v3](performance-tips-dotnet-sdk-v3.md) or [.NET v2](performance-tips.md) performance tips articles. Convert the application to use the direct connection mode with the Transmission Control Protocol (TCP). The direct connection mode with the TCP protocol doesn't have the header size restriction like the HTTP protocol, so it avoids this issue. Make sure to use the latest version of the SDK, which has a fix for query operations when the service interop isn't available.
1. If the direct connection mode with the TCP protocol isn't an option for your workload, mitigate it by changing the [client consistency level](how-to-manage-consistency.md). The session token is only used for session consistency, which is the default consistency level for Azure Cosmos DB. Other consistency levels don't use the session token.

### Continuation token is too large
-#### Cause:
-The 400 bad request occurs on query operations where the continuation token is used if the continuation token has grown too large or if different queries have different continuation token sizes.
-
-#### Solution:
-1. Follow the guidance in the [.NET v3](performance-tips-dotnet-sdk-v3.md) or [.NET v2](performance-tips.md) performance tips articles. Convert the application to use the direct connection mode with the TCP protocol. The direct connection mode with the TCP protocol doesn't have the header size restriction like the HTTP protocol, so it avoids this issue.
+This section reviews scenarios where the continuation token is too large.
+
+#### Cause
+
+The 400 bad request occurs on query operations where the continuation token is used if the token has grown too large. This error can also occur if different queries have different continuation token sizes.
+
+#### Solution
+
+1. Follow the guidance in the [.NET v3](performance-tips-dotnet-sdk-v3.md) or [.NET v2](performance-tips.md) performance tips articles. Convert the application to use the direct connection mode with the TCP protocol. The direct connection mode with the TCP protocol doesn't have the header size restriction like the HTTP protocol, so it avoids this issue.
1. If the direct connection mode with the TCP protocol isn't an option for your workload, set the `ResponseContinuationTokenLimitInKb` option. You can find this option in `FeedOptions` in v2 or `QueryRequestOptions` in v3.

## Next steps

* [Diagnose and troubleshoot](troubleshoot-dotnet-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3.md) and [.NET v2](performance-tips.md).
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Title: Partitioning and horizontal scaling in Azure Cosmos DB
-description: Learn about partitioning, logical, physical partitions in Azure Cosmos DB, best practices when choosing a partition key, and how to manage logical partitions
+ Title: Partitioning and horizontal scaling
+
+description: Learn about partitioning, logical and physical partitions in Azure Cosmos DB, best practices when choosing a partition key, and how to manage logical partitions.
Previously updated : 03/24/2022
Last updated : 02/27/2023

# Partitioning and horizontal scaling in Azure Cosmos DB

[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Azure Cosmos DB uses partitioning to scale individual containers in a database to meet the performance needs of your application. In partitioning, the items in a container are divided into distinct subsets called *logical partitions*. Logical partitions are formed based on the value of a *partition key* that is associated with each item in a container. All the items in a logical partition have the same partition key value.
+Azure Cosmos DB uses partitioning to scale individual containers in a database to meet the performance needs of your application. The items in a container are divided into distinct subsets called *logical partitions*. Logical partitions are formed based on the value of a *partition key* that is associated with each item in a container. All the items in a logical partition have the same partition key value.
For example, a container holds items. Each item has a unique value for the `UserID` property. If `UserID` serves as the partition key for the items in the container and there are 1,000 unique `UserID` values, 1,000 logical partitions are created for the container.
In addition to a partition key that determines the item's logical partition, each item in a container has an *item ID* that is unique within a logical partition.
This article explains the relationship between logical and physical partitions. It also discusses best practices for partitioning and gives an in-depth view at how horizontal scaling works in Azure Cosmos DB. It's not necessary to understand these internal details to select your partition key but we've covered them so you have clarity on how Azure Cosmos DB works.

## Logical partitions

A logical partition consists of a set of items that have the same partition key. For example, in a container that contains data about food nutrition, all items contain a `foodGroup` property. You can use `foodGroup` as the partition key for the container. Groups of items that have specific values for `foodGroup`, such as `Beef Products`, `Baked Products`, and `Sausages and Luncheon Meats`, form distinct logical partitions.
-A logical partition also defines the scope of database transactions. You can update items within a logical partition by using a [transaction with snapshot isolation](database-transactions-optimistic-concurrency.md). When new items are added to a container, new logical partitions are transparently created by the system. You don't have to worry about deleting a logical partition when the underlying data is deleted.
+A logical partition also defines the scope of database transactions. You can update items within a logical partition by using a [transaction with snapshot isolation](database-transactions-optimistic-concurrency.md). When new items are added to a container, the system transparently creates new logical partitions. You don't have to worry about deleting a logical partition when the underlying data is deleted.
-There's no limit to the number of logical partitions in your container. Each logical partition can store up to 20 GB of data. Good partition key choices have a wide range of possible values. For example, in a container where all items contain a `foodGroup` property, the data within the `Beef Products` logical partition can grow up to 20 GB. [Selecting a partition key](#choose-partitionkey) with a wide range of possible values ensures that the container is able to scale.
+There's no limit to the number of logical partitions in your container. Each logical partition can store up to 20 GB of data. Good partition key choices have a wide range of possible values. For example, in a container where all items contain a `foodGroup` property, the data within the `Beef Products` logical partition can grow up to 20 GB. [Selecting a partition key](#choose-partitionkey) with a wide range of possible values ensures that the container is able to scale.
You can use Azure Monitor Alerts to [monitor if a logical partition's size is approaching 20 GB](how-to-alert-on-logical-partition-key-storage-size.md). ## Physical partitions
-A container is scaled by distributing data and throughput across physical partitions. Internally, one or more logical partitions are mapped to a single physical partition. Typically smaller containers have many logical partitions but they only require a single physical partition. Unlike logical partitions, physical partitions are an internal implementation of the system and they're entirely managed by Azure Cosmos DB.
+A container is scaled by distributing data and throughput across physical partitions. Internally, one or more logical partitions are mapped to a single physical partition. Typically smaller containers have many logical partitions but they only require a single physical partition. Unlike logical partitions, physical partitions are an internal implementation of the system and Azure Cosmos DB entirely manages physical partitions.
-The number of physical partitions in your container depends on the following:
+The number of physical partitions in your container depends on the following characteristics:
* The amount of throughput provisioned (each individual physical partition can provide a throughput of up to 10,000 request units per second). The 10,000 RU/s limit for physical partitions implies that logical partitions also have a 10,000 RU/s limit, as each logical partition is only mapped to one physical partition.
-* The total data storage (each individual physical partition can store up to 50GB data).
+* The total data storage (each individual physical partition can store up to 50 GB of data).
> [!NOTE]
> Physical partitions are an internal implementation of the system and they are entirely managed by Azure Cosmos DB. When developing your solutions, don't focus on physical partitions because you can't control them. Instead, focus on your partition keys. If you choose a partition key that evenly distributes throughput consumption across logical partitions, you will ensure that throughput consumption across physical partitions is balanced.
-There's no limit to the total number of physical partitions in your container. As your provisioned throughput or data size grows, Azure Cosmos DB will automatically create new physical partitions by splitting existing ones. Physical partition splits do not impact your application's availability. After the physical partition split, all data within a single logical partition will still be stored on the same physical partition. A physical partition split simply creates a new mapping of logical partitions to physical partitions.
+There's no limit to the total number of physical partitions in your container. As your provisioned throughput or data size grows, Azure Cosmos DB automatically creates new physical partitions by splitting existing ones. Physical partition splits don't affect your application's availability. After the physical partition split, all data within a single logical partition will still be stored on the same physical partition. A physical partition split simply creates a new mapping of logical partitions to physical partitions.
Throughput provisioned for a container is divided evenly among physical partitions. A partition key design that doesn't distribute requests evenly might result in too many requests directed to a small subset of partitions that become "hot." Hot partitions lead to inefficient use of provisioned throughput, which might result in rate-limiting and higher costs. You can see your container's physical partitions in the **Storage** section of the **Metrics blade** of the Azure portal:
-In the above screenshot, a container has `/foodGroup` as the partition key. Each of the three bars in the graph represents a physical partition. In the image, **partition key range** is the same as a physical partition. The selected physical partition contains the top 3 most significant size logical partitions: `Beef Products`, `Vegetable and Vegetable Products`, and `Soups, Sauces, and Gravies`.
+For example, consider a container with the path `/foodGroup` specified as the partition key. The container could have any number of physical partitions, but in this example we assume it has three. A single physical partition could contain multiple partition keys. As an example, the largest physical partition could contain the top three most significant size logical partitions: `Beef Products`, `Vegetable and Vegetable Products`, and `Soups, Sauces, and Gravies`.
-If you provision a throughput of 18,000 request units per second (RU/s), then each of the three physical partitions can utilize 1/3 of the total provisioned throughput. Within the selected physical partition, the logical partition keys `Beef Products`, `Vegetable and Vegetable Products`, and `Soups, Sauces, and Gravies` can, collectively, utilize the physical partition's 6,000 provisioned RU/s. Because provisioned throughput is evenly divided across your container's physical partitions, it's important to choose a partition key that evenly distributes throughput consumption by [choosing the right logical partition key](#choose-partitionkey).
+If you assign a throughput of 18,000 request units per second (RU/s), then each of the three physical partitions can utilize 1/3 of the total provisioned throughput. Within the selected physical partition, the logical partition keys `Beef Products`, `Vegetable and Vegetable Products`, and `Soups, Sauces, and Gravies` can, collectively, utilize the physical partition's 6,000 provisioned RU/s. Because provisioned throughput is evenly divided across your container's physical partitions, it's important to choose a partition key that evenly distributes throughput consumption. For more information, see [choosing the right logical partition key](#choose-partitionkey).
## Managing logical partitions
Transactions (in stored procedures or triggers) are allowed only against items in a single logical partition.
## Replica sets
-Each physical partition consists of a set of replicas, also referred to as a [*replica set*](global-dist-under-the-hood.md). Each replica hosts an instance of the database engine. A replica set makes the data stored within the physical partition durable, highly available, and consistent. Each replica that makes up the physical partition inherits the partition's storage quota. All replicas of a physical partition collectively support the throughput that's allocated to the physical partition. Azure Cosmos DB automatically manages replica sets.
+Each physical partition consists of a set of replicas, also referred to as a [*replica set*](global-dist-under-the-hood.md). Each replica hosts an instance of the database engine. A replica set makes the data stored within the physical partition durable, highly available, and consistent. Each replica that makes up the physical partition inherits the partition's storage quota. All replicas of a physical partition collectively support the throughput that's allocated to the physical partition. Azure Cosmos DB automatically manages replica sets.
-Typically, smaller containers only require a single physical partition, but they will still have at least 4 replicas.
+Typically, smaller containers only require a single physical partition, but they still have at least four replicas.
The following image shows how logical partitions are mapped to physical partitions that are distributed globally. [Partition set](global-dist-under-the-hood.md#partition-sets) in the image refers to a group of physical partitions that manage the same logical partition keys across multiple regions:
A partition key has two components: **partition key path** and the **partition key value**. For example, consider an item `{ "userId" : "Andrew", "worksFor": "Microsoft" }` if you choose "userId" as the partition key, the following are the two partition key components:
-* The partition key path (For example: "/userId"). The partition key path accepts alphanumeric and underscore (_) characters. You can also use nested objects by using the standard path notation(/).
+* The partition key path (for example, "/userId"). The partition key path accepts alphanumeric and underscore (_) characters. You can also use nested objects by using the standard path notation (/).
* The partition key value (For example: "Andrew"). The partition key value can be of string or numeric types. To learn about the limits on throughput, storage, and length of the partition key, see the [Azure Cosmos DB service quotas](concepts-limits.md) article.
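As an illustration of both components, the following sketch uses the JavaScript SDK (`@azure/cosmos`) to create a container whose partition key path is "/userId"; the endpoint, key, and resource names are placeholders.

```typescript
import { CosmosClient } from "@azure/cosmos";

// A minimal sketch, assuming placeholder credentials and names.
async function createContainer(): Promise<void> {
  const client = new CosmosClient({
    endpoint: "https://<your-account>.documents.azure.com:443/",
    key: "<your-key>",
  });
  const { database } = await client.databases.createIfNotExists({ id: "app-db" });
  await database.containers.createIfNotExists({
    id: "users",
    partitionKey: { paths: ["/userId"] }, // partition key path
  });
  // An item such as { "userId": "Andrew", ... } then has
  // the partition key value "Andrew".
}
```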
-Selecting your partition key is a simple but important design choice in Azure Cosmos DB. Once you select your partition key, it is not possible to change it in-place. If you need to change your partition key, you should move your data to a new container with your new desired partition key. ([Container copy jobs](intra-account-container-copy.md) help with this process.)
+Selecting your partition key is a simple but important design choice in Azure Cosmos DB. Once you select your partition key, it isn't possible to change it in-place. If you need to change your partition key, you should move your data to a new container with your new desired partition key. ([Container copy jobs](intra-account-container-copy.md) help with this process.)
For **all** containers, your partition key should:
-* Be a property that has a value which does not change. If a property is your partition key, you can't update that property's value.
+* Be a property that has a value that doesn't change. If a property is your partition key, you can't update that property's value.
* Should only contain `String` values; numbers should ideally be converted into a `String` if there's any chance that they are outside the boundaries of double precision numbers according to [IEEE 754 binary64](https://www.rfc-editor.org/rfc/rfc8259#ref-IEEE754). The [Json specification](https://www.rfc-editor.org/rfc/rfc8259#section-6) calls out the reasons why using numbers outside of this boundary in general is a bad practice due to likely interoperability problems. These concerns are especially relevant for the partition key column, because it's immutable and requires data migration to change it later (see the sketch after this list).
-* Spread request unit (RU) consumption and data storage evenly across all logical partitions. This ensures even RU consumption and storage distribution across your physical partitions.
+* Spread request unit (RU) consumption and data storage evenly across all logical partitions. This spread ensures even RU consumption and storage distribution across your physical partitions.
-* Have values that are no larger than 2048 bytes typically, or 101 bytes if large partition keys are not enabled. For more information, see [large partition keys](large-partition-keys.md)
+* Have values that are no larger than 2048 bytes typically, or 101 bytes if large partition keys aren't enabled. For more information, see [large partition keys](large-partition-keys.md)
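As a small sketch of the `String` guidance above, the following hypothetical helper converts a 64-bit numeric identifier to a string before it's stored in the partition key property:

```typescript
// Store a 64-bit numeric identifier as a string in the partition key
// property, since values outside the IEEE 754 double range can silently
// lose precision in JSON.
interface Order {
  id: string;
  accountId: string; // partition key property, kept as a string
}

// rawAccountId is hypothetical; BigInt preserves full 64-bit precision,
// and toString() sidesteps the double-precision boundary entirely.
function toOrder(rawAccountId: bigint, id: string): Order {
  return { id, accountId: rawAccountId.toString() };
}
```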
-If you need [multi-item ACID transactions](database-transactions-optimistic-concurrency.md#multi-item-transactions) in Azure Cosmos DB, you will need to use [stored procedures or triggers](how-to-write-stored-procedures-triggers-udfs.md#stored-procedures). All JavaScript-based stored procedures and triggers are scoped to a single logical partition.
+If you need [multi-item ACID transactions](database-transactions-optimistic-concurrency.md#multi-item-transactions) in Azure Cosmos DB, you need to use [stored procedures or triggers](how-to-write-stored-procedures-triggers-udfs.md#stored-procedures). All JavaScript-based stored procedures and triggers are scoped to a single logical partition.
> [!NOTE]
-> If you only have one physical partition, the value of the partition key may not be relevant as all queries will target the same physical partition.
+> If you only have one physical partition, the value of the partition key may not be relevant as all queries will target the same physical partition.
## Partition keys for read-heavy containers
-For most containers, the above criteria is all you need to consider when picking a partition key. For large read-heavy containers, however, you might want to choose a partition key that appears frequently as a filter in your queries. Queries can be [efficiently routed to only the relevant physical partitions](how-to-query-container.md#in-partition-query) by including the partition key in the filter predicate.
+For most containers, the above criteria are all you need to consider when picking a partition key. For large read-heavy containers, however, you might want to choose a partition key that appears frequently as a filter in your queries. Queries can be [efficiently routed to only the relevant physical partitions](how-to-query-container.md#in-partition-query) by including the partition key in the filter predicate.
-If most of your workload's requests are queries and most of your queries have an equality filter on the same property, this property can be a good partition key choice. For example, if you frequently run a query that filters on `UserID`, then selecting `UserID` as the partition key would reduce the number of [cross-partition queries](how-to-query-container.md#avoiding-cross-partition-queries).
+This property can be a good partition key choice if most of your workload's requests are queries and most of your queries have an equality filter on the same property. For example, if you frequently run a query that filters on `UserID`, then selecting `UserID` as the partition key would reduce the number of [cross-partition queries](how-to-query-container.md#avoiding-cross-partition-queries).
-However, if your container is small, you probably don't have enough physical partitions to need to worry about the performance impact of cross-partition queries. Most small containers in Azure Cosmos DB only require one or two physical partitions.
+However, if your container is small, you probably don't have enough physical partitions to need to worry about the performance of cross-partition queries. Most small containers in Azure Cosmos DB only require one or two physical partitions.
-If your container could grow to more than a few physical partitions, then you should make sure you pick a partition key that minimizes cross-partition queries. Your container will require more than a few physical partitions when either of the following are true:
+If your container could grow to more than a few physical partitions, then you should make sure you pick a partition key that minimizes cross-partition queries. Your container requires more than a few physical partitions when either of the following are true:
-* Your container will have over 30,000 RU's provisioned
+* Your container has over 30,000 RUs provisioned
-* Your container will store over 100 GB of data
+* Your container stores over 100 GB of data
## Use item ID as the partition key

If your container has a property that has a wide range of possible values, it's likely a great partition key choice. One possible example of such a property is the *item ID*. For small read-heavy containers or write-heavy containers of any size, the *item ID* is naturally a great choice for the partition key.
-The system property *item ID* exists in every item in your container. You may have other properties that represent a logical ID of your item. In many cases, these are also great partition key choices for the same reasons as the *item ID*.
+The system property *item ID* exists in every item in your container. You may have other properties that represent a logical ID of your item. In many cases, these IDs are also great partition key choices for the same reasons as the *item ID*.
The *item ID* is a great partition key choice for the following reasons:

* There are a wide range of possible values (one unique *item ID* per item).
* Because there's a unique *item ID* per item, the *item ID* does a great job at evenly balancing RU consumption and data storage.
-* You can easily do efficient point reads since you'll always know an item's partition key if you know its *item ID*.
+* You can easily do efficient point reads since you always know an item's partition key if you know its *item ID* (see the sketch after the considerations below).
Some things to consider when selecting the *item ID* as the partition key include:
-* If the *item ID* is the partition key, it will become a unique identifier throughout your entire container. You won't be able to have items that have a duplicate *item ID*.
-* If you have a read-heavy container that has a lot of [physical partitions](partitioning-overview.md#physical-partitions), queries will be more efficient if they have an equality filter with the *item ID*.
-* You can't run stored procedures or triggers across multiple logical partitions.
+* If the *item ID* is the partition key, it becomes a unique identifier throughout your entire container. You can't create items that have duplicate *item IDs*.
+* If you have a read-heavy container with many [physical partitions](partitioning-overview.md#physical-partitions), queries are more efficient if they have an equality filter with the *item ID*.
+* You can't run stored procedures or triggers that target multiple logical partitions.
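Here's a minimal sketch of the efficient point read described above, assuming a container whose partition key path is "/id" and placeholder resource names:

```typescript
import { CosmosClient } from "@azure/cosmos";

// With "/id" as the partition key, an item's ID is also its partition key
// value, so knowing the ID is enough for a cheap single-partition read.
async function pointRead(client: CosmosClient, itemId: string): Promise<void> {
  const container = client.database("app-db").container("events");
  const { resource, requestCharge } = await container.item(itemId, itemId).read();
  console.log(`Read item ${resource?.id} for ${requestCharge} RUs`);
}
```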
## Next steps

* Learn about [provisioned throughput in Azure Cosmos DB](request-units.md).
* Learn about [global distribution in Azure Cosmos DB](distribute-data-globally.md).
-* Learn how to [provision throughput on an Azure Cosmos DB container](how-to-provision-container-throughput.md).
-* Learn how to [provision throughput on an Azure Cosmos DB database](how-to-provision-database-throughput.md).
* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/training/modules/model-partition-data-azure-cosmos-db/)
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/request-units.md
Title: Request Units as a throughput and performance currency in Azure Cosmos DB
-description: Learn about how to specify and estimate Request Unit requirements in Azure Cosmos DB
+ Title: Request Units as a throughput and performance currency
+
+description: Learn how request units function as a currency in Azure Cosmos DB and how to specify and estimate Request Unit requirements.
Previously updated : 03/24/2022
Last updated : 02/27/2023

# Request Units in Azure Cosmos DB

[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]

Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request unit is a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.
+Azure Cosmos DB normalizes the cost of all database operations using Request Units (or RUs, for short). A request unit is a performance currency that abstracts the system resources, such as CPU, IOPS, and memory, required to perform the database operations that Azure Cosmos DB supports.
-The cost to do a point read (fetching a single item by its ID and partition key value) for a 1-KB item is 1 Request Unit (or 1 RU). All other database operations are similarly assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos DB container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs.
+The cost to do a point read (fetching a single item by its ID and partition key value) for a 1-KB item is one Request Unit (or one RU). All other database operations are similarly assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos DB container, RUs measure the actual costs of using that API. Whether the database operation is a write, point read, or query, costs are always measured in RUs.
->
-> [!VIDEO https://aka.ms/docs.essential-request-units]
+> [!VIDEO https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=772fba63-62c7-488c-acdb-a8f686a3b5f4]
The following image shows the high-level idea of RUs: :::image type="content" source="./media/request-units/request-units.png" alt-text="Database operations consume Request Units" border="false":::
-To manage and plan capacity, Azure Cosmos DB ensures that the number of RUs for a given database operation over a given dataset is deterministic. You can examine the response header to track the number of RUs that are consumed by any database operation. When you understand the [factors that affect RU charges](request-units.md#request-unit-considerations) and your application's throughput requirements, you can run your application cost effectively.
+To manage and plan capacity, Azure Cosmos DB ensures that the number of RUs for a given database operation over a given dataset is deterministic. You can examine the response header to track the number of RUs consumed by any database operation. When you understand the [factors that affect RU charges](request-units.md#request-unit-considerations) and your application's throughput requirements, you can run your application cost effectively.
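You can read the request charge from the `x-ms-request-charge` response header after any operation. A minimal sketch, assuming the azure-cosmos Python SDK (the endpoint, key, and names are placeholders):

```python
from azure.cosmos import CosmosClient

client = CosmosClient("<account-endpoint>", credential="<account-key>")
container = client.get_database_client("appdb").get_container_client("items")

container.read_item(item="item-1", partition_key="item-1")

# The x-ms-request-charge header reports the RUs the last operation consumed.
charge = container.client_connection.last_response_headers["x-ms-request-charge"]
print(f"Request charge: {charge} RUs")  # roughly 1 RU for a 1-KB point read
```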
The type of Azure Cosmos DB account you're using determines the way consumed RUs get charged. There are three modes in which you can create an account:
-1. **Provisioned throughput mode**: In this mode, you provision the number of RUs for your application on a per-second basis in increments of 100 RUs per second. To scale the provisioned throughput for your application, you can increase or decrease the number of RUs at any time in increments or decrements of 100 RUs. You can make your changes either programmatically or by using the Azure portal. You are billed on an hourly basis for the number of RUs per second you have provisioned. To learn more, see the [Provisioned throughput](set-throughput.md) article.
+1. **Provisioned throughput mode**: In this mode, you assign the number of RUs for your application on a per-second basis in increments of 100 RUs per second. To scale the provisioned throughput for your application, you can increase or decrease the number of RUs at any time in increments or decrements of 100 RUs. You can make your changes either programmatically or by using the Azure portal. You're billed on an hourly basis for the number of RUs per second you've provisioned. To learn more, see the [Provisioned throughput](set-throughput.md) article.
- You can provision throughput at two distinct granularities:
+ You can assign throughput at two distinct granularities:
- * **Containers**: For more information, see [Provision throughput on an Azure Cosmos DB container](how-to-provision-container-throughput.md).
- * **Databases**: For more information, see [Provision throughput on an Azure Cosmos DB database](how-to-provision-database-throughput.md).
+ * **Containers**: For more information, see [Assign throughput to an Azure Cosmos DB container](how-to-provision-container-throughput.md).
+ * **Databases**: For more information, see [Assign throughput to an Azure Cosmos DB database](how-to-provision-database-throughput.md).
-2. **Serverless mode**: In this mode, you don't have to provision any throughput when creating resources in your Azure Cosmos DB account. At the end of your billing period, you get billed for the number of Request Units that has been consumed by your database operations. To learn more, see the [Serverless throughput](serverless.md) article.
+2. **Serverless mode**: In this mode, you don't have to assign any throughput when creating resources in your Azure Cosmos DB account. At the end of your billing period, you get billed for the number of Request Units consumed by your database operations. To learn more, see the [Serverless throughput](serverless.md) article.
-3. **Autoscale mode**: In this mode, you can automatically and instantly scale the throughput (RU/s) of your database or container based on its usage, without impacting the availability, latency, throughput, or performance of the workload. This mode is well suited for mission-critical workloads that have variable or unpredictable traffic patterns, and require SLAs on high performance and scale. To learn more, see the [autoscale throughput](provision-throughput-autoscale.md) article.
+3. **Autoscale mode**: In this mode, you can automatically and instantly scale the throughput (RU/s) of your database or container based on its usage. This scaling operation doesn't affect the availability, latency, throughput, or performance of the workload. This mode is well suited for mission-critical workloads that have variable or unpredictable traffic patterns, and require SLAs on high performance and scale. To learn more, see the [autoscale throughput](provision-throughput-autoscale.md) article. For a programmatic sketch of assigning throughput in the provisioned and autoscale modes, see the example after this list.
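The following minimal sketch shows one way to assign throughput at container creation, assuming the azure-cosmos Python SDK (v4); the endpoint, key, and names are placeholders:

```python
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

client = CosmosClient("<account-endpoint>", credential="<account-key>")
database = client.create_database_if_not_exists("appdb")

# Provisioned throughput mode: assign a fixed 400 RU/s to the container.
manual = database.create_container_if_not_exists(
    id="items-manual",
    partition_key=PartitionKey(path="/id"),
    offer_throughput=400,
)

# Autoscale mode: the container scales automatically between 10% of the
# maximum (400 RU/s here) and the 4,000 RU/s maximum.
autoscale = database.create_container_if_not_exists(
    id="items-autoscale",
    partition_key=PartitionKey(path="/id"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=4000),
)
```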
## Request Unit considerations
While you estimate the number of RUs consumed by your workload, consider the fol
* **Data consistency**: The strong and bounded staleness consistency levels consume approximately twice as many RUs for read operations as the other, more relaxed consistency levels.
-* **Type of reads**: Point reads cost significantly fewer RUs than queries.
+* **Type of reads**: Point reads cost fewer RUs than queries.
+
+* **Query patterns**: The complexity of a query affects how many RUs are consumed for an operation. Factors that affect the cost of query operations include:
-* **Query patterns**: The complexity of a query affects how many RUs are consumed for an operation. Factors that affect the cost of query operations include:
-
* The number of query results * The number of predicates * The nature of the predicates
While you estimate the number of RUs consumed by your workload, consider the fol
* The size of the result set * Projections
- The same query on the same data will always cost the same number of RUs on repeated executions.
+ The same query on the same data always costs the same number of RUs on repeated executions.
* **Script usage**: As with queries, stored procedures and triggers consume RUs based on the complexity of the operations that are performed. As you develop your application, inspect the [request charge header](./optimize-cost-reads-writes.md#measuring-the-ru-charge-of-a-request) to better understand how much RU capacity each operation consumes. ## Request units and multiple regions
-If you provision *'R'* RUs on an Azure Cosmos DB container (or database), Azure Cosmos DB ensures that *'R'* RUs are available in *each* region associated with your Azure Cosmos DB account. You can't selectively assign RUs to a specific region. The RUs provisioned on an Azure Cosmos DB container (or database) are provisioned in all the regions associated with your Azure Cosmos DB account.
+If you assign *'R'* RUs on an Azure Cosmos DB container (or database), Azure Cosmos DB ensures that *'R'* RUs are available in *each* region associated with your Azure Cosmos DB account. You can't selectively assign RUs to a specific region. The RUs provisioned on an Azure Cosmos DB container (or database) are provisioned in all the regions associated with your Azure Cosmos DB account.
Assuming that an Azure Cosmos DB container is configured with *'R'* RUs and there are *'N'* regions associated with the Azure Cosmos DB account, the total RUs available globally on the container = *R* x *N*. For example, a container assigned 10,000 RU/s in an account with three regions has 10,000 RU/s available in each region, or 30,000 RU/s globally.
Your choice of [consistency model](consistency-levels.md) also affects the throu
## Next steps -- Learn more about how to [provision throughput on Azure Cosmos DB containers and databases](set-throughput.md).-- Learn more about [serverless on Azure Cosmos DB](serverless.md).-- Learn more about [logical partitions](./partitioning-overview.md).-- Learn how to [provision throughput on an Azure Cosmos DB container](how-to-provision-container-throughput.md).-- Learn how to [provision throughput on an Azure Cosmos DB database](how-to-provision-database-throughput.md).-- Learn how to [find the request unit charge for an operation](find-request-unit-charge.md).-- Learn how to [optimize provisioned throughput cost in Azure Cosmos DB](optimize-cost-throughput.md).-- Learn how to [optimize reads and writes cost in Azure Cosmos DB](optimize-cost-reads-writes.md).-- Learn how to [optimize query cost in Azure Cosmos DB](./optimize-cost-reads-writes.md).-- Learn how to [use metrics to monitor throughput](use-metrics.md).-- Trying to do capacity planning for a migration to Azure Cosmos DB?
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* Learn more about how to [assign throughput on Azure Cosmos DB containers and databases](set-throughput.md).
+* Learn more about [serverless on Azure Cosmos DB](serverless.md).
+* Learn more about [logical partitions](./partitioning-overview.md).
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 01/04/2023 Last updated : 02/27/2023
EA admins and department administrators use departments to organize and report o
A department administrator can add new accounts to their departments. They can also remove accounts from their departments, but not from the enrollment.
-Check out the [Manage departments in the Azure portal](https://www.youtube.com/watch?v=NUlRrJFF1_U) video.
+Check out the [Manage departments in the Azure portal](https://www.youtube.com/watch?v=vs3wIeRDK4Q) video.
>[!VIDEO https://www.youtube.com/embed/vs3wIeRDK4Q]
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
The following sections provide details about properties that are used to define
This Blob storage connector supports the following authentication types. See the corresponding sections for details.
+- [Anonymous authentication](#anonymous-authentication)
- [Account key authentication](#account-key-authentication) - [Shared access signature authentication](#shared-access-signature-authentication) - [Service principal authentication](#service-principal-authentication)
This Blob storage connector supports the following authentication types. See the
>[!NOTE] >Azure HDInsight and Azure Machine Learning activities only support authentication that uses Azure Blob Storage account keys.
+### Anonymous authentication
+
+The following properties are supported for anonymous authentication in Azure Data Factory or Synapse pipelines:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The `type` property must be set to `AzureBlobStorage` (suggested) or `AzureStorage` (see the following notes). | Yes |
+| containerUri | Specify the URI of the Azure Blob container that has anonymous read access enabled, in the format `https://<AccountName>.blob.core.windows.net/<ContainerName>`. For details, see [Configure anonymous public read access for containers and blobs](/azure/storage/blobs/anonymous-read-access-configure#set-the-public-access-level-for-a-container). | Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No |
+
+**Example:**
+
+```json
+
+{
+ "name": "AzureBlobStorageAnonymous",
+ "properties": {
+ "annotations": [],
+ "type": "AzureBlobStorage",
+ "typeProperties": {
+ "containerUri": "https:// <accountname>.blob.core.windows.net/ <containername>",
+ "authenticationType": "Anonymous"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+**UI example**:
+
+The UI experience looks like the following sample, which uses an Azure open dataset as the source. To read the open [dataset bing_covid-19_data.csv](https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.csv), set **Authentication type** to **Anonymous** and fill in the container URI with `https://pandemicdatalake.blob.core.windows.net/public`.
+++ ### Account key authentication The following properties are supported for storage account key authentication in Azure Data Factory or Synapse pipelines:
To learn details about the properties, check [Delete activity](delete-activity.m
## Change data capture
-Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture ** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. Pleaser refer to [Change Data Capture](concepts-change-data-capture.md) for detials.
+Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. Please refer to [Change Data Capture](concepts-change-data-capture.md) for details.
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md
This Microsoft 365 (Office 365) connector is supported for the following capabil
|| --| |[Copy activity](copy-activity-overview.md) (source/-)|&#9312;| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312;|
-|[Lookup activity](control-flow-lookup-activity.md) (source/-)|&#9312;|
<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
To create a mapping data flow using the Microsoft 365 connector as a source, com
6. On the **Data preview** tab, select **Refresh** to fetch a sample dataset for validation.
-## Lookup activity properties
-
-To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
- ## Next steps For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory How Does Managed Airflow Work https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-does-managed-airflow-work.md
You'll need to upload a sample DAG onto an accessible Storage account.
### Steps to import 1. Copy-paste the content (either v2.x or v1.10, based on the Airflow environment that you have set up) into a new file called **tutorial.py**; a minimal DAG sketch follows these steps.
- Upload the **tutorial.py** to a blob storage. ([How to upload a file into blob](/storage/blobs/storage-quickstart-blobs-portal.md))
+ Upload the **tutorial.py** to a blob storage. ([How to upload a file into blob](/azure/storage/blobs/storage-quickstart-blobs-portal))
> [!NOTE] > You will need to select a directory path from a blob storage account that contains folders named **dags** and **plugins** to import those into the Airflow environment. **Plugins** are not mandatory. You can also have a container named **dags** and upload all Airflow files within it.
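If you don't have the tutorial content at hand, a minimal Airflow 2.x DAG along these lines is enough to validate the import (the DAG and task names are illustrative, not the tutorial's exact content):

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="tutorial",
    start_date=datetime(2023, 1, 1),
    schedule_interval=timedelta(days=1),
    catchup=False,
) as dag:
    # A trivial task so the DAG has something to run.
    print_date = BashOperator(task_id="print_date", bash_command="date")
```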
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
description: This document helps you use adaptive application control in Microso
Previously updated : 01/08/2023 Last updated : 02/06/2023 # Use adaptive application controls to reduce your machines' attack surfaces - Learn about the benefits of Microsoft Defender for Cloud's adaptive application controls and how you can enhance your security with this data-driven, intelligent feature. ## What are adaptive application controls?
Select the recommendation, or open the adaptive application controls page to vie
> [!TIP] > Both application lists include the option to restrict a specific application to certain users. Adopt the principle of least privilege whenever possible. >
- > Applications are defined by their publishers; if an application doesn't have publisher information (it's unsigned), a path rule is created for the full path of the specific application.
+ > Applications are defined by their publishers. If an application doesn't have publisher information (it's unsigned), a path rule is created for the full path of the specific application.
1. To apply the rule, select **Audit**.
To remediate the issues:
1. To investigate further, select a group.
- :::image type="content" source="./media/adaptive-application/recent-alerts.png" alt-text="Screenshot showing selecting a group the group settings page for adaptive application controls." lightbox="./media/adaptive-application/recent-alerts.png":::
+ :::image type="content" source="media/adaptive-application/recent-alerts.png" alt-text="Screenshot showing recent alerts.":::
1. For further details, and the list of affected machines, select an alert. The security alerts page shows more details of the alerts and provides a **Take action** link with recommendations of how to mitigate the threat.
- :::image type="content" source="media/adaptive-application/adaptive-application-alerts-start-time.png" alt-text="Screenshot showing the start time of adaptive application controls alerts is the time that adaptive application controls created the alert.":::
+ :::image type="content" source="media/adaptive-application/adaptive-application-alerts-start-time.png" alt-text="Screenshot of the start time of adaptive application controls alerts showing that the time is when adaptive application controls created the alert.":::
> [!NOTE] > Adaptive application controls calculates events once every twelve hours. The "activity start time" shown in the security alerts page is the time that adaptive application controls created the alert, **not** the time that the suspicious process was active.
To remediate the issues:
## Move a machine from one group to another
-When you move a machine from one group to another, the application control policy applied to it changes to the settings of the group that you moved it to. You can also move a machine from a configured group to a non-configured group; doing so removes any application control rules that were applied to the machine.
+When you move a machine from one group to another, the application control policy applied to it changes to the settings of the group that you moved it to. You can also move a machine from a configured group to a non-configured group, which removes any application control rules that were applied to the machine.
1. Open the **Workload protections dashboard** and from the advanced protection area, select **Adaptive application controls**.
When you move a machine from one group to another, the application control polic
To manage your adaptive application controls programmatically, use our REST API.
-The relevant API documentation is available in [the Adaptive application Controls section of Defender for Cloud's API docs](https://learn.microsoft.com/rest/api/defenderforcloud/adaptive-application-controls).
+The relevant API documentation is available in [the Adaptive application Controls section of Defender for Cloud's API docs](/rest/api/defenderforcloud/adaptive-application-controls).
Some of the functions available from the REST API include: * **List** retrieves all your group recommendations and provides a JSON with an object for each group.
-* **Get** retrieves the JSON with the full recommendation data (list of machines, publisher/path rules, etc.).
+* **Get** retrieves the JSON with the full recommendation data (that is, list of machines, publisher/path rules, and so on).
* **Put** configures your rule (use the JSON you retrieved with **Get** as the body for this request).
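As a minimal sketch of calling the **List** operation with Python's `requests` library: the resource path `Microsoft.Security/applicationWhitelistings` and the `api-version` value are assumptions based on the published API, and the subscription ID and bearer token are placeholders.

```python
import requests

subscription_id = "<subscription-id>"  # placeholder
token = "<azure-ad-bearer-token>"      # placeholder

url = (
    "https://management.azure.com/subscriptions/"
    f"{subscription_id}/providers/Microsoft.Security/applicationWhitelistings"
)
response = requests.get(
    url,
    params={"api-version": "2020-01-01"},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

# List returns a JSON document with one object per group recommendation.
for group in response.json().get("value", []):
    print(group["name"])
```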
defender-for-cloud Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-overview.md
This article describes security alerts and notifications in Microsoft Defender f
## What are security alerts? Security alerts are the notifications generated by Defender for Cloud and Defender for Cloud plans when threats are identified in your cloud, hybrid, or on-premises environment. -- Security alerts are triggered by advanced detections in Defender for Cloud, and are available when you enable [enhanced security features](enhanced-security-features-overview.md).
+- Security alerts are triggered by advanced detections in Defender for Cloud, and are available when you enable Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
- Each alert provides details of affected resources, issues, and remediation recommendations. - Defender for Cloud classifies alerts and prioritizes them by severity in the Defender for Cloud portal. - Alerts data is retained for 90 days.
defender-for-cloud Alerts Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-schemas.md
Last updated 11/09/2021
# Security alerts schemas
-If your subscription has enhanced security features enabled, you'll receive security alerts when Defender for Cloud detects threats to their resources.
+If your subscription has Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads) enabled, you'll receive security alerts when Defender for Cloud detects threats to your resources.
You can view these security alerts in Microsoft Defender for Cloud's pages - [overview dashboard](overview-page.md), [alerts](tutorial-security-incident.md), [resource health pages](investigate-resource-health.md), or [workload protections dashboard](workload-protections-dashboard.md) - and through external tools such as:
defender-for-cloud Alerts Suppression Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md
This page explains how you can use alerts suppression rules to suppress false po
|Aspect|Details| |-|:-| |Release state:|General availability (GA)|
-|Pricing:|Free<br>(Security alerts are generated by [Defender plans](enable-enhanced-security.md))|
+|Pricing:|Free<br>(Most security alerts are only available with [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads))|
|Required roles and permissions:|**Security admin** and **Owner** can create/delete rules.<br>**Security reader** and **Reader** can view rules.| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
defender-for-cloud Asset Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md
The asset inventory page of Microsoft Defender for Cloud shows the [security pos
Use this view and its filters to address such questions as: -- Which of my subscriptions with [Defender plans](defender-for-cloud-introduction.md#cwpidentify-unique-workload-security-requirements) enabled have outstanding recommendations?
+- Which of my subscriptions with [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads) enabled have outstanding recommendations?
- Which of my machines with the tag 'Production' are missing the Log Analytics agent? - How many of my machines tagged with a specific tag have outstanding recommendations? - Which machines in a specific resource group have a known vulnerability (using a CVE number)?
defender-for-cloud Auto Deploy Azure Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md
When you auto-provision the Log Analytics agent in Defender for Cloud, you can c
If you want to collect security events when you auto-provision the Azure Monitor Agent, you can create a [Data Collection Rule](../azure-monitor/essentials/data-collection-rule-overview.md) to collect the required events.
-Like for Log Analytics workspaces, Defender for Cloud users are eligible for [500-MB of free data](enhanced-security-features-overview.md#faqpricing-and-billing) daily on defined data types that include security events.
+Like for Log Analytics workspaces, Defender for Cloud users are eligible for [500-MB of free data](plan-defender-for-servers-data-workspace.md#log-analytics-pricing-faq) daily on defined data types that include security events.
## Next steps
defender-for-cloud Concept Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-attack-path.md
Learn how to use the [cloud security explorer](how-to-manage-cloud-security-expl
## Next steps -- [Enable Defender CSPM on a subscription](enable-enhanced-security.md#enable-enhanced-security-features-on-a-subscription) - [Identify and remediate attack paths](how-to-manage-attack-path.md) - [Enabling agentless scanning for machines](enable-vulnerability-assessment-agentless.md#enabling-agentless-scanning-for-machines) - [Build a query with the cloud security explorer](how-to-manage-cloud-security-explorer.md)
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Learn more about [agentless scanning](concept-agentless-data-collection.md).
## Next steps
-Learn about [Microsoft Defender for Cloud's basic and enhanced security features](enhanced-security-features-overview.md)
+Learn about Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Title: What is Microsoft Defender for Cloud?
description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multicloud resources and workloads. --- Previously updated : 10/04/2022 Last updated : 12/05/2022 # What is Microsoft Defender for Cloud?
-Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources. Defender for Cloud fills three vital needs as you manage the security of your resources and workloads in the cloud and on-premises:
+Microsoft Defender for Cloud is a cloud-native application protection platform (CNAPP) with a set of security measures and practices designed to protect cloud-based applications from various cyber threats and vulnerabilities. Defender for Cloud combines the capabilities of:
+- A development security operations (DevSecOps) solution that unifies security management at the code level across multicloud and multiple-pipeline environments
+- A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches
+- A cloud workload protection platform (CWPP) with specific protections for servers, containers, storage, databases, and other workloads
-- [**Defender for Cloud secure score**](secure-score-security-controls.md) **continually assesses** your security posture so you can track new security opportunities and precisely report on the progress of your security efforts.-- [**Defender for Cloud recommendations**](security-policy-concept.md) **secures** your workloads with step-by-step actions that protect your workloads from known security risks.-- [**Defender for Cloud alerts**](alerts-overview.md) **defends** your workloads in real-time so you can react immediately and prevent security events from developing.
+![Diagram that shows the core functionality of Microsoft Defender for Cloud.](media/defender-for-cloud-introduction/defender-for-cloud-pillars.png)
-For a step-by-step walkthrough of Defender for Cloud, check out this [interactive tutorial](https://mslearn.cloudguides.com/en-us/guides/Protect%20your%20multi-cloud%20environment%20with%20Microsoft%20Defender%20for%20Cloud).
+## Secure cloud applications
-You can learn more about Defender for Cloud from a cybersecurity expert by watching [Lessons Learned from the Field](episode-six.md).
+Defender for Cloud helps you to incorporate good security practices early during the software development process, or DevSecOps. You can protect your code management environments and your code pipelines, and get insights into your development environment security posture from a single location. Defender for Cloud currently includes Defender for DevOps.
-## Protect your resources and track your security progress
+Today's applications require security awareness at the code, infrastructure, and runtime levels to make sure that deployed applications are hardened against attacks.
-Microsoft Defender for Cloud's features covers the two broad pillars of cloud security: Cloud Workload Protection Platform (CWPP) and Cloud Security Posture Management (CSPM).
+| Capability | What problem does it solve? | Get started | Defender plan and pricing |
+| - | | -- | - |
| [Code pipeline insights](defender-for-devops-introduction.md) | Empowers security teams with the ability to protect applications and resources from code to cloud across multi-pipeline environments, including GitHub and Azure DevOps. Findings from Defender for DevOps, such as IaC misconfigurations and exposed secrets, can then be correlated with other contextual cloud security insights to prioritize remediation in code. | Connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) repositories to Defender for Cloud | [Defender for DevOps](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-### CSPM - Remediate security issues and watch your security posture improve
+## Improve your security posture
-In Defender for Cloud, the posture management features provide:
+The security of your cloud and on-premises resources depends on proper configuration and deployment. Defender for Cloud recommendations identify the steps that you can take to secure your environment.
-- **Hardening guidance** - to help you efficiently and effectively improve your security-- **Visibility** - to help you understand your current security situation
+Defender for Cloud includes Foundational CSPM capabilities at no cost. You can also enable advanced CSPM capabilities through paid Defender plans.
-Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues and shows your security posture in **secure score**, an aggregated score of the security findings that tells you, at a glance, your current security situation: the higher the score, the lower the identified risk level.
+| Capability | What problem does it solve? | Get started | Defender plan and pricing |
+| - | | -- | - |
| [Centralized policy management](security-policy-concept.md) | Define the security conditions that you want to maintain across your environment. The policy translates to recommendations that identify resource configurations that violate your security policy. The [Microsoft cloud security benchmark](concept-regulatory-compliance.md) is a built-in standard that applies security principles with detailed technical implementation guidance for Azure, for other cloud providers (such as AWS and GCP), and for other Microsoft clouds. | [Customize a security policy](custom-security-policies.md) | Foundational CSPM (Free) |
| [Secure score](secure-score-security-controls.md) | Summarize your security posture based on the security recommendations. As you remediate recommendations, your secure score improves. | [Track your secure score](secure-score-access-and-track.md) | Foundational CSPM (Free) |
+| [Multicloud coverage](plan-multicloud-security-get-started.md) | Connect to your multicloud environments with agentless methods for CSPM insight and CWP protection. | Connect your [Amazon AWS](quickstart-onboard-aws.md) and [Google GCP](quickstart-onboard-gcp.md) cloud resources to Defender for Cloud | Foundational CSPM (Free) |
+| [Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) | Use the dashboard to see weaknesses in your security posture. | [Enable CSPM tools](enable-enhanced-security.md) | Foundational CSPM (Free) |
+| [Advanced Cloud Security Posture Management](concept-cloud-security-posture-management.md) | Get advanced tools to identify weaknesses in your security posture, including:</br>- Governance to drive actions to improve your security posture</br>- Regulatory compliance to verify compliance with security standards</br>- Cloud security explorer to build a comprehensive view of your environment | [Enable CSPM tools](enable-enhanced-security.md) | Defender CSPM |
+| [Attack path analysis](concept-attack-path.md#what-is-attack-path-analysis) | Model traffic on your network to identify potential risks before you implement changes to your environment. | [Build queries to analyze paths](how-to-manage-attack-path.md) | Defender CSPM |
+| [Cloud Security Explorer](concept-attack-path.md#what-is-cloud-security-explorer) | A map of your cloud environment that lets you build queries to find security risks. | [Build queries to find security risks](how-to-manage-cloud-security-explorer.md) | Defender CSPM |
+| [Security governance](governance-rules.md#building-an-automated-process-for-improving-security-with-governance-rules) | Drive security improvements through your organization by assigning tasks to resource owners and tracking progress in aligning your security state with your security policy. | [Define governance rules](governance-rules.md#defining-governance-rules-to-automatically-set-the-owner-and-due-date-of-recommendations) | Defender CSPM |
| [Microsoft Entra Permissions Management](../active-directory/cloud-infrastructure-entitlement-management/index.yml) | Provide comprehensive visibility and control over permissions for any identity and any resource in Azure, AWS, and GCP. | [Review your Permissions Creep Index (PCI)](other-threat-protections.md#entra-permission-management-formerly-cloudknox) | Defender CSPM |
-As soon as you open Defender for Cloud for the first time, Defender for Cloud:
+## Protect cloud workloads
-- **Generates a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Microsoft cloud security benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements. You can also apply recommendations, and score based on the AWS Foundational Security Best practices standards.
+Proactive security principles require that you implement security practices that protect your workloads from threats. Cloud workload protections (CWP) surface workload-specific recommendations that lead you to the right security controls to protect your workloads.
- You can also [learn more about secure score](secure-score-security-controls.md).
+When your environment is threatened, security alerts right away indicate the nature and severity of the threat so you can plan your response. After you identify a threat in your environment, you need to quickly respond to limit the risk to your resources.
-- **Provides hardening recommendations** based on any identified security misconfigurations and weaknesses. Use these security recommendations to strengthen the security posture of your organization's Azure, hybrid, and multicloud resources.--- **Analyze and secure your attack paths** through the cloud security graph, which is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. -
- Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers may use to breach your environment to reach your high-impact assets. Attack path analysis exposes those attack paths and suggests recommendations as to how best remediate the issues that will break the attack path and prevent successful breach.
-
- By taking your environment's contextual information into account such as, internet exposure, permissions, lateral movement, and more. Attack path analysis identifies issues that may lead to a breach on your environment, and helps you to remediate the highest risk ones first.
-
- Learn more about [attack path analysis](concept-attack-path.md#what-is-attack-path-analysis).
-
-Defender CSPM offers two options to protect your environments and resources, a free option and a premium option. We recommend enabling the premium option to gain the full coverage and benefits of CSPM. You can learn more about the benefits offered by [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) and [the differences between the two plans](concept-cloud-security-posture-management.md).
-
-### CWP - Identify unique workload security requirements
-
-Defender for Cloud offers security alerts that are powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684). It also includes a range of advanced, intelligent, protections for your workloads. The workload protections are provided through Microsoft Defender plans specific to the types of resources in your subscriptions. For example, you can enable **Microsoft Defender for Storage** to get alerted about suspicious activities related to your storage resources.
-
-## Protect all of your resources under one roof
-
-Because Defender for Cloud is an Azure-native service, many Azure services are monitored and protected without needing any deployment, but you can also add resources the are on-premises or in other public clouds.
-
-When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to gather security-related data. For Azure machines, deployment is handled directly. For hybrid and multicloud environments, Microsoft Defender plans are extended to non Azure machines with the help of [Azure Arc](https://azure.microsoft.com/services/azure-arc/). CSPM features are extended to multicloud machines without the need for any agents (see [Defend resources running on other clouds](#defend-resources-running-on-other-clouds)).
-
-### Defend your Azure-native resources
-
-Defender for Cloud helps you detect threats across:
--- **Azure PaaS services** - Detect threats targeting Azure services including Azure App Service, Azure SQL, Azure Storage Account, and more data services. You can also perform anomaly detection on your Azure activity logs using the native integration with Microsoft Defender for Cloud Apps (formerly known as Microsoft Cloud App Security).--- **Azure data services** - Defender for Cloud includes capabilities that help you automatically classify your data in Azure SQL. You can also get assessments for potential vulnerabilities across Azure SQL and Storage services, and recommendations for how to mitigate them.--- **Networks** - Defender for Cloud helps you limit exposure to brute force attacks. By reducing access to virtual machine ports, using the just-in-time VM access, you can harden your network by preventing unnecessary access. You can set secure access policies on selected ports, for only authorized users, allowed source IP address ranges or IP addresses, and for a limited amount of time.-
-### Defend your on-premises resources
-
-In addition to defending your Azure environment, you can add Defender for Cloud capabilities to your hybrid cloud environment to protect your non-Azure servers. To help you focus on what matters the mostΓÇï, you'll get customized threat intelligence and prioritized alerts according to your specific environment.
-
-To extend protection to on-premises machines, deploy [Azure Arc](https://azure.microsoft.com/services/azure-arc/) and enable Defender for Cloud's enhanced security features. Learn more in [Add non-Azure machines with Azure Arc](quickstart-onboard-machines.md#add-non-azure-machines-with-azure-arc).
-
-### Defend resources running on other clouds
-
-Defender for Cloud can protect resources in other clouds (such as AWS and GCP).
-
-For example, if you've [connected an Amazon Web Services (AWS) account](quickstart-onboard-aws.md) to an Azure subscription, you can enable any of these protections:
--- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature helping you manage your AWS resources alongside your Azure resources.-- **Microsoft Defender for Kubernetes** extends its container threat detection and advanced defenses to your **Amazon EKS Linux clusters**.-- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.-
-Learn more about connecting your [AWS](quickstart-onboard-aws.md) and [GCP](quickstart-onboard-gcp.md) accounts to Microsoft Defender for Cloud.
-
-## Close vulnerabilities before they get exploited
--
-Defender for Cloud includes vulnerability assessment solutions for your virtual machines, container registries, and SQL servers as part of the enhanced security features. Some of the scanners are powered by Qualys. But you don't need a Qualys license, or even a Qualys account - everything's handled seamlessly inside Defender for Cloud.
-
-Microsoft Defender for Servers includes automatic, native integration with Microsoft Defender for Endpoint. Learn more, [Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md). With this integration enabled, you'll have access to the vulnerability findings from **Microsoft Defender Vulnerability Management**. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md).
-
-Review the findings from these vulnerability scanners and respond to them all from within Defender for Cloud. This broad approach brings Defender for Cloud closer to being the single pane of glass for all of your cloud security efforts.
-
-Learn more on the following pages:
--- [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md)-- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-vulnerability-assessment-azure.md)-- [Identify vulnerabilities in images in AWS Elastic Container Registry](defender-for-containers-vulnerability-assessment-elastic.md)-
-## Enforce your security policy from the top down
--
-It's a security basic to know and make sure your workloads are secure, and it starts with having tailored security policies in place. Because policies in Defender for Cloud are built on top of Azure Policy controls, you're getting the full range and flexibility of a **world-class policy solution**. In Defender for Cloud, you can set your policies to run on management groups, across subscriptions, and even for a whole tenant.
-
-Defender for Cloud continuously discovers new resources that are being deployed across your workloads and assesses whether they're configured according to security best practices. If not, they're flagged and you get a prioritized list of recommendations for what you need to fix. Recommendations help you reduce the attack surface across each of your resources.
-
-The list of recommendations is enabled and supported by the Microsoft cloud security benchmark. This Microsoft-authored benchmark, based on common compliance frameworks, began with Azure and now provides a set of guidelines for security and compliance best practices for multiple cloud environments. Learn more in [Microsoft cloud security benchmark introduction](/security/benchmark/azure/introduction).
-
-In this way, Defender for Cloud enables you not just to set security policies, but to *apply secure configuration standards across your resources*.
--
-To help you understand how important each recommendation is to your overall security posture, Defender for Cloud groups the recommendations into security controls and adds a **secure score** value to each control. This is crucial in enabling you to **prioritize your security work**.
--
-## Extend Defender for Cloud with Defender plans and external monitoring
--
-You can extend the Defender for Cloud protection with:
--- **Advanced threat protection features** for virtual machines, SQL databases, containers, web applications, your network, and more - Protections include securing the management ports of your VMs with [just-in-time access](just-in-time-access-overview.md), and [adaptive application controls](adaptive-application-controls.md) to create allowlists for what apps should and shouldn't run on your machines.-
-The **Defender plans** of Microsoft Defender for Cloud offer comprehensive defenses for the compute, data, and service layers of your environment:
--- [Microsoft Defender for Servers](defender-for-servers-introduction.md)-- [Microsoft Defender for Storage](defender-for-storage-introduction.md)-- [Microsoft Defender for SQL](defender-for-sql-introduction.md)-- [Microsoft Defender for Containers](defender-for-containers-introduction.md)-- [Microsoft Defender for App Service](defender-for-app-service-introduction.md)-- [Microsoft Defender for Key Vault](defender-for-key-vault-introduction.md)-- [Microsoft Defender for Resource Manager](defender-for-resource-manager-introduction.md)-- [Microsoft Defender for DNS](defender-for-dns-introduction.md)-- [Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md)-- [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)-- [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md)
- - [Security governance and regulatory compliance](concept-cloud-security-posture-management.md#security-governance-and-regulatory-compliance)
- - [Cloud security explorer](concept-cloud-security-posture-management.md#cloud-security-explorer)
- - [Attack path analysis](concept-cloud-security-posture-management.md#attack-path-analysis)
- - [Agentless scanning for machines](concept-cloud-security-posture-management.md#agentless-scanning-for-machines)
-- [Defender for DevOps](defender-for-devops-introduction.md)--
-Use the advanced protection tiles in the [workload protections dashboard](workload-protections-dashboard.md) to monitor and configure each of these protections.
-
-> [!TIP]
-> Microsoft Defender for IoT is a separate product. You'll find all the details in [Introducing Microsoft Defender for IoT](../defender-for-iot/overview.md).
--- **Security alerts** - When Defender for Cloud detects a threat in any area of your environment, it generates a security alert. These alerts describe details of the affected resources, suggested remediation steps, and in some cases an option to trigger a logic app in response. Whether an alert is generated by Defender for Cloud, or received by Defender for Cloud from an integrated security product, you can export it. To export your alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md). Defender for Cloud's threat protection includes fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign, where it started and what kind of impact it had on your resources. [Defender for Cloud's supported kill chain intents are based on version 9 of the MITRE ATT&CK matrix](alerts-reference.md#intentions).
+| Capability | What problem does it solve? | Get started | Defender plan and pricing |
+| - | | -- | - |
+| Protect cloud servers | Provide server protections through Microsoft Defender for Endpoint or extended protection with just-in-time network access, file integrity monitoring, vulnerability assessment, and more. | [Secure your multicloud and on-premises servers](defender-for-servers-introduction.md) | [Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
+| Identify threats to your storage resources | Detect unusual and potentially harmful attempts to access or exploit your storage accounts using advanced threat detection capabilities and Microsoft Threat Intelligence data to provide contextual security alerts. | [Protect your cloud storage resources](defender-for-storage-introduction.md) | [Defender for Storage](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
+| Protect cloud databases | Protect your entire database estate with attack detection and threat response for the most popular database types in Azure to protect the database engines and data types, according to their attack surface and security risks. | [Deploy specialized protections for cloud and on-premises databases](quickstart-enable-database-protections.md) | - [Defender for Azure SQL Databases](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br>- [Defender for SQL servers on machines](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br>- [Defender for Open-source relational databases](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br>- [Defender for Azure Cosmos DB](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
+| Protect containers | Secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications with environment hardening, vulnerability assessments, and run-time protection. | [Find security risks in your containers](defender-for-containers-introduction.md) | [Defender for Containers](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
+| [Infrastructure service insights](asset-inventory.md) | Diagnose weaknesses in your application infrastructure that can leave your environment susceptible to attack. | - [Identify attacks targeting applications running over App Service](defender-for-app-service-introduction.md)</br>- [Detect attempts to exploit Key Vault accounts](defender-for-key-vault-introduction.md)</br>- [Get alerted on suspicious Resource Manager operations](defender-for-resource-manager-introduction.md)</br>- [Expose anomalous DNS activities](defender-for-dns-introduction.md) | - [Defender for App Service](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br></br>- [Defender for Key Vault](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br></br>- [Defender for Resource Manager](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br></br>- [Defender for DNS](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
| [Security alerts](alerts-overview.md) | Get informed of real-time events that threaten the security of your environment. Alerts are categorized and assigned severity levels to indicate proper responses. | [Manage security alerts](managing-and-responding-alerts.md) | [Any workload protection Defender plan](#protect-cloud-workloads) |
+| [Security incidents](alerts-overview.md#what-are-security-incidents) | Correlate alerts to identify attack patterns and integrate with Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), and IT Service Management (ITSM) solutions to respond to threats and limit the risk to your resources. | [Export alerts to SIEM, SOAR, or ITSM systems](export-to-siem.md) | [Any workload protection Defender plan](#protect-cloud-workloads) |
## Learn More
-You can also check out the following blogs:
+For more information about Defender for Cloud and how it works, check out:
-- [A new name for multicloud security: Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/a-new-name-for-multi-cloud-security-microsoft-defender-for-cloud/ba-p/2943020)
+- A [step-by-step walkthrough](https://mslearn.cloudguides.com/en-us/guides/Protect%20your%20multi-cloud%20environment%20with%20Microsoft%20Defender%20for%20Cloud) of Defender for Cloud
+- An interview about Defender for Cloud with an expert in cybersecurity in [Lessons Learned from the Field](episode-six.md)
- [Microsoft Defender for Cloud - Use cases](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-use-cases/ba-p/2953619) - [Microsoft Defender for Cloud PoC Series - Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-poc-series-microsoft-defender-for/ba-p/3064644) ## Next steps -- To get started with Defender for Cloud, you need a subscription to Microsoft Azure. If you don't have a subscription, [sign up for a free trial](https://azure.microsoft.com/free/).--- Defender for Cloud's free plan is enabled on all your current Azure subscriptions when you visit the Defender for Cloud pages in the Azure portal for the first time, or if enabled programmatically via the REST API. To take advantage of advanced security management and threat detection capabilities, you must enable the enhanced security features. These features are free for the first 30 days. [Learn more about the pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/).--- If you're ready to enable enhanced security features now, [Quickstart: Enable enhanced security features](enable-enhanced-security.md) walks you through the steps.- > [!div class="nextstepaction"] > [Enable Microsoft Defender plans](enable-enhanced-security.md)
defender-for-cloud Defender For Cloud Planning And Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-planning-and-operations-guide.md
Defender for Cloud policies contain the following components:
- [Security policy](tutorial-security-policy.md): an [Azure Policy](../governance/policy/overview.md) that determines which controls are monitored and recommended by Defender for Cloud. You can also use Azure Policy to create new definitions, define more policies, and assign policies across management groups. - [Email notifications](configure-email-notifications.md): security contacts and notification settings.--- [Pricing tier](enhanced-security-features-overview.md): with or without Microsoft Defender for Cloud's enhanced security features, which determine which Defender for Cloud features are available for resources in scope (can be specified for subscriptions and workspaces using the API).
+- [Pricing tier](defender-for-cloud-introduction.md#protect-cloud-workloads): with or without Microsoft Defender for Cloud's Defender plans, which determine which Defender for Cloud features are available for resources in scope (can be specified for subscriptions and workspaces using the API).
> [!NOTE] > Specifying a security contact ensures that Azure can reach the right person in your organization if a security incident occurs. Read [Provide security contact details in Defender for Cloud](configure-email-notifications.md) for more information on how to enable this recommendation.
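For the pricing tier mentioned above, which can be set programmatically, here's a minimal Azure CLI sketch, assuming the `VirtualMachines` plan name used for Defender for Servers and subscription-level scope:

```azurecli
# Minimal sketch: enable the Defender for Servers plan on the current subscription.
# Use --tier free to move the plan back to the free tier.
az security pricing create --name VirtualMachines --tier standard
```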
In the Azure portal, you can browse to see a list of your Log Analytics workspac
For workspaces created by Defender for Cloud, data is retained for 30 days. For existing workspaces, retention is based on the workspace pricing tier. If you want, you can also use an existing workspace.
-If your agent reports to a workspace other than the **default** workspace, any Microsoft Defender plans providing [enhanced security features](enhanced-security-features-overview.md) that you've enabled on the subscription should also be enabled on the workspace.
+If your agent reports to a workspace other than the **default** workspace, any Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads) that you've enabled on the subscription should also be enabled on the workspace.
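To confirm how a particular workspace is configured, a quick sketch with hypothetical resource names:

```azurecli
# Hypothetical names: check the data retention period of an existing workspace
az monitor log-analytics workspace show \
    --resource-group myResourceGroup \
    --workspace-name myWorkspace \
    --query retentionInDays
```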
> [!NOTE] > Microsoft makes strong commitments to protect the privacy and security of this data. Microsoft adheres to strict compliance and security guidelines, from coding to operating a service. For more information about data handling and privacy, read [Defender for Cloud Data Security](data-security.md).
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Some images may reuse tags from an image that was already scanned. For example,
### Does Defender for Containers scan images in Microsoft Container Registry? Currently, Defender for Containers can scan images in Azure Container Registry (ACR) and AWS Elastic Container Registry (ECR) only.
-Docker Registry, Microsoft Artifact Registry/Microsoft Container Registry, and Microsoft Azure Red Hat OpenShift (ARO) built-in container image registry aren't supported.
-Images should first be imported to ACR. Learn more about [importing container images to an Azure container registry](../container-registry/container-registry-import-images.md?tabs=azure-cli).
+Docker Registry, Microsoft Artifact Registry/Microsoft Container Registry, and Microsoft Azure Red Hat OpenShift (ARO) built-in container image registry are not supported.
+Images should first be imported to ACR. Learn more about [importing container images to an Azure container registry](/azure/container-registry/container-registry-import-images?tabs=azure-cli).
## Next steps
-Learn more about the [advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md).
+Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
defender-for-cloud Defender For Containers Vulnerability Assessment Elastic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-elastic.md
To enable scanning of vulnerabilities in containers, you have to [connect your A
Defender for Containers creates resources in your AWS account to build an inventory of the software in your images. The scan then sends only the software inventory to Defender for Cloud. This architecture protects your information privacy and intellectual property, and also keeps the outbound network traffic to a minimum.
-These resources are created under us-east-1 and eu-central-1 in each AWS account where container vulnerability assesment is enabled:
+These resources are created under us-east-1 and eu-central-1 in each AWS account where container vulnerability assessment is enabled:
- **S3 bucket** with the prefix `defender-for-containers-va` - **ECS cluster** with the name `defender-for-containers-va`
Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud
Learn more about: -- [Advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md)
+- Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads)
- [Multicloud protections](multicloud.yml) for your AWS account
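For the Sub-Assessments REST API mentioned above, a sketch of the equivalent Azure CLI call, assuming the `az security sub-assessment` command group available in recent CLI versions and hypothetical identifiers:

```azurecli
# Hypothetical IDs: list vulnerability findings (sub-assessments) for a scanned registry
az security sub-assessment list \
    --assessed-resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.ContainerRegistry/registries/<registry>" \
    --assessment-name "<assessment-key-guid>"
```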
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
- Title: Understand the basic and extended security features of Microsoft Defender for Cloud
-description: Learn about the benefits of enabling enhanced security in Microsoft Defender for Cloud
- Previously updated : 01/24/2023---
-# Basic and enhanced security features
-
-Defender for Cloud offers basic, and many enhanced security features that can help protect your organization against threats and attacks.
-
-## Basic features
-
-When you open Defender for Cloud in the Azure portal for the first time or if you enable it through the API, Defender for Cloud is enabled for free on all your Azure subscriptions. Defender for Cloud provides foundational cloud security and posture management (CSPM) features by default. The foundational CSPM includes, [secure score](secure-score-security-controls.md), [security policy and basic recommendations](security-policy-concept.md), and [network security assessment](protect-network-resources.md) to help you protect your Azure resources.
-
-## Try out enhanced features
-
-If you want to try out the enhanced security features, [enable enhanced security features](enable-enhanced-security.md) for free for the first 30 days. At the end of 30 days, if you decide to continue using the service, we'll automatically start charging for usage. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-
-## Enhanced features
-
-When you enable the enhanced security features (paid), Defender for Cloud can provide unified security management and threat protection across your hybrid cloud workloads, including:
-- **Microsoft Defender for Endpoint** - Microsoft Defender for Servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) for comprehensive endpoint detection and response (EDR). Learn more about the benefits of using Microsoft Defender for Endpoint together with Defender for Cloud in [Use Defender for Cloud's integrated EDR solution](integration-defender-for-endpoint.md).--- **Vulnerability assessment for virtual machines, container registries, and SQL resources** - Easily enable vulnerability assessment solutions to discover, manage, and resolve vulnerabilities. View, investigate, and remediate the findings directly from within Defender for Cloud.--- **Multicloud security** - Connect your accounts from Amazon Web Services (AWS) and Google Cloud Platform (GCP) to protect resources and workloads on those platforms with a range of Microsoft Defender for Cloud security features.--- **Hybrid security** - Get a unified view of security across all of your on-premises and cloud workloads. Apply security policies and continuously assess the security of your hybrid cloud workloads to ensure compliance with security standards. Collect, search, and analyze security data from multiple sources, including firewalls and other partner solutions.--- **Threat protection alerts** - Advanced behavioral analytics and the Microsoft Intelligent Security Graph provide an edge over evolving cyber-attacks. Built-in behavioral analytics and machine learning can identify attacks and zero-day exploits. Monitor networks, machines, data stores (SQL servers hosted inside and outside Azure, Azure SQL databases, Azure SQL Managed Instance, and Azure Storage) and cloud services for incoming attacks and post-breach activity. Streamline investigation with interactive tools and contextual threat intelligence.--- **Track compliance with a range of standards** - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in [Microsoft cloud security benchmark](/security/benchmark/azure/introduction). When you enable the enhanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the [regulatory compliance dashboard](update-regulatory-compliance-packages.md).--- **Access and application controls** - Block malware and other unwanted applications by applying machine learning powered recommendations adapted to your specific workloads to create allowlists and blocklists. Reduce the network attack surface with just-in-time, controlled access to management ports on Azure VMs. Access and application control drastically reduce exposure to brute force and other network attacks.--- **Container security features** - Benefit from vulnerability management and real-time threat protection on your containerized environments. Charges are based on the number of unique container images pushed to your connected registry. After an image has been scanned once, you won't be charged for it again unless it's modified and pushed once more.--- **Breadth threat protection for resources connected to Azure** - Cloud-native threat protection for the Azure services common to all of your resources: Azure Resource Manager, Azure DNS, Azure network layer, and Azure Key Vault.
Defender for Cloud has unique visibility into the Azure management layer and the Azure DNS layer, and can therefore protect cloud resources that are connected to those layers.--- **Manage your Cloud Security Posture Management (CSPM)** - CSPM offers you the ability to remediate security issues and review your security posture through the tools provided. These tools include:
- - Security governance and regulatory compliance
- - Cloud security graph
- - Attack path analysis
- - Agentless scanning for machines
-
-## FAQ - Pricing and billing
--- [How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud?](#how-can-i-track-who-in-my-organization-enabled-a-microsoft-defender-plan-in-defender-for-cloud)-- [What are the plans offered by Defender for Cloud?](#what-are-the-plans-offered-by-defender-for-cloud)-- [How do I enable Defender for Cloud's enhanced security for my subscription?](#how-do-i-enable-defender-for-clouds-enhanced-security-for-my-subscription)-- [Can I enable Microsoft Defender for Servers on a subset of servers?](#can-i-enable-microsoft-defender-for-servers-on-a-subset-of-servers)-- [If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)-- [My subscription has Microsoft Defender for Servers enabled, which machines do I pay for?](#my-subscription-has-microsoft-defender-for-servers-enabled-which-machines-do-i-pay-for)-- [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed)-- [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice)-- [If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)-- [Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine)-- [What data types are included in the 500-MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)-- [How can I monitor my daily usage](#how-can-i-monitor-my-daily-usage)-
-### How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud?
-Azure Subscriptions may have multiple administrators with permissions to change the pricing settings. To find out which user made a change, use the Azure Activity Log.
--
-If the user's info isn't listed in the **Event initiated by** column, explore the event's JSON for the relevant details.
---
-### What are the plans offered by Defender for Cloud?
-The free offering from Microsoft Defender for Cloud offers the secure score and related tools. Enabling enhanced security turns on all of the Microsoft Defender plans to provide a range of security benefits for all your resources in Azure, hybrid, and multicloud environments.
-
-### How do I enable Defender for Cloud's enhanced security for my subscription?
-You can use any of the following ways to enable enhanced security for your subscription:
-
-| Method | Instructions |
-|-|-|
-| Defender for Cloud pages of the Azure portal | [Enable enhanced protections](enable-enhanced-security.md) |
-| REST API | [Pricings API](/rest/api/defenderforcloud/pricings) |
-| Azure CLI | [az security pricing](/cli/azure/security/pricing) |
-| PowerShell | [Set-AzSecurityPricing](/powershell/module/az.security/set-azsecuritypricing) |
-| Azure Policy | [Bundle Pricings](https://github.com/Azure/Azure-Security-Center/blob/master/Pricing%20%26%20Settings/ARM%20Templates/Set-ASC-Bundle-Pricing.json) |
--
-### Can I enable Microsoft Defender for Servers on a subset of servers?
-
-No. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, all of the connected machines will be protected by Defender for Servers. This includes servers that don't have the Log Analytics agent or Azure Monitor agent installed.
-
-### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?
-
-If you already have a license for **Microsoft Defender for Endpoint for Servers Plan 2**, you won't have to pay for that part of your Microsoft Defender for Servers license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
-
-To request your discount, [contact Defender for Cloud's support team](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). You'll need to provide the relevant workspace ID, region, and number of Microsoft Defender for Endpoint for servers licenses applied for machines in the given workspace.
-
-The discount will be effective starting from the approval date, and won't take place retroactively.
-
-### My subscription has Microsoft Defender for Servers enabled, which machines do I pay for?
-
-When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, all machines including machines that are part of PaaS services, in that subscription are billed according to their power state as shown in the following table:
-
-| State | Description | Instance usage billed |
-|--|--|--|
-| Starting | VM is starting up. | Not billed |
-| Running | Normal working state for a VM | Billed |
-| Stopping | This state is transitional. When completed, it will show as Stopped. | Billed |
-| Stopped | The VM has been shut down from within the guest OS or using the PowerOff APIs. Hardware is still allocated to the VM and it remains on the host. | Billed |
-| Deallocating | This state is transitional. When completed, the VM will show as Deallocated. | Not billed |
-| Deallocated | The VM has been stopped successfully and removed from the host. | Not billed |
--
-### If I enable Defender for Clouds Servers plan on the subscription level, do I need to enable it on the workspace level?
-
-When you enable the Servers plan on the subscription level, Defender for Cloud will enable the Servers plan on your default workspaces automatically. Connect to the default workspace by selecting **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** option and selecting **Apply**.
--
-However, if you're using a custom workspace in place of the default workspace, you'll need to enable the Servers plan on all of your custom workspaces that don't have it enabled.
-
-If you're using a custom workspace and enable the plan on the subscription level only, the `Microsoft Defender for servers should be enabled on workspaces` recommendation will appear on the Recommendations page. This recommendation will give you the option to enable the servers plan on the workspace level with the Fix button. You're charged for all VMs in the subscription even if the Servers plan isn't enabled for the workspace. The VMs won't benefit from features that depend on the Log Analytics workspace, such as Adaptive Application Controls, File Integrity Monitoring, SQL Protection, ProtectionStatus reports from the Endpoint Protection Antimalware, or the 500-MB free data ingestion allowance.
-
-Enabling the Servers plan on both the subscription and its connected workspaces, won't incur a double charge. The system will identify each unique VM.
-
-If you enable the Servers plan on cross-subscription workspaces, connected VMs with the Log Analytics agent installed from all subscriptions will be billed, including subscriptions that don't have the Servers plan enabled. Connected VMs with the Azure Monitor agent installed are billed only if the Servers plan is enabled at the subscription level.
-
-### Will I be charged for machines without the Log Analytics agent installed?
-
-Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, you'll be charged for all machines that are connected to your Azure subscription or AWS account. The term machines include Azure virtual machines, Azure Virtual Machine Scale Sets instances, and Azure Arc-enabled servers. Machines that don't have Log Analytics installed are covered by protections that don't depend on the Log Analytics agent.
-
-### If a Log Analytics agent reports to multiple workspaces, will I be charged twice?
-
-If a machine, reports to multiple workspaces, and all of them have Defender for Servers enabled, the machines will be billed for each attached workspace.
-
-### If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?
-
-Yes. If you configure your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll get 500-MB free data ingestion for each workspace. It's calculated per node, per reported workspace, per day, and available for every workspace that has a 'Security' or 'AntiMalware' solution installed. You'll be charged for any data ingested over the 500-MB limit.
-
-### Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?
-
-You'll get 500-MB free data ingestion per day, for every VM connected to the workspace. Specifically for the [security data types](#what-data-types-are-included-in-the-500-mb-data-daily-allowance) that are directly collected by Defender for Cloud.
-
-This data is a daily rate averaged across all nodes. Your total daily free limit is equal to **[number of machines] x 500 MB**. So even if some machines send 100 MB and others send 800 MB, if the total doesn't exceed your total daily free limit, you won't be charged extra.
-
-### What data types are included in the 500-MB data daily allowance?
-Defender for Cloud's billing is closely tied to the billing for Log Analytics. [Microsoft Defender for Servers](defender-for-servers-introduction.md) provides a 500 MB/node/day allocation for machines against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
--- [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert)-- [SecurityBaseline](/azure/azure-monitor/reference/tables/securitybaseline)-- [SecurityBaselineSummary](/azure/azure-monitor/reference/tables/securitybaselinesummary)-- [SecurityDetection](/azure/azure-monitor/reference/tables/securitydetection)-- [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent)-- [WindowsFirewall](/azure/azure-monitor/reference/tables/windowsfirewall)-- [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent)-- [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)-- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled.-
-If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
-
-## How can I monitor my daily usage?
-
-You can view your data usage in two different ways, the Azure portal, or by running a script.
-
-**To view your usage in the Azure portal**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Log Analytics workspaces**.
-
-1. Select your workspace.
-
-1. Select **Usage and estimated costs**.
-
- :::image type="content" source="media/enhanced-security-features-overview/data-usage.png" alt-text="Screenshot of your data usage of your log analytics workspace. " lightbox="media/enhanced-security-features-overview/data-usage.png":::
-
-You can also view estimated costs under different pricing tiers by selecting :::image type="icon" source="media/enhanced-security-features-overview/drop-down-icon.png" border="false"::: for each pricing tier.
--
-**To view your usage by using a script**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Log Analytics workspaces** > **Logs**.
-
-1. Select your time range. Learn about [time ranges](../azure-monitor/logs/log-analytics-tutorial.md).
-
-1. Copy and past the following query into the **Type your query here** section.
-
- ```azurecli
- let Unit= 'GB';
- Usage
- | where IsBillable == 'TRUE'
- | where DataType in ('SecurityAlert', 'SecurityBaseline', 'SecurityBaselineSummary', 'SecurityDetection', 'SecurityEvent', 'WindowsFirewall', 'MaliciousIPCommunication', 'SysmonEvent', 'ProtectionStatus', 'Update', 'UpdateSummary')
- | project TimeGenerated, DataType, Solution, Quantity, QuantityUnit
- | summarize DataConsumedPerDataType = sum(Quantity)/1024 by DataType, DataUnit = Unit
- | sort by DataConsumedPerDataType desc
- ```
-
-1. Select **Run**.
-
- :::image type="content" source="media/enhanced-security-features-overview/select-run.png" alt-text="Screenshot showing where to enter your query and where the select run button is located." lightbox="media/enhanced-security-features-overview/select-run.png":::
-
-You can learn how to [Analyze usage in Log Analytics workspace](../azure-monitor/logs/analyze-usage.md).
-
-Based on your usage, you won't be billed until you've used your daily allowance. If you're receiving a bill, it's only for the data used after the 500-MB limit is reached, or for other service that doesn't fall under the coverage of Defender for Cloud.
-
-## Next steps
-This article explained Defender for Cloud's pricing options. For related material, see:
--- [How to optimize your Azure workload costs](https://azure.microsoft.com/blog/how-to-optimize-your-azure-workload-costs/)-- [Pricing details according to currency or region](https://azure.microsoft.com/pricing/details/defender-for-cloud/)-- You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. Use [solution targeting](../azure-monitor/insights/solution-targeting.md) to apply a scope to the solution and target a subset of computers in the workspace. If you're using solution targeting, Defender for Cloud lists the workspace as not having a solution.
-> [!IMPORTANT]
-> Solution targeting has been deprecated because the Log Analytics agent is being replaced with the Azure Monitor agent and solutions in Azure Monitor are being replaced with insights. You can continue to use solution targeting if you already have it configured, but it is not available in new regions.
-> The feature will not be supported after August 31, 2024.
-> Regions that support solution targeting until the deprecation date are:
->
-> | Region code | Region name |
-> | : | :- |
-> | CCAN | canadacentral |
-> | CHN | switzerlandnorth |
-> | CID | centralindia |
-> | CQ | brazilsouth |
-> | CUS | centralus |
-> | DEWC | germanywestcentral |
-> | DXB | UAENorth |
-> | EA | eastasia |
-> | EAU | australiaeast |
-> | EJP | japaneast |
-> | EUS | eastus |
-> | EUS2 | eastus2 |
-> | NCUS | northcentralus |
-> | NEU | NorthEurope |
-> | NOE | norwayeast |
-> | PAR | FranceCentral |
-> | SCUS | southcentralus |
-> | SE | KoreaCentral |
-> | SEA | southeastasia |
-> | SEAU | australiasoutheast |
-> | SUK | uksouth |
-> | WCUS | westcentralus |
-> | WEU | westeurope |
-> | WUS | westus |
-> | WUS2 | westus2 |
->
-> | Air-gapped clouds | Region code | Region name |
-> | :- | :- | :- |
-> | UsNat | EXE | usnateast |
-> | UsNat | EXW | usnatwest |
-> | UsGov | FF | usgovvirginia |
-> | China | MC | ChinaEast2 |
-> | UsGov | PHX | usgovarizona |
-> | UsSec | RXE | usseceast |
-> | UsSec | RXW | ussecwest |
defender-for-cloud Features Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/features-paas.md
- Title: Microsoft Defender for Cloud features for supported Azure PaaS resources.
-description: This page shows the availability of Microsoft Defender for Cloud features for the supported Azure PaaS resources.
--- Previously updated : 02/27/2022-
-# Feature coverage for Azure PaaS services
-
-<a name="paas-services"></a>
-
-The table below shows the availability of Microsoft Defender for Cloud features for the supported Azure PaaS resources.
-
-|Service|Recommendations (Free)|Security alerts |Vulnerability assessment|
-|:-|:-:|:-:|:-:|
-|Azure App Service|✔|✔|-|
-|Azure Automation account|✔|-|-|
-|Azure Batch account|✔|-|-|
-|Azure Blob Storage|✔|✔|-|
-|Azure Cache for Redis|✔|-|-|
-|Azure Cloud Services|✔|-|-|
-|Azure Cognitive Search|✔|-|-|
-|Azure Container Registry|✔|✔|✔|
-|Azure Cosmos DB|✔|✔|-|
-|Azure Data Lake Analytics|✔|-|-|
-|Azure Data Lake Storage|✔|✔|-|
-|Azure Database for MySQL|✔|✔|-|
-|Azure Database for PostgreSQL|✔|✔|-|
-|Azure Event Hubs namespace|✔|-|-|
-|Azure Functions app|✔|-|-|
-|Azure Key Vault|✔|✔|-|
-|Azure Kubernetes Service|✔|✔|-|
-|Azure Load Balancer|✔|-|-|
-|Azure Logic Apps|✔|-|-|
-|Azure SQL Database|✔|✔|✔|
-|Azure SQL Managed Instance|✔|✔|✔|
-|Azure Service Bus namespace|✔|-|-|
-|Azure Service Fabric account|✔|-|-|
-|Azure Storage accounts|✔|✔|-|
-|Azure Stream Analytics|✔|-|-|
-|Azure Subscription|✔ **|✔|-|
-|Azure Virtual Network</br> (incl. subnets, NICs, and network security groups)|✔|-|-|
-
-\* These features are currently supported in preview.
-
-\*\* Azure Active Directory (Azure AD) recommendations are available only for subscriptions with [enhanced security features enabled](enable-enhanced-security.md).
defender-for-cloud File Integrity Monitoring Enable Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-log-analytics.md
FIM is only available from Defender for Cloud's pages in the Azure portal. There
- Access and view the status and settings of each workspace
- - ![Upgrade plan icon.][4] Upgrade the workspace to use enhanced security features. This icon indicates that the workspace or subscription isn't protected with Microsoft Defender for Servers. To use the FIM features, your subscription must be protected with this plan. For more information, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md).
+ - ![Upgrade plan icon.][4] Upgrade the workspace to use enhanced security features. This icon indicates that the workspace or subscription isn't protected with Microsoft Defender for Servers. To use the FIM features, your subscription must be protected with this plan. Learn about how to [enable Defender for Servers](plan-defender-for-servers-select-plan.md).
- ![Enable icon][3] Enable FIM on all machines under the workspace and configure the FIM options. This icon indicates that FIM isn't enabled for the workspace. If there's no enable or upgrade button, and the space is blank, it means that FIM is already enabled on the workspace.
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
Title: Understanding just-in-time virtual machine access in Microsoft Defender for Cloud description: This document explains how just-in-time VM access in Microsoft Defender for Cloud helps you control access to your Azure virtual machines -- Last updated 05/15/2022
defender-for-cloud Monitoring Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md
Learn more about [using the Azure Monitor Agent with Defender for Cloud](auto-de
| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
+#### Supported operating systems for the Log Analytics agent
+
+Defender for Cloud depends on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md). Ensure your machines are running one of the supported operating systems for this agent as described on the following pages:
+
+* [Log Analytics agent for Windows supported operating systems](../azure-monitor/agents/agents-overview.md#supported-operating-systems)
+* [Log Analytics agent for Linux supported operating systems](../azure-monitor/agents/agents-overview.md#supported-operating-systems)
+
+Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](working-with-log-analytics-agent.md#manual-agent).
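One way to verify this on an Azure VM, assuming hypothetical resource names, is to check that the agent extension is installed and provisioned:

```azurecli
# Hypothetical names: confirm the Log Analytics agent extension is present on a VM
az vm extension list \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --query "[?contains(name, 'OmsAgent') || contains(name, 'MicrosoftMonitoringAgent')].{name:name, state:provisioningState}" \
    --output table
```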
<a name="preexisting"></a>
This page explained what monitoring components are and how to enable them.
Learn more about: - [Setting up email notifications](configure-email-notifications.md) for security alerts-- Protecting workloads with [enhanced security features](enhanced-security-features-overview.md)
+- Protecting workloads with [the Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads)
defender-for-cloud Onboard Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboard-management-group.md
There are various ways you might choose to modify the Azure Policy definition:
The supplied definition defines *either* of the 'pricing' settings below as compliant, meaning that a subscription set to 'standard' or 'free' is compliant. > [!TIP]
- > When any Microsoft Defender plan is enabled, it's described in a policy definition as being on the 'Standard' setting. When it's disabled, it's 'Free'. To learn about the differences between these plans, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md).
+ > When any Microsoft Defender plan is enabled, it's described in a policy definition as being on the 'Standard' setting. When it's disabled, it's 'Free'. To learn about the differences between these plans, see [Microsoft Defender for Cloud's Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
``` "existenceCondition": {
defender-for-cloud Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/os-coverage.md
- Title: Platforms supported by Microsoft Defender for Cloud
-description: This document provides a list of platforms supported by Microsoft Defender for Cloud.
---- Previously updated : 01/09/2023--
-# Supported platforms
-
-This page shows the platforms and environments supported by Microsoft Defender for Cloud.
-
-<a name="vm-server"></a>
-
-## Combinations of environments
-
-Microsoft Defender for Cloud supports virtual machines and servers on different types of hybrid environments:
-
-* Only Azure
-* Azure and on-premises
-* Azure and other clouds
-* Azure, other clouds, and on-premises
-
-For an Azure environment activated on an Azure subscription, Microsoft Defender for Cloud will automatically discover IaaS resources that are deployed within the subscription.
-
-## Supported operating systems
-
-Defender for Cloud depends on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md). Ensure your machines are running one of the supported operating systems for this agent as described on the following pages:
-
-* [Log Analytics agent for Windows supported operating systems](../azure-monitor/agents/agents-overview.md#supported-operating-systems)
-* [Log Analytics agent for Linux supported operating systems](../azure-monitor/agents/agents-overview.md#supported-operating-systems)
-
-Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](working-with-log-analytics-agent.md#manual-agent)
-
-To learn more about the specific Defender for Cloud features available on Windows and Linux, see [Feature coverage for machines](supported-machines-endpoint-solutions-clouds-containers.md).
-
-> [!NOTE]
-> Even though **Microsoft Defender for Servers** is designed to protect servers, most of its features are supported for Windows 10 machines. One feature that isn't currently supported is [Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
-
-<a name="virtual-machine"></a>
-
-## Managed virtual machine services
-
-Virtual machines are also created in a customer subscription as part of some Azure-managed services as well, such as Azure Kubernetes (AKS), Azure Databricks, and more. Defender for Cloud discovers these virtual machines too, and the Log Analytics agent can be installed and configured if a supported OS is available.
-
-<a name="cloud-services"></a>
-
-## Cloud Services
-
-Virtual machines that run in a cloud service are also supported. Only cloud services web and worker roles that run in production slots are monitored. To learn more about cloud services, see [Overview of Azure Cloud Services](../cloud-services/cloud-services-choose-me.md).
-
-## Next steps
--- Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md#log-analytics-agent).-- Learn how [Defender for Cloud manages and safeguards data](data-security.md).-- Learn how to [plan and understand the design considerations to adopt Microsoft Defender for Cloud](defender-for-cloud-planning-and-operations-guide.md).
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
The **top menu bar** offers:
The center of the page displays the **feature tiles**, each linking to a high profile feature or dedicated dashboard: - **Security posture** - Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can understand, at a glance, your current security situation: the higher the score, the lower the identified risk level. [Learn more](secure-score-security-controls.md).-- **Workload protections** - This tile is the cloud workload protection platform (CWPP) integrated within Defender for Cloud for advanced, intelligent protection of your workloads running on Azure, on-premises machines, or other cloud providers. For each resource type, there's a corresponding Microsoft Defender plan. The tile shows the coverage of your connected resources (for the currently selected subscriptions) and the recent alerts, color-coded by severity. Learn more about [the enhanced security features](enhanced-security-features-overview.md).
+- **Workload protections** - This is the cloud workload protection platform (CWPP) integrated within Defender for Cloud for advanced, intelligent protection of your workloads running on Azure, on-premises machines, or other cloud providers. For each resource type, there's a corresponding Microsoft Defender plan. The tile shows the coverage of your connected resources (for the currently selected subscriptions) and the recent alerts, color-coded by severity. Learn more about [the Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
- **Regulatory compliance** - Defender for Cloud provides insights into your compliance posture based on continuous assessments of your Azure environment. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md). - **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for Servers, you'll also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
Title: Permissions in Microsoft Defender for Cloud
+ Title: User roles and permissions in Microsoft Defender for Cloud
description: This article explains how Microsoft Defender for Cloud uses role-based access control to assign permissions to users and identify the permitted actions for each role.-+ Last updated 01/24/2023
-# Permissions in Microsoft Defender for Cloud
+# User roles and permissions
-Defender for Cloud uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to provide [built-in roles](../role-based-access-control/built-in-roles.md). You can assign these roles to users, groups, and services in Azure to give users access to resources according to the access defined in the role.
+Microsoft Defender for Cloud uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to provide [built-in roles](../role-based-access-control/built-in-roles.md). You can assign these roles to users, groups, and services in Azure to give users access to resources according to the access defined in the role.
Defender for Cloud assesses the configuration of your resources to identify security issues and vulnerabilities. In Defender for Cloud, you only see information related to a resource when you're assigned one of these roles for the subscription or for the resource group the resource is in: Owner, Contributor, or Reader
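As an example of assigning one of these roles, a sketch with a hypothetical principal and subscription, using the built-in Security Reader role for read-only access to findings:

```azurecli
# Hypothetical principal/subscription: grant read-only access to security findings
az role assignment create \
    --assignee user@contoso.com \
    --role "Security Reader" \
    --scope "/subscriptions/<subscription-id>"
```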
defender-for-cloud Plan Defender For Servers Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-agents.md
You want to configure a custom workspace | Log Analytics agent, Azure Monitor ag
## Next steps
-After you work through these planning steps, learn how to [scale your Defender for Servers deployment](plan-defender-for-servers-scale.md).
+After working through these planning steps, you can start deployment:
+
+- [Enable Defender for Servers](enable-enhanced-security.md) plans
+- [Connect on-premises machines](quickstart-onboard-machines.md) to Azure.
+- [Connect AWS accounts](quickstart-onboard-aws.md) to Defender for Cloud.
+- [Connect GCP projects](quickstart-onboard-gcp.md) to Defender for Cloud.
+- Learn about [scaling your Defender for Servers deployment](plan-defender-for-servers-scale.md).
defender-for-cloud Plan Defender For Servers Data Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-data-workspace.md
You can store your server information in the default workspace or you can use a
- You must have at least read permissions for the workspace. - If the *Security & Audit* solution is installed in a workspace, Defender for Cloud uses the existing solution.
-Learn more about [Log Analytics workspace design strategy and criteria](../azure-monitor/logs/workspace-design.md).
+## Log Analytics pricing FAQ
+
+- [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed)
+- [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice)
+- [If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)
+- [Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine)
+- [What data types are included in the 500-MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
+- [How can I monitor my daily usage?](#how-can-i-monitor-my-daily-usage)
+- [How can I manage my costs?](#how-can-i-manage-my-costs)
+
+### If I enable Defender for Cloud's Servers plan on the subscription level, do I need to enable it on the workspace level?
+
+When you enable the Servers plan on the subscription level, Defender for Cloud will enable the Servers plan on your default workspaces automatically. Connect to the default workspace by selecting the **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** option and then selecting **Apply**.
++
+However, if you're using a custom workspace in place of the default workspace, you'll need to enable the Servers plan on all of your custom workspaces that don't have it enabled.
+
+If you're using a custom workspace and enable the plan on the subscription level only, the `Microsoft Defender for servers should be enabled on workspaces` recommendation will appear on the Recommendations page. This recommendation will give you the option to enable the servers plan on the workspace level with the Fix button. You're charged for all VMs in the subscription even if the Servers plan isn't enabled for the workspace. The VMs won't benefit from features that depend on the Log Analytics workspace, such as Microsoft Defender for Endpoint, VA solution (TVM/Qualys), and Just-in-Time VM access.
+
+Enabling the Servers plan on both the subscription and its connected workspaces won't incur a double charge. The system will identify each unique VM.
+
+If you enable the Servers plan on cross-subscription workspaces, connected VMs from all subscriptions will be billed, including subscriptions that don't have the Servers plan enabled.
+
+### Will I be charged for machines without the Log Analytics agent installed?
+
+Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, you'll be charged for all machines that are connected to your Azure subscription or AWS account. The term *machines* includes Azure virtual machines, Azure Virtual Machine Scale Sets instances, and Azure Arc-enabled servers. Machines that don't have Log Analytics installed are covered by protections that don't depend on the Log Analytics agent.
+
+### If a Log Analytics agent reports to multiple workspaces, will I be charged twice?
+
+If a machine reports to multiple workspaces and all of them have Defender for Servers enabled, the machine will be billed for each attached workspace.
+
+### If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?
+
+Yes. If you configure your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll get 500-MB free data ingestion for each workspace. It's calculated per node, per reported workspace, per day, and available for every workspace that has a 'Security' or 'AntiMalware' solution installed. You'll be charged for any data ingested over the 500-MB limit.
+
+### Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?
+
+You'll get 500-MB free data ingestion per day for every VM connected to the workspace, specifically for the [security data types](#what-data-types-are-included-in-the-500-mb-data-daily-allowance) that are directly collected by Defender for Cloud.
+
+This data is a daily rate averaged across all nodes. Your total daily free limit is equal to **[number of machines] x 500 MB**. So even if some machines send 100 MB and others send 800 MB, if the total doesn't exceed your total daily free limit, you won't be charged extra.
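A quick worked example of that pooled limit, with hypothetical numbers:

```azurecli
# Hypothetical sketch: 10 machines x 500 MB = 5,000 MB of pooled free ingestion per day
machines=10
echo "Pooled daily free limit: $((machines * 500)) MB"
# One machine sending 800 MB and another sending 100 MB still fits, because only
# the workspace-wide total is compared against the pooled limit.
```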
+
+### What data types are included in the 500-MB data daily allowance?
+Defender for Cloud's billing is closely tied to the billing for Log Analytics. [Microsoft Defender for Servers](defender-for-servers-introduction.md) provides a 500 MB/node/day allocation for machines against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
+
+- [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert)
+- [SecurityBaseline](/azure/azure-monitor/reference/tables/securitybaseline)
+- [SecurityBaselineSummary](/azure/azure-monitor/reference/tables/securitybaselinesummary)
+- [SecurityDetection](/azure/azure-monitor/reference/tables/securitydetection)
+- [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent)
+- [WindowsFirewall](/azure/azure-monitor/reference/tables/windowsfirewall)
+- [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent)
+- [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)
+- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled.
+
+If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
+
+### How can I monitor my daily usage?
+
+You can view your data usage in two different ways: in the Azure portal, or by running a script.
+
+**To view your usage in the Azure portal**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Log Analytics workspaces**.
+
+1. Select your workspace.
+
+1. Select **Usage and estimated costs**.
+
+ :::image type="content" source="media/plan-defender-for-servers-data-workspace/data-usage.png" alt-text="Screenshot of your data usage of your log analytics workspace. " lightbox="media/plan-defender-for-servers-data-workspace/data-usage.png":::
+
+You can also view estimated costs under different pricing tiers by selecting :::image type="icon" source="media/plan-defender-for-servers-data-workspace/drop-down-icon.png" border="false"::: for each pricing tier.
++
+**To view your usage by using a script**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Log Analytics workspaces** > **Logs**.
+
+1. Select your time range. Learn about [time ranges](../azure-monitor/logs/log-analytics-tutorial.md).
+
+1. Copy and paste the following query into the **Type your query here** section.
+
+ ```kusto
+ let Unit= 'GB';
+ Usage
+ | where IsBillable == 'TRUE'
+ | where DataType in ('SecurityAlert', 'SecurityBaseline', 'SecurityBaselineSummary', 'SecurityDetection', 'SecurityEvent', 'WindowsFirewall', 'MaliciousIPCommunication', 'SysmonEvent', 'ProtectionStatus', 'Update', 'UpdateSummary')
+ | project TimeGenerated, DataType, Solution, Quantity, QuantityUnit
+ | summarize DataConsumedPerDataType = sum(Quantity)/1024 by DataType, DataUnit = Unit
+ | sort by DataConsumedPerDataType desc
+ ```
+
+1. Select **Run**.
+
+ :::image type="content" source="media/plan-defender-for-servers-data-workspace/select-run.png" alt-text="Screenshot showing where to enter your query and where the select run button is located." lightbox="media/plan-defender-for-servers-data-workspace/select-run.png":::
+
+You can learn how to [Analyze usage in Log Analytics workspace](../azure-monitor/logs/analyze-usage.md).
+
+Based on your usage, you won't be billed until you've used your daily allowance. If you're receiving a bill, it's only for the data used after the 500-MB limit is reached, or for another service that isn't covered by Defender for Cloud.
+
+### How can I manage my costs?
+
+You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. Use [solution targeting](../azure-monitor/insights/solution-targeting.md) to apply a scope to the solution and target a subset of computers in the workspace. If you're using solution targeting, Defender for Cloud lists the workspace as not having a solution.
+> [!IMPORTANT]
+> Solution targeting has been deprecated because the Log Analytics agent is being replaced with the Azure Monitor agent and solutions in Azure Monitor are being replaced with insights. You can continue to use solution targeting if you already have it configured, but it is not available in new regions.
+> The feature will not be supported after August 31, 2024.
+> Regions that support solution targeting until the deprecation date are:
+>
+> | Region code | Region name |
+> | : | :- |
+> | CCAN | canadacentral |
+> | CHN | switzerlandnorth |
+> | CID | centralindia |
+> | CQ | brazilsouth |
+> | CUS | centralus |
+> | DEWC | germanywestcentral |
+> | DXB | UAENorth |
+> | EA | eastasia |
+> | EAU | australiaeast |
+> | EJP | japaneast |
+> | EUS | eastus |
+> | EUS2 | eastus2 |
+> | NCUS | northcentralus |
+> | NEU | NorthEurope |
+> | NOE | norwayeast |
+> | PAR | FranceCentral |
+> | SCUS | southcentralus |
+> | SE | KoreaCentral |
+> | SEA | southeastasia |
+> | SEAU | australiasoutheast |
+> | SUK | uksouth |
+> | WCUS | westcentralus |
+> | WEU | westeurope |
+> | WUS | westus |
+> | WUS2 | westus2 |
+>
+> | Air-gapped clouds | Region code | Region name |
+> | :- | :- | :- |
+> | UsNat | EXE | usnateast |
+> | UsNat | EXW | usnatwest |
+> | UsGov | FF | usgovvirginia |
+> | China | MC | ChinaEast2 |
+> | UsGov | PHX | usgovarizona |
+> | UsSec | RXE | usseceast |
+> | UsSec | RXW | ussecwest |
## Next steps
defender-for-cloud Plan Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers.md
The following diagram shows an overview of the Defender for Servers deployment p
- Learn more about [foundational cloud security posture management (CSPM)](concept-cloud-security-posture-management.md#defender-cspm-plan-options). - Learn more about [Azure Arc](../azure-arc/index.yml) onboarding.
+When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, all of the connected machines will be protected by Defender for Servers. You can enable Microsoft Defender for Servers at the Log Analytics workspace level, but only servers reporting to that workspace will be protected and billed, and those servers won't receive some benefits, such as Microsoft Defender for Endpoint, vulnerability assessment, and just-in-time VM access.
+
+## Defender for Servers pricing FAQ
+
+- [My subscription has Microsoft Defender for Servers enabled, which machines do I pay for?](#my-subscription-has-microsoft-defender-for-servers-enabled-which-machines-do-i-pay-for)
+- [If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)
+
+### My subscription has Microsoft Defender for Servers enabled, which machines do I pay for?
+
+When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, all machines in that subscription (including machines that are part of PaaS services and reside in this subscription) are billed according to their power state as shown in the following table:
+
+| State | Description | Instance usage billed |
+|--|--|--|
+| Starting | VM is starting up. | Not billed |
+| Running | Normal working state for a VM | Billed |
+| Stopping | This state is transitional. When completed, it will show as Stopped. | Billed |
+| Stopped | The VM has been shut down from within the guest OS or using the PowerOff APIs. Hardware is still allocated to the VM and it remains on the host. | Billed |
+| Deallocating | This state is transitional. When completed, the VM will show as Deallocated. | Not billed |
+| Deallocated | The VM has been stopped successfully and removed from the host. | Not billed |
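To see which machines are currently in a billable power state, a sketch using the Azure CLI:

```azurecli
# List VMs with their power states; running, stopping, and stopped states are billed
az vm list --show-details --query "[].{name:name, powerState:powerState}" --output table
```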
++
+### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?
+
+If you already have a license for **Microsoft Defender for Endpoint for Servers Plan 2**, you won't have to pay for that part of your Microsoft Defender for Servers license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
+
+To request your discount, [contact Defender for Cloud's support team](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). You'll need to provide the relevant workspace ID, region, and number of Microsoft Defender for Endpoint for servers licenses applied for machines in the given workspace.
+
+The discount will be effective starting from the approval date, and won't apply retroactively.
+ ## Next steps
-You've begun the Defender for Servers planning process. Review the next article in the planning guide to [understand how your data is stored and the Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md).
+After kicking off the planning process, review the [second article in this planning series](plan-defender-for-servers-data-workspace.md) to understand how your data is stored and to review Log Analytics workspace requirements.
defender-for-cloud Protect Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/protect-network-resources.md
Title: Protecting your network resources in Microsoft Defender for Cloud description: This document addresses recommendations in Microsoft Defender for Cloud that help you protect your Azure network resources and stay in compliance with security policies. Previously updated : 01/24/2023 Last updated : 10/23/2022 - # Protect your network resources Microsoft Defender for Cloud continuously analyzes the security state of your Azure resources for network security best practices. When Defender for Cloud identifies potential security vulnerabilities, it creates recommendations that guide you through the process of configuring the needed controls to harden and protect your resources.
To open the Network map:
The default view of the topology map displays: - Currently selected subscriptions - The map is optimized for the subscriptions you selected in the portal. If you modify your selection, the map is regenerated with the new selections.-- VMs, subnets, and VNets of the Resource Manager resource type ("classic" Azure resources aren't supported)
+- VMs, subnets, and VNets of the Resource Manager resource type ("classic" Azure resources are not supported)
- Peered VNets - Only resources that have [network recommendations](review-security-recommendations.md) with a high or medium severity - Internet-facing resources
In the **Topology** view of the networking map, you can view the following insig
- In the inner circle, you can see all the Vnets within your selected subscriptions, the next circle is all the subnets, the outer circle is all the virtual machines. - The lines connecting the resources in the map let you know which resources are associated with each other, and how your Azure network is structured. - Use the severity indicators to quickly get an overview of which resources have open recommendations from Defender for Cloud.-- You can select any of the resources to drill down into them and view the details of that resource and its recommendations directly, and in the context of the Network map.
+- You can click any of the resources to drill down into them and view the details of that resource and its recommendations directly, and in the context of the Network map.
- If there are too many resources being displayed on the map, Microsoft Defender for Cloud uses its proprietary algorithm to 'smart cluster' your resources, highlighting the ones that are in the most critical state, and have the most high severity recommendations. Because the map is interactive and dynamic, every node is clickable, and the view can change based on the filters:
Because the map is interactive and dynamic, every node is clickable, and the vie
- **Recommendations**: You can select which resources are displayed based on which recommendations are active on those resources. For example, you can view only resources for which Defender for Cloud recommends you enable Network Security Groups. - **Network zones**: By default, the map displays only Internet facing resources, you can select internal VMs as well.
-2. You can select **Reset** in top left corner at any time to return the map to its default state.
+2. You can click **Reset** in top left corner at any time to return the map to its default state.
To drill down into a resource:
To drill down into a resource:
### The Traffic view
-The **Traffic** view provides you with a map of all the possible traffic between your resources. This provides you with a visual map of all the rules you configured that define which resources can communicate with whom. This enables you to see the existing configuration of the network security groups and quickly identify possible risky configurations within your workloads.
+The **Traffic** view provides you with a map of all the possible traffic between your resources. This provides you with a visual map of all the rules you configured that define which resources can communicate with whom. This enables you to see the existing configuration of the network security groups as well as quickly identify possible risky configurations within your workloads.
### Uncover unwanted connections
For example, you might detect two machines that you weren't aware could commun
To drill down into a resource: 1. When you select a specific resource on the map, the right pane opens and gives you general information about the resource, connected security solutions if there are any, and the recommendations relevant to the resource. It's the same type of behavior for each type of resource you select.
-2. Select **Traffic** to see the list of possible outbound and inbound traffic on the resource - this is a comprehensive list of who can communicate with the resource and who it can communicate with, and through which protocols and ports. For example, when you select a VM, all the VMs it can communicate with are shown, and when you select a subnet, all the subnets, which it can communicate with are shown.
+2. Click **Traffic** to see the list of possible outbound and inbound traffic on the resource - this is a comprehensive list of who can communicate with the resource and who it can communicate with, and through which protocols and ports. For example, when you select a VM, all the VMs it can communicate with are shown, and when you select a subnet, all the subnets which it can communicate with are shown.
**This data is based on analysis of the Network Security Groups as well as advanced machine learning algorithms that analyze multiple rules to understand their crossovers and interactions.**
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Applications that are installed in virtual machines could often have vulnerabili
Azure Security Center's support for threat protection and vulnerability assessment for SQL DBs running on IaaS VMs is now in preview.
-[Vulnerability assessment](./sql-azure-vulnerability-assessment-overview.md) is an easy to configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security posture as part of secure score and includes the steps to resolve security issues and enhance your database fortifications.
+[Vulnerability assessment](/azure/azure-sql/database/sql-vulnerability-assessment) is an easy to configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security posture as part of secure score and includes the steps to resolve security issues and enhance your database fortifications.
[Advanced threat protection](/azure/azure-sql/database/threat-detection-overview) detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit your SQL server. It continuously monitors your database for suspicious activities and provides action-oriented security alerts on anomalous database access patterns. These alerts provide the suspicious activity details and recommended actions to investigate and mitigate the threat.
Azure Security Center (ASC) has launched new networking recommendations and impr
One of the biggest attack surfaces for workloads running in the public cloud is connections to and from the public Internet. Our customers find it hard to know which Network Security Group (NSG) rules should be in place to make sure that Azure workloads are only available to required source ranges. With this feature, Security Center learns the network traffic and connectivity patterns of Azure workloads and provides NSG rule recommendations for Internet-facing virtual machines. This helps our customers better configure their network access policies and limit their exposure to attacks.
-[Learn more about adaptive network hardening](adaptive-network-hardening.md).
+[Learn more about adaptive network hardening](adaptive-network-hardening.md).
defender-for-cloud Sql Azure Vulnerability Assessment Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-manage.md
The express configuration is supported in the latest REST API version with the f
| VA settings (GET only is supported for Express Configuration) | User Database | [Database Sql Vulnerability Assessments Settings](/rest/api/sql/2022-05-01-preview/database-sql-vulnerability-assessments-settings) | | VA Settings operations | Server | [Sql Vulnerability Assessments Settings](/rest/api/sql/2022-05-01-preview/sql-vulnerability-assessments-settings)<br>[Sql Vulnerability Assessments](/rest/api/sql/2022-05-01-preview/sql-vulnerability-assessments) |
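As an illustration of the REST surface listed above, the following Python sketch reads the server-level vulnerability assessment settings. The exact resource path and `api-version` are assumptions based on the linked 2022-05-01-preview reference, and the subscription, resource group, and server names are placeholders; it uses the `requests` and `azure-identity` packages.

```python
# Hedged sketch: GET the server-level SQL vulnerability assessment settings
# via the preview ARM REST API referenced above. Verify the resource path
# against the linked reference before relying on it.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, SERVER = "<subscription-id>", "<resource-group>", "<server-name>"  # placeholders

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Sql/servers/{SERVER}"
    "/sqlVulnerabilityAssessmentsSettings/default"  # assumed settings resource
)
resp = requests.get(
    url,
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2022-05-01-preview"},
)
resp.raise_for_status()
print(resp.json())
```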
-### Using Resource Manager templates
+### Using Azure Resource Manager templates
+
+Use the [following ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.sql/sql-logical-server/azuredeploy.json) to create a new Azure SQL logical server with express configuration for SQL vulnerability assessment.
To configure vulnerability assessment baselines by using Azure Resource Manager templates, use the `Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines` type. Make sure that `vulnerabilityAssessments` is enabled before you add baselines.
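As a hedged sketch of the same operation outside a full template deployment, the snippet below issues the REST `PUT` that backs the `baselines` resource type named above. The `api-version` and the shape of `properties` (a `latestScan` flag and a `results` map keyed by rule ID) are assumptions drawn from the preview reference; verify them against the linked API docs before use.

```python
# Hedged sketch: approve the latest scan results as the baseline for one
# database. The resource path and property names are assumptions.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, SERVER, DB = "<sub-id>", "<rg>", "<server>", "<database>"  # placeholders

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Sql/servers/{SERVER}/databases/{DB}"
    "/sqlVulnerabilityAssessments/default/baselines/default"  # assumed path
)
body = {"properties": {"latestScan": True, "results": {}}}  # assumed shape
resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2022-05-01-preview"},
    json=body,
)
resp.raise_for_status()
```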
Old results and baselines settings remain available on your storage account, but
When express configuration is enabled, you don't have direct access to the result and baseline data because it's stored on internal Microsoft storage.
+### Can I set up recurring scans with express configuration?
+
+Express configuration automatically sets up recurring scans for all databases under your server. This is the default and isn't configurable at the server or database level.
+ ### Is there a way with express configuration to get the weekly email report that is provided in the classic configuration? You can use workflow automation and Logic Apps email scheduling, following the Microsoft Defender for Cloud processes:
defender-for-cloud Support Matrix Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md
+
+ Title: Microsoft Defender for Cloud interoperability with Azure services and Azure clouds
+description: Learn about the Azure cloud environments where Defender for Cloud can be used and the Azure services that Defender for Cloud protects.
+++ Last updated : 02/07/2023++
+# Microsoft Defender for Cloud interoperability with Azure services and Azure clouds
+
+This article indicates the Azure clouds and Azure services that are supported by Microsoft Defender for Cloud.
+
+## Security benefits for Azure services
+
+Defender for Cloud provides recommendations, security alerts, and vulnerability assessment for these Azure services:
+
+|Service|[Recommendations](security-policy-concept.md) free with [Foundational CSPM](concept-cloud-security-posture-management.md) |[Security alerts](alerts-overview.md) |Vulnerability assessment|
+|:-|:-:|:-:|:-:|
+|Azure App Service|✔|✔|-|
+|Azure Automation account|✔|-|-|
+|Azure Batch account|✔|-|-|
+|Azure Blob Storage|✔|✔|-|
+|Azure Cache for Redis|✔|-|-|
+|Azure Cloud Services|✔|-|-|
+|Azure Cognitive Search|✔|-|-|
+|Azure Container Registry|✔|✔|[Defender for Containers](defender-for-containers-introduction.md)|
+|Azure Cosmos DB*|✔|✔|-|
+|Azure Data Lake Analytics|✔|-|-|
+|Azure Data Lake Storage|✔|✔|-|
+|Azure Database for MySQL*|-|✔|-|
+|Azure Database for PostgreSQL*|-|✔|-|
+|Azure Event Hubs namespace|✔|-|-|
+|Azure Functions app|✔|-|-|
+|Azure Key Vault|✔|✔|-|
+|Azure Kubernetes Service|✔|✔|-|
+|Azure Load Balancer|✔|-|-|
+|Azure Logic Apps|✔|-|-|
+|Azure SQL Database|✔|✔|[Defender for Azure SQL](defender-for-sql-introduction.md)|
+|Azure SQL Managed Instance|✔|✔|[Defender for Azure SQL](defender-for-sql-introduction.md)|
+|Azure Service Bus namespace|✔|-|-|
+|Azure Service Fabric account|✔|-|-|
+|Azure Storage accounts|✔|✔|-|
+|Azure Stream Analytics|✔|-|-|
+|Azure Subscription|✔ **|✔|-|
+|Azure Virtual Network</br> (incl. subnets, NICs, and network security groups)|✔|-|-|
+
+\* These features are currently supported in preview.
+
+\*\* Azure Active Directory (Azure AD) recommendations are available only for subscriptions with [enhanced security features enabled](enable-enhanced-security.md).
+
+## Features supported in different Azure cloud environments
+
+Microsoft Defender for Cloud is available in the following Azure cloud environments:
+
+| Feature/Service | Azure | Azure Government | Azure China 21Vianet |
+||-|--|--|
+| **Defender for Cloud free features** | | | |
+| - [Continuous export](./continuous-export.md) | GA | GA | GA |
+| - [Workflow automation](./workflow-automation.md) | GA | GA | GA |
+| - [Recommendation exemption rules](./exempt-resource.md) | Public Preview | Not Available | Not Available |
+| - [Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA |
+| - [Email notifications for security alerts](./configure-email-notifications.md) | GA | GA | GA |
+| - [Deployment of agents and extensions](monitoring-components.md) | GA | GA | GA |
+| - [Asset inventory](./asset-inventory.md) | GA | GA | GA |
+| - [Azure Monitor Workbooks reports in Microsoft Defender for Cloud's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA |
+| - [Integration with Microsoft Defender for Cloud Apps](./other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps) | GA | GA | Not Available |
+| **Microsoft Defender plans and extensions** | | | |
+| - [Microsoft Defender for Servers](./defender-for-servers-introduction.md) | GA | GA | GA |
+| - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available |
+| - [Microsoft Defender for DNS](./defender-for-dns-introduction.md) | GA | GA | GA |
+| - [Microsoft Defender for Kubernetes](./defender-for-kubernetes-introduction.md) <sup>[1](#footnote1)</sup> | GA | GA | GA |
+| - [Microsoft Defender for Containers](./defender-for-containers-introduction.md) <sup>[7](#footnote7)</sup> | GA | GA | GA |
+| - [Defender extension for Azure Arc-enabled Kubernetes clusters, servers or data services](./defender-for-kubernetes-azure-arc.md) <sup>[2](#footnote2)</sup> | Public Preview | Not Available | Not Available |
+| - [Microsoft Defender for Azure SQL database servers](./defender-for-sql-introduction.md) | GA | GA | GA <sup>[6](#footnote6)</sup> |
+| - [Microsoft Defender for SQL servers on machines](./defender-for-sql-introduction.md) | GA | GA | Not Available |
+| - [Microsoft Defender for open-source relational databases](./defender-for-databases-introduction.md) | GA | Not Available | Not Available |
+| - [Microsoft Defender for Key Vault](./defender-for-key-vault-introduction.md) | GA | Not Available | Not Available |
+| - [Microsoft Defender for Resource Manager](./defender-for-resource-manager-introduction.md) | GA | GA | GA |
+| - [Microsoft Defender for Storage](./defender-for-storage-introduction.md) <sup>[3](#footnote3)</sup> | GA | GA | Not Available |
+| - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Public Preview | Not Available | Not Available |
+| - [Kubernetes workload protection](./kubernetes-workload-protections.md) | GA | GA | GA |
+| - [Bi-directional alert synchronization with Sentinel](../sentinel/connect-azure-security-center.md) | Public Preview | Not Available | Not Available |
+| **Microsoft Defender for Servers features** <sup>[4](#footnote4)</sup> | | | |
+| - [Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA |
+| - [File Integrity Monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA |
+| - [Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA |
+| - [Adaptive network hardening](./adaptive-network-hardening.md) | GA | GA | Not Available |
+| - [Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA |
+| - [Integrated Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md) | GA | Not Available | Not Available |
+| - [Regulatory compliance dashboard & reports](./regulatory-compliance-dashboard.md) <sup>[5](#footnote5)</sup> | GA | GA | GA |
+| - [Microsoft Defender for Endpoint deployment and integrated license](./integration-defender-for-endpoint.md) | GA | GA | Not Available |
+| - [Connect AWS account](./quickstart-onboard-aws.md) | GA | Not Available | Not Available |
+| - [Connect GCP project](./quickstart-onboard-gcp.md) | GA | Not Available | Not Available |
+
+<sup><a name="footnote1"></a>1</sup> Partially GA: Support for Azure Arc-enabled clusters is in public preview and not available on Azure Government.
+
+<sup><a name="footnote2"></a>2</sup> Requires Microsoft Defender for Kubernetes or Microsoft Defender for Containers.
+
+<sup><a name="footnote3"></a>3</sup> Partially GA: Some of the threat protection alerts from Microsoft Defender for Storage are in public preview.
+
+<sup><a name="footnote4"></a>4</sup> These features all require [Microsoft Defender for Servers](./defender-for-servers-introduction.md).
+
+<sup><a name="footnote5"></a>5</sup> There may be differences in the standards offered per cloud type.
+
+<sup><a name="footnote6"></a>6</sup> Partially GA: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.
+
+<sup><a name="footnote7"></a>7</sup> Partially GA: Support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is in public preview and not available on Azure Government. Run-time visibility of vulnerabilities in container images is also a preview feature.
+
+## Next steps
+
+This article explained how Microsoft Defender for Cloud is supported in the Azure, Azure Government, and Azure China 21Vianet clouds. Now that you're familiar with the Defender for Cloud capabilities supported in your cloud, learn how to:
+
+- [Manage security recommendations in Defender for Cloud](review-security-recommendations.md)
+- [Manage and respond to security alerts in Defender for Cloud](managing-and-responding-alerts.md)
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
+
+ Title: Matrices of Defender for Containers features in Azure, multicloud, and on-premises environments
+description: Learn about the container and Kubernetes services that you can protect with Defender for Containers.
+++ Last updated : 01/01/2023+++
+# Defender for Containers feature availability
+
+These tables show the features that are available, by environment, for Microsoft Defender for Containers. For more information about Defender for Containers, see [Microsoft Defender for Containers](defender-for-containers-introduction.md).
+
+## Azure (AKS)
+
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
+|--|--|--|--|--|--|--|--|
+| Compliance | Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Vulnerability Assessment | View vulnerabilities for running images | AKS | GA | Preview | Defender profile | Defender for Containers | Commercial clouds |
+| Hardening | Control plane recommendations | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Hardening | Kubernetes data plane recommendations | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime protection| Threat detection (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime protection| Threat detection (workload) | AKS | GA | - | Defender profile | Defender for Containers | Commercial clouds |
+| Discovery and provisioning | Discovery of unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Collection of control plane threat data | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Defender profile | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+<sup><a name="footnote2"></a>2</sup> VA can detect vulnerabilities for these [OS packages](#registries-and-images).
+
+<sup><a name="footnote3"></a>3</sup> VA can detect vulnerabilities for these [language specific packages](#registries-and-images).
+
+### Additional environment information
+
+#### Registries and images
+
+| Aspect | Details |
+|--|--|
+| Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) <br> • Providing image tag information for [multi-architecture images](https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/) is currently unsupported|
+| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
+| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • Java <br> • Go |
+
+#### Kubernetes distributions and configurations
+
+| Aspect | Details |
+|--|--|
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+
+<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
+
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+
+> [!NOTE]
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+
+#### Network restrictions
+
+##### Private link
+
+Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
++
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
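The portal steps above correspond to the workspace's public network access flags, which can also be set from code. The sketch below assumes the `azure-identity` and `azure-mgmt-loganalytics` packages and that the property name `public_network_access_for_ingestion` matches your installed SDK version; the subscription, resource group, and workspace names are placeholders.

```python
# Hedged sketch: disable public ingestion so only traffic over Azure Monitor
# Private Link reaches the workspace. As noted above, this can leave
# Defender for Containers with only partial coverage.
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

SUB, RG, WORKSPACE = "<sub-id>", "<rg>", "<workspace-name>"  # placeholders

client = LogAnalyticsManagementClient(DefaultAzureCredential(), SUB)
ws = client.workspaces.get(RG, WORKSPACE)
ws.public_network_access_for_ingestion = "Disabled"  # assumed property name
client.workspaces.begin_create_or_update(RG, WORKSPACE, ws).result()
```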
+
+## AWS (EKS)
+
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
+|--|--| -- | -- | -- | -- | --|
+| Compliance | Docker CIS | EC2 | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
+| Vulnerability Assessment | Registry scan | ECR | Preview | - | Agentless | Defender for Containers |
+| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
+| Hardening | Control plane recommendations | - | - | - | - | - |
+| Hardening | Kubernetes data plane recommendations | EKS | Preview | - | Azure Policy extension | Defender for Containers |
+| Runtime protection| Threat detection (control plane)| EKS | Preview | Preview | Agentless | Defender for Containers |
+| Runtime protection| Threat detection (workload) | EKS | Preview | - | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | EKS | Preview | - | Agentless | Free |
+| Discovery and provisioning | Collection of control plane threat data | EKS | Preview | Preview | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | - | - | - | - | - |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### Additional environment information
+
+#### Images
+
+| Aspect | Details |
+|--|--|
+| Registries and images | **Unsupported** <br>• Images that have at least one layer over 2 GB<br> • Public repositories and manifest lists <br>• Images in the AWS management account aren't scanned so that we don't create resources in the management account. |
+
+#### Kubernetes distributions and configurations
+
+| Aspect | Details |
+|--|--|
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+
+<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
+
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+
+> [!NOTE]
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+
+#### Network restrictions
+
+##### Private link
+
+Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
++
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+
+##### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+
+## GCP (GKE)
+
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
+|--|--| -- | -- | -- | -- | --|
+| Compliance | Docker CIS | GCP VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
+| Vulnerability Assessment | Registry scan | - | - | - | - | - |
+| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
+| Hardening | Control plane recommendations | - | - | - | - | - |
+| Hardening | Kubernetes data plane recommendations | GKE | Preview | - | Azure Policy extension | Defender for Containers |
+| Runtime protection| Threat detection (control plane)| GKE | Preview | Preview | Agentless | Defender for Containers |
+| Runtime protection| Threat detection (workload) | GKE | Preview | - | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | GKE | Preview | - | Agentless | Free |
+| Discovery and provisioning | Collection of control plane threat data | GKE | Preview | Preview | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | - | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | - | Agentless | Defender for Containers |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### Additional information
+
+#### Kubernetes distributions and configurations
+
+| Aspect | Details |
+|--|--|
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+
+<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
+
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+
+> [!NOTE]
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+
+#### Network restrictions
+
+##### Private link
+
+Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
++
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+
+##### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+
+## On-premises Arc-enabled machines
+
+| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
+|--|--| -- | -- | -- | -- | --|
+| Compliance | Docker CIS | Arc enabled VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
+| Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers |
+| Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers |
+| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
+| Hardening | Control plane recommendations | - | - | - | - | - |
+| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | - | Azure Policy extension | Defender for Containers |
+| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
+| Runtime protection <sup>[4](#footnote4)</sup> | Threat detection (workload)| Arc enabled K8s clusters | Preview | - | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free |
+| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | Preview | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | - | Agentless | Defender for Containers |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+<sup><a name="footnote2"></a>2</sup> VA can detect vulnerabilities for these [OS packages](#registries-and-images-1).
+
+<sup><a name="footnote3"></a>3</sup> VA can detect vulnerabilities for these [language specific packages](#registries-and-images-1).
+
+<sup><a name="footnote4"></a>4</sup> Runtime protection can detect threats for these [Supported host operating systems](#supported-host-operating-systems).
++
+### Additional information
+
+#### Registries and images
+
+| Aspect | Details |
+|--|--|
+| Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) <br> • Providing image tag information for [multi-architecture images](https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/) is currently unsupported |
+| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
+| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • Java <br> • Go |
+
+#### Kubernetes distributions and configurations
+
+| Aspect | Details |
+|--|--|
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+
+<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
+
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+
+> [!NOTE]
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+
+#### Supported host operating systems
+
+Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
+
+- Amazon Linux 2
+- CentOS 8
+- Debian 10
+- Debian 11
+- Google Container-Optimized OS
+- Mariner 1.0
+- Mariner 2.0
+- Red Hat Enterprise Linux 8
+- Ubuntu 16.04
+- Ubuntu 18.04
+- Ubuntu 20.04
+- Ubuntu 22.04
+
+Ensure that your Kubernetes nodes run one of the verified supported operating systems. Clusters with a different host operating system get only partial coverage.
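To spot-check this before enabling the extension, a short sketch with the official `kubernetes` Python client can compare each node's reported OS image against the list above. The substring matching here is a simplification, not an exact version check.

```python
# Sketch: report whether each cluster node runs one of the host operating
# systems listed above. Uses your current kubeconfig context.
from kubernetes import client, config

SUPPORTED = (
    "Amazon Linux 2", "CentOS 8", "Debian 10", "Debian 11",
    "Container-Optimized OS", "Mariner", "Red Hat Enterprise Linux 8",
    "Ubuntu 16.04", "Ubuntu 18.04", "Ubuntu 20.04", "Ubuntu 22.04",
)

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    os_image = node.status.node_info.os_image
    ok = any(s in os_image for s in SUPPORTED)
    print(f"{node.metadata.name}: {os_image} -> {'supported' if ok else 'partial coverage'}")
```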
+
+#### Network restrictions
+
+##### Private link
+
+Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
++
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+
+##### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+
+## Next steps
+
+- Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md).
+- Learn how [Defender for Cloud manages and safeguards data](data-security.md).
+- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-cloud Support Matrix Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-servers.md
+
+ Title: Matrices of Defender for Servers features in foundational CSPM, Azure Arc, multicloud, and endpoint protection solutions
+description: Learn about the environments where you can protect servers and virtual machines with Defender for Servers.
+++ Last updated : 01/01/2023++
+# Support matrices for Defender for Servers
+
+This article provides information about the environments where you can protect servers and virtual machines with Defender for Servers and the endpoint protections that you can use to protect them.
+
+## Supported features for virtual machines and servers<a name="vm-server-features"></a>
+
+The following tables show the features that are supported for virtual machines and servers in Azure, Azure Arc, and other clouds.
+
+- [Windows machines](#windows-machines)
+- [Linux machines](#linux-machines)
+- [Multicloud machines](#multicloud-machines)
+
+### Windows machines
+
+| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+|--|::|::|::|::|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔</br>(on supported versions) | ✔</br>(on supported versions) | ✔ | Yes |
+| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | ✔ | Yes |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ | ✔ | Yes |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | ✔ | - | Yes |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | - | Yes |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | - | ✔ | Yes |
+| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | - | ✔ | Yes |
+| [Network map](protect-network-resources.md#network-map) | ✔ | ✔ | - | Yes |
+| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | - | Yes |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | ✔ | Yes |
+| [Docker host hardening](./harden-docker-hosts.md) | - | - | - | Yes |
+| Missing OS patches assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Security misconfigurations assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔ | - | No |
+| Third-party vulnerability assessment | ✔ | - | ✔ | No |
+| [Network security assessment](protect-network-resources.md) | ✔ | ✔ | - | No |
++
+### Linux machines
+
+| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+|--|::|::|::|::|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | - | ✔ | Yes |
+| [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | ✔</br>(on supported versions) | Γ£&#