Updates from: 04/09/2021 03:08:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Analytics With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/analytics-with-application-insights.md
To fit your business needs, you might want to record more claims. To add a claim
### Manipulate claims
-You can use [input claims transformations](custom-policy-trust-frameworks.md#manipulating-your-claims) to modify the input claims or generate new ones before sending them to Application Insights. In the following example, the technical profile includes the `CheckIsAdmin` input claims transformation.
+You can use [input claims transformations](custom-policy-overview.md#manipulating-your-claims) to modify the input claims or generate new ones before sending them to Application Insights. In the following example, the technical profile includes the `CheckIsAdmin` input claims transformation.
```xml
<TechnicalProfile Id="AppInsights-SignInComplete">
```
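A sketch of how this excerpted profile typically continues, wiring the transformation in before the event is sent (a minimal sketch; the `isAdmin` claim name and property mapping are illustrative, following the Application Insights technical profile pattern):

```xml
<TechnicalProfile Id="AppInsights-SignInComplete">
  <InputClaimsTransformations>
    <!-- Runs before the event is sent, generating the new claim -->
    <InputClaimsTransformation ReferenceId="CheckIsAdmin" />
  </InputClaimsTransformations>
  <InputClaims>
    <!-- Records the generated claim as a custom event property -->
    <InputClaim ClaimTypeReferenceId="isAdmin" PartnerClaimType="{property:IsAdmin}" />
  </InputClaims>
  <IncludeTechnicalProfile ReferenceId="AppInsights-Common" />
</TechnicalProfile>
```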
You can use [input claims transformations](custom-policy-trust-frameworks.md#man
### Add events
-To add an event, create a new technical profile that includes the `AppInsights-Common` technical profile. Then add the new technical profile as an orchestration step to the [user journey](custom-policy-trust-frameworks.md#orchestration-steps). Use the [Precondition](userjourneys.md#preconditions) element to trigger the event when you're ready. For example, report the event only when users run through multifactor authentication.
+To add an event, create a new technical profile that includes the `AppInsights-Common` technical profile. Then add the new technical profile as an orchestration step to the [user journey](custom-policy-overview.md#orchestration-steps). Use the [Precondition](userjourneys.md#preconditions) element to trigger the event when you're ready. For example, report the event only when users run through multifactor authentication.
```xml
<TechnicalProfile Id="AppInsights-MFA-Completed">
```
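A sketch of how such an event profile can look; it sets only an event name and includes the common profile (a minimal sketch; the claim and event names are illustrative):

```xml
<TechnicalProfile Id="AppInsights-MFA-Completed">
  <InputClaims>
    <!-- The event name reported to Application Insights -->
    <InputClaim ClaimTypeReferenceId="EventType" PartnerClaimType="eventName" DefaultValue="MFA-Completed" />
  </InputClaims>
  <IncludeTechnicalProfile ReferenceId="AppInsights-Common" />
</TechnicalProfile>
```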
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/best-practices.md
The following best practices and recommendations cover some of the primary aspec
| Best practice | Description |
|--|--|
-| Choose user flows for most scenarios | The Identity Experience Framework of Azure AD B2C is the core strength of the service. Policies fully describe identity experiences such as sign-up, sign-in, or profile editing. To help you set up the most common identity tasks, the Azure AD B2C portal includes predefined, configurable policies called user flows. With user flows, you can create great user experiences in minutes, with just a few clicks. [Learn when to use user flows vs. custom policies](custom-policy-overview.md#comparing-user-flows-and-custom-policies).|
+| Choose user flows for most scenarios | The Identity Experience Framework of Azure AD B2C is the core strength of the service. Policies fully describe identity experiences such as sign-up, sign-in, or profile editing. To help you set up the most common identity tasks, the Azure AD B2C portal includes predefined, configurable policies called user flows. With user flows, you can create great user experiences in minutes, with just a few clicks. [Learn when to use user flows vs. custom policies](user-flow-overview.md#comparing-user-flows-and-custom-policies).|
| App registrations | Every application (web, native) and API that is being secured must be registered in Azure AD B2C. If an app has both a web and native version of iOS and Android, you can register them as one application in Azure AD B2C with the same client ID. Learn how to [register OIDC, SAML, web, and native apps](./tutorial-register-applications.md?tabs=applications). Learn more about [application types that can be used in Azure AD B2C](./application-types.md). |
| Move to monthly active users billing | Azure AD B2C has moved from monthly active authentications to monthly active users (MAU) billing. Most customers will find this model cost-effective. [Learn more about monthly active users billing](https://azure.microsoft.com/updates/mau-billing/). |
active-directory-b2c Custom Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-overview.md
Title: Azure Active Directory B2C custom policies | Microsoft Docs
-description: Learn about Azure Active Directory B2C custom policies.
+ Title: Azure Active Directory B2C custom policy overview | Microsoft Docs
+description: A topic about Azure Active Directory B2C custom policies and the Identity Experience Framework.
- Previously updated : 06/06/2019
+ Last updated : 04/08/2021
-# Custom policies in Azure Active Directory B2C
+# Azure AD B2C custom policy overview
+Custom policies are configuration files that define the behavior of your Azure Active Directory B2C (Azure AD B2C) tenant. While [user flows](user-flow-overview.md) are predefined in the Azure AD B2C portal for the most common identity tasks, custom policies can be fully edited by an identity developer to complete many different tasks.
-Custom policies are configuration files that define the behavior of your Azure Active Directory B2C (Azure AD B2C) tenant. User flows are predefined in the Azure AD B2C portal for the most common identity tasks. Custom policies can be fully edited by an identity developer to complete many different tasks.
+A custom policy is fully configurable and policy-driven. It orchestrates trust between entities in standard protocols such as OpenID Connect, OAuth, and SAML, as well as a few non-standard ones, such as REST API-based system-to-system claims exchanges. The framework creates user-friendly, white-labeled experiences.
-## Comparing user flows and custom policies
+A custom policy is represented as one or more XML-formatted files, which refer to each other in a hierarchical chain. The XML elements define the building blocks, the interaction with the user and other parties, and the business logic.
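As an illustration of that hierarchy, a skeleton of a single policy file (a sketch only; the tenant and policy names are placeholders):

```xml
<TrustFrameworkPolicy xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
    PolicySchemaVersion="0.3.0.0"
    TenantId="yourtenant.onmicrosoft.com"
    PolicyId="B2C_1A_TrustFrameworkExtensions"
    PublicPolicyUri="http://yourtenant.onmicrosoft.com/B2C_1A_TrustFrameworkExtensions">
  <BuildingBlocks>
    <!-- Claims schema, claims transformations, and content definitions -->
  </BuildingBlocks>
  <ClaimsProviders>
    <!-- Technical profiles, grouped by the party they talk to -->
  </ClaimsProviders>
  <UserJourneys>
    <!-- Orchestration steps that implement the business logic -->
  </UserJourneys>
</TrustFrameworkPolicy>
```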
-| Context | User flows | Custom policies |
-|-|-|--|
-| Target users | All application developers with or without identity expertise. | Identity pros, systems integrators, consultants, and in-house identity teams. They are comfortable with OpenID Connect flows and understand identity providers and claims-based authentication. |
-| Configuration method | Azure portal with a user-friendly user-interface (UI). | Directly editing XML files and then uploading to the Azure portal. |
-| UI customization | Full UI customization including HTML, CSS and JavaScript.<br><br>Multilanguage support with Custom strings. | Same |
-| Attribute customization | Standard and custom attributes. | Same |
-| Token and session management | Custom token and multiple session options. | Same |
-| Identity Providers | Predefined local or social provider and most OIDC identity providers, such as federation with Azure Active Directory tenants. | Standards-based OIDC, OAUTH, and SAML. Authentication is also possible by using integration with REST APIs. |
-| Identity Tasks | Sign-up or sign-in with local or many social accounts.<br><br>Self-service password reset.<br><br>Profile edit.<br><br>Multi-Factor Authentication.<br><br>Customize tokens and sessions.<br><br>Access token flows. | Complete the same tasks as user flows using custom identity providers or use custom scopes.<br><br>Provision a user account in another system at the time of registration.<br><br>Send a welcome email using your own email service provider.<br><br>Use a user store outside Azure AD B2C.<br><br>Validate user provided information with a trusted system by using an API. |
+## Custom policy starter pack
-## Policy files
+The Azure AD B2C custom policy [starter pack](custom-policy-get-started.md#get-the-starter-pack) comes with several pre-built policies to get you going quickly. Each of these starter packs contains the smallest number of technical profiles and user journeys needed to achieve the scenarios described:
-These three types of policy files are used:
+- **LocalAccounts** - Enables the use of local accounts only.
+- **SocialAccounts** - Enables the use of social (or federated) accounts only.
+- **SocialAndLocalAccounts** - Enables the use of both local and social accounts. Most of our samples refer to this policy.
+- **SocialAndLocalAccountsWithMFA** - Enables social, local, and multi-factor authentication options.
-- **Base file** - contains most of the definitions. It is recommended that you make a minimum number of changes to this file to help with troubleshooting, and long-term maintenance of your policies.
-- **Extensions file** - holds the unique configuration changes for your tenant.
-- **Relying Party (RP) file** - The single task-focused file that is invoked directly by the application or service (also, known as a Relying Party). Each unique task requires its own RP and depending on branding requirements, the number might be "total of applications x total number of use cases."
+## Understanding the basics
-User flows in Azure AD B2C follow the file pattern depicted above, but the developer only sees the RP file, while the Azure portal makes changes in the background to the extensions file.
+### Claims
-Although there are three types of policy files, you aren't restricted to only three files. You may have multiple files of each file type. For example, if you don't want to make changes to your Extensions file, you can create an Extensions2 file to further extend the Extensions file.
+A claim provides temporary storage of data during an Azure AD B2C policy execution. It can store information about the user, such as first name, last name, or any other claim obtained from the user or other systems (claims exchanges). The [claims schema](claimsschema.md) is the place where you declare your claims.
-## Custom policy core concepts
+When the policy runs, Azure AD B2C sends and receives claims to and from internal and external parties and then sends a subset of these claims to your relying party application as part of the token. Claims are used in these ways:
-The customer identity and access management (CIAM) service in Azure includes:
+- A claim is saved, read, or updated against the directory user object.
+- A claim is received from an external identity provider.
+- Claims are sent or received using a custom REST API service.
+- Data is collected as claims from the user during the sign-up or edit profile flows.
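To make this concrete, a claim declaration sketch in the claims schema (the `city` claim is an illustrative example):

```xml
<ClaimsSchema>
  <ClaimType Id="city">
    <DisplayName>City</DisplayName>
    <DataType>string</DataType>
    <!-- Shown when the claim is collected from the user -->
    <UserHelpText>Your city of residence</UserHelpText>
    <UserInputType>TextBox</UserInputType>
  </ClaimType>
</ClaimsSchema>
```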
-- A user directory that is accessible by using Microsoft Graph and which holds user data for both local accounts and federated accounts.
-- Access to the **Identity Experience Framework** that orchestrates trust between users and entities and passes claims between them to complete an identity or access management task.
-- A security token service (STS) that issues ID tokens, refresh tokens, and access tokens (and equivalent SAML assertions) and validates them to protect resources.
+### Manipulating your claims
-Azure AD B2C interacts with identity providers, users, other systems, and with the local user directory in sequence to achieve an identity task. For example, sign in a user, register a new user, or reset a password. The Identity Experience Framework and a policy (also called a user journey or a trust framework policy) establishes multi-party trust and explicitly defines the actors, the actions, the protocols, and the sequence of steps to complete.
+[Claims transformations](claimstransformations.md) are predefined functions that convert a given claim into another one, evaluate a claim, or set a claim value. For example, you can add an item to a string collection, change the case of a string, or evaluate a date and time claim. A claims transformation specifies a transform method.
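For example, a `ChangeCase` transformation that lower-cases an email claim (a minimal sketch; the claim name is illustrative):

```xml
<ClaimsTransformation Id="LowerCaseEmail" TransformationMethod="ChangeCase">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="inputClaim" />
  </InputClaims>
  <InputParameters>
    <!-- Accepts toLower or toUpper -->
    <InputParameter Id="toCase" DataType="string" Value="toLower" />
  </InputParameters>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="email" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```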
-The Identity Experience Framework is a fully configurable, policy-driven, cloud-based Azure platform that orchestrates trust between entities in standard protocol formats such as OpenID Connect, OAuth, SAML, and a few non-standard ones, for example REST API-based system-to-system claims exchanges. The framework creates user-friendly, white-labeled experiences that support HTML and CSS.
+### Customize and localize your UI
-A custom policy is represented as one or several XML-formatted files that refer to each other in a hierarchical chain. The XML elements define the claims schema, claims transformations, content definitions, claims providers, technical profiles, and user journey orchestration steps, among other elements. A custom policy is accessible as one or several XML files that are executed by the Identity Experience Framework when invoked by a relying party. Developers configuring custom policies must define the trusted relationships in careful detail to include metadata endpoints, exact claims exchange definitions, and configure secrets, keys, and certificates as needed by each identity provider.
+When you'd like to collect information from your users by presenting a page in their web browser, use the [self-asserted technical profile](self-asserted-technical-profile.md). You can edit your self-asserted technical profile to [add claims and customize user input](./configure-user-input.md).
-### Inheritance model
+To [customize the user interface](customize-ui-with-html.md) for your self-asserted technical profile, you specify a URL in the [content definition](contentdefinitions.md) element with customized HTML content. In the self-asserted technical profile, you point to this content definition ID.
-When an application calls the RP policy file, the Identity Experience Framework in Azure AD B2C adds all of the elements from base file, from the extensions file, and then from the RP policy file to assemble the current policy in effect. Elements of the same type and name in the RP file will override those in the extensions, and extensions overrides base.
+To customize language-specific strings, use the [localization](localization.md) element. A content definition may contain a [localization](localization.md) reference that specifies a list of localized resources to load. Azure AD B2C merges user interface elements with the HTML content that's loaded from your URL and then displays the page to the user.
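A content definition sketch that loads custom HTML and references localized resources (the URL and IDs are placeholders):

```xml
<ContentDefinition Id="api.selfasserted">
  <!-- Your customized HTML page, served over HTTPS with CORS enabled -->
  <LoadUri>https://your-storage-account.blob.core.windows.net/b2c/selfasserted.html</LoadUri>
  <RecoveryUri>~/common/default_page_error.html</RecoveryUri>
  <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
  <LocalizedResourcesReferences MergeBehavior="Prepend">
    <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.selfasserted.en" />
  </LocalizedResourcesReferences>
</ContentDefinition>
```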
+## Relying party policy overview
+
+A relying party application, which in the SAML protocol is known as a service provider, calls the [relying party policy](relyingparty.md) to execute a specific user journey. The relying party policy specifies the user journey to be executed, and the list of claims that the token includes.
+
+![Diagram showing the policy execution flow](./media/custom-policy-overview/custom-policy-execution.png)
+
+All relying party applications that use the same policy will receive the same token claims, and the user goes through the same user journey.
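A relying party policy sketch that selects the journey and shapes the token, modeled on the starter pack (the claim names are illustrative):

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
  <TechnicalProfile Id="PolicyProfile">
    <DisplayName>PolicyProfile</DisplayName>
    <Protocol Name="OpenIdConnect" />
    <OutputClaims>
      <!-- The claims the token includes -->
      <OutputClaim ClaimTypeReferenceId="displayName" />
      <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
    </OutputClaims>
    <SubjectNamingInfo ClaimType="sub" />
  </TechnicalProfile>
</RelyingParty>
```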
+
+### User journeys
+
+[User journeys](userjourneys.md) let you define the business logic of the path a user follows to gain access to your application. The user is taken through the user journey to retrieve the claims that are presented to your application. A user journey is built from a sequence of [orchestration steps](userjourneys.md#orchestrationsteps). A user must reach the last step to acquire a token.
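A user journey skeleton, condensed from the shape of the starter-pack journeys (real journeys have more steps; the referenced technical profile IDs follow the starter pack):

```xml
<UserJourney Id="SignUpOrSignIn">
  <OrchestrationSteps>
    <!-- Step 1: present the combined sign-in and sign-up page -->
    <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
      <ClaimsProviderSelections>
        <ClaimsProviderSelection ValidationClaimsExchangeId="LocalAccountSigninEmailExchange" />
      </ClaimsProviderSelections>
      <ClaimsExchanges>
        <ClaimsExchange Id="LocalAccountSigninEmailExchange" TechnicalProfileReferenceId="SelfAsserted-LocalAccountSignin-Email" />
      </ClaimsExchanges>
    </OrchestrationStep>
    <!-- Step 2: read more claims about the user from the directory -->
    <OrchestrationStep Order="2" Type="ClaimsExchange">
      <ClaimsExchanges>
        <ClaimsExchange Id="AADUserReadWithObjectId" TechnicalProfileReferenceId="AAD-UserReadUsingObjectId" />
      </ClaimsExchanges>
    </OrchestrationStep>
    <!-- Last step: issue the token -->
    <OrchestrationStep Order="3" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
  </OrchestrationSteps>
</UserJourney>
```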
+
+The following instructions describe how you can add orchestration steps to the [social and local account starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/SocialAndLocalAccounts) policy. Here's an example of a REST API call that has been added.
+
+![customized user journey](media/custom-policy-overview/user-journey-flow.png)
++
+### Orchestration steps
+
+An orchestration step references a method that implements its intended purpose or functionality. This method is called a [technical profile](technicalprofiles.md). When your user journey needs branching to better represent the business logic, the orchestration step can reference a [sub journey](subjourneys.md). A sub journey contains its own set of orchestration steps.
+
+A user must reach the last orchestration step in the user journey to acquire a token. But users may not need to travel through all of the orchestration steps. Orchestration steps can be conditionally executed based on [preconditions](userjourneys.md#preconditions) defined in the orchestration step.
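For example, a precondition that skips a step when a claim already exists, following the starter-pack pattern (the step order and exchange names are illustrative):

```xml
<OrchestrationStep Order="2" Type="ClaimsExchange">
  <Preconditions>
    <!-- If objectId exists, the user already signed in, so skip this step -->
    <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
      <Value>objectId</Value>
      <Action>SkipThisOrchestrationStep</Action>
    </Precondition>
  </Preconditions>
  <ClaimsExchanges>
    <ClaimsExchange Id="SelfAssertedSocial" TechnicalProfileReferenceId="SelfAsserted-Social" />
  </ClaimsExchanges>
</OrchestrationStep>
```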
+
+After an orchestration step completes, Azure AD B2C stores the output claims in the **claims bag**. The claims in the claims bag can be used by any later orchestration step in the user journey.
+
+The following diagram shows how the user journey's orchestration steps can access the claims bag.
+
+![Azure AD B2C user journey](media/custom-policy-overview/user-journey-diagram.png)
+
+### Technical profile
+
+A technical profile provides an interface to communicate with different types of parties. A user journey invokes technical profiles through orchestration steps to define your business logic.
+
+All types of technical profiles share the same concept: you send input claims, run claims transformations, and communicate with the configured party. After the process completes, the technical profile returns the output claims to the claims bag. For more information, see the [technical profiles overview](technicalprofiles.md).
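For example, a REST API technical profile sketch showing the pattern of input claims in, a call to the configured party, and output claims back to the claims bag (the service URL and claim names are placeholders):

```xml
<TechnicalProfile Id="REST-ValidateProfile">
  <DisplayName>Validate the user profile via a REST API</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <Item Key="ServiceUrl">https://your-api.example.com/validate</Item>
    <Item Key="SendClaimsIn">Body</Item>
    <Item Key="AuthenticationType">None</Item>
  </Metadata>
  <InputClaims>
    <!-- Sent to the configured party -->
    <InputClaim ClaimTypeReferenceId="email" />
  </InputClaims>
  <OutputClaims>
    <!-- Returned to the claims bag -->
    <OutputClaim ClaimTypeReferenceId="city" />
  </OutputClaims>
</TechnicalProfile>
```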
+
+### Validation technical profile
+
+When a user interacts with the user interface, you may want to validate the data that is collected. To interact with the user, a [self-asserted technical profile](self-asserted-technical-profile.md) must be used.
+
+To validate the user input, a [validation technical profile](validation-technical-profile.md) is called from the self-asserted technical profile. A validation technical profile is a way to call any non-interactive technical profile, which can return output claims or an error message. The error message is rendered to the user on screen, allowing the user to retry.
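In the starter pack, for example, the local account sign-in page validates credentials through a non-interactive profile; a condensed sketch:

```xml
<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
  <DisplayName>Local Account Signin</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <OutputClaims>
    <!-- Collected from the user on the page -->
    <OutputClaim ClaimTypeReferenceId="signInName" Required="true" />
    <OutputClaim ClaimTypeReferenceId="password" Required="true" />
  </OutputClaims>
  <ValidationTechnicalProfiles>
    <!-- Checks the credentials; on failure, its error message is shown on the page -->
    <ValidationTechnicalProfile ReferenceId="login-NonInteractive" />
  </ValidationTechnicalProfiles>
</TechnicalProfile>
```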
+
+The following diagram illustrates how Azure AD B2C uses a validation technical profile to validate the user credentials.
+
+![Validation technical profile diagram](media/custom-policy-overview/validation-technical-profile.png)
+
+## Inheritance model
+
+Each starter pack includes the following files:
+
+- A **Base** file that contains most of the definitions. To help with troubleshooting and long-term maintenance of your policies, try to minimize the number of changes you make to this file.
+- An **Extensions** file that holds the unique configuration changes for your tenant. This policy file is derived from the Base file. Use this file to add new functionality or override existing functionality. For example, use this file to federate with new identity providers.
+- A **Relying Party (RP)** file that is the single task-focused file that is invoked directly by the relying party application, such as your web, mobile, or desktop applications. Each unique task, such as sign-up, sign-in, password reset, or profile edit, requires its own relying party policy file. This policy file is derived from the extensions file.
+
+The inheritance model is as follows:
+
+- The child policy at any level can inherit from the parent policy and extend it by adding new elements (see the sketch after this list).
+- For more complex scenarios, you can add more inheritance levels (up to 10 in total).
+- You can add more relying party policies, for example: delete my account, change a phone number, a SAML relying party policy, and more.
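Concretely, each child file names its parent in a `BasePolicy` element. A relying party file deriving from the extensions file might look like this sketch (the tenant name is a placeholder):

```xml
<TrustFrameworkPolicy xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
    PolicySchemaVersion="0.3.0.0"
    TenantId="yourtenant.onmicrosoft.com"
    PolicyId="B2C_1A_signup_signin"
    PublicPolicyUri="http://yourtenant.onmicrosoft.com/B2C_1A_signup_signin">
  <!-- The parent policy; the extensions file itself derives from the base file -->
  <BasePolicy>
    <TenantId>yourtenant.onmicrosoft.com</TenantId>
    <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
  </BasePolicy>
  <RelyingParty>
    <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
    <!-- Token-shaping technical profile goes here -->
  </RelyingParty>
</TrustFrameworkPolicy>
```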
+
+The following diagram shows the relationship between the policy files and the relying party applications.
+
+![Diagram showing the trust framework policy inheritance model](media/custom-policy-overview/policies.png)
++
+## Guidance and best practices
+
+### Best practices
+
+Within an Azure AD B2C custom policy, you can integrate your own business logic to build the user experiences you require and extend functionality of the service. We have a set of best practices and recommendations to get started.
+
+- Create your logic within the **extension policy** or **relying party policy**. You can add new elements, which override the base policy by referencing the same ID (see the sketch after this list). This approach lets you scale out your project while making it easier to upgrade the base policy later if Microsoft releases new starter packs.
+- Within the **base policy**, we highly recommend that you avoid making any changes. When changes are necessary, add comments where they're made.
+- When you're overriding an element, such as technical profile metadata, avoid copying the entire technical profile from the base policy. Instead, copy only the required section of the element. See [Disable email verification](./disable-email-verification.md) for an example of how to make the change.
+- To reduce duplication of technical profiles, where core functionality is shared, use [technical profile inclusion](technicalprofiles.md#include-technical-profile).
+- Avoid writing to the Azure AD directory during sign-in, which may lead to throttling issues.
+- If your policy has external dependencies, such as REST APIs, make sure they're highly available.
+- For a better user experience, make sure your custom HTML templates are globally deployed using [online content delivery](../cdn/index.yml). Azure Content Delivery Network (CDN) lets you reduce load times, save bandwidth, and improve response speed.
+- If you want to make a change to a user journey, copy the entire user journey from the base policy to the extension policy. Provide a unique user journey ID to the user journey you've copied. Then in the [relying party policy](relyingparty.md), change the [default user journey](relyingparty.md#defaultuserjourney) element to point to the new user journey.
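As an example of overriding by ID without copying a whole profile, an extensions-file snippet in the spirit of the disable-email-verification sample (the metadata key follows that article; everything else is inherited from the base policy):

```xml
<ClaimsProvider>
  <DisplayName>Local Account</DisplayName>
  <TechnicalProfiles>
    <!-- Same ID as in the base policy; only the overridden metadata is repeated -->
    <TechnicalProfile Id="LocalAccountSignUpWithLogonEmail">
      <Metadata>
        <Item Key="EnforceEmailVerification">false</Item>
      </Metadata>
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```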
+
+## Troubleshooting
+
+When developing with Azure AD B2C policies, you may run into errors or exceptions while executing your user journey. These can be investigated using Application Insights.
+
+- Integrate Application Insights with Azure AD B2C to [diagnose exceptions](troubleshoot-with-application-insights.md).
+- The [Azure AD B2C extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c) can help you access and [visualize the logs](https://github.com/azure-ad-b2c/vscode-extension/blob/master/src/help/app-insights.md) based on a policy name and time.
+- The most common error in setting up custom policies is improperly formatted XML. Use [XML schema validation](troubleshoot-custom-policies.md) to identify errors before you upload your XML file.
+
+## Continuous integration
+
+By using a continuous integration and delivery (CI/CD) pipeline that you set up in Azure Pipelines, you can [include your Azure AD B2C custom policies in your software delivery](deploy-custom-policies-devops.md) and code control automation. As you deploy to different Azure AD B2C environments, for example dev, test, and production, we recommend that you remove manual processes and perform automated testing by using Azure Pipelines.
+
+## Prepare your environment
+
+To get started with an Azure AD B2C custom policy:
+
+1. [Create an Azure AD B2C tenant](tutorial-create-tenant.md)
+1. [Register a web application](tutorial-register-applications.md) using the Azure portal so you'll be able to test your policy.
+1. Add the necessary [policy keys](custom-policy-get-started.md#add-signing-and-encryption-keys) and [register the Identity Experience Framework applications](custom-policy-get-started.md#register-identity-experience-framework-applications).
+1. [Get the Azure AD B2C policy starter pack](custom-policy-get-started.md#get-the-starter-pack) and upload it to your tenant.
+1. After you upload the starter pack, [test your sign-up or sign-in policy](custom-policy-get-started.md#test-the-custom-policy).
+1. We recommend that you download and install [Visual Studio Code](https://code.visualstudio.com/) (VS Code). Visual Studio Code is a lightweight but powerful source code editor that runs on your desktop and is available for Windows, macOS, and Linux. With VS Code, you can quickly navigate through and edit your Azure AD B2C custom policy XML files by installing the [Azure AD B2C extension for VS Code](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c).
+
## Next steps
-> [!div class="nextstepaction"]
-> [Get started with custom policies](custom-policy-get-started.md)
+After you set up and test your Azure AD B2C policy, you can start customizing your policy. Go through the following articles to learn how to:
+
+- [Add claims and customize user input](./configure-user-input.md) using custom policies. Learn how to define a claim and add a claim to the user interface by customizing some of the starter pack technical profiles.
+- [Customize the user interface](customize-ui-with-html.md) of your application using a custom policy. Learn how to create your own HTML content, and customize the content definition.
+- [Localize the user interface](./language-customization.md) of your application using a custom policy. Learn how to set up the list of supported languages, and provide language-specific labels, by adding the localized resources element.
+- During your policy development and testing, you can [disable email verification](./disable-email-verification.md). Learn how to overwrite a technical profile metadata.
+- [Set up sign-in with a Google account](./identity-provider-google.md) using custom policies. Learn how to create a new claims provider with OAuth2 technical profile. Then customize the user journey to include the Google sign-in option.
+- To diagnose problems with your custom policies, you can [Collect Azure Active Directory B2C logs with Application Insights](troubleshoot-with-application-insights.md). Learn how to add new technical profiles, and configure your relying party policy.
active-directory-b2c Custom Policy Trust Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-trust-frameworks.md
- Title: Azure AD B2C custom policy overview | Microsoft Docs
-description: A topic about Azure Active Directory B2C custom policies and the Identity Experience Framework.
- Previously updated : 12/14/2020
-# Azure AD B2C custom policy overview
-
-Custom policies are configuration files that define the behavior of your Azure Active Directory B2C (Azure AD B2C) tenant. While [user flows](user-flow-overview.md) are predefined in the Azure AD B2C portal for the most common identity tasks, custom policies can be fully edited by an identity developer to complete many different tasks.
-
-A custom policy is fully configurable and policy-driven. A custom policy orchestrates trust between entities in standard protocol formats such as OpenID Connect, OAuth, SAML, and a few non-standard ones, for example REST API-based system-to-system claims exchanges. The framework creates user-friendly, white-labeled experiences.
-
-A custom policy is represented as one or more XML-formatted files, which refer to each other in a hierarchical chain. The XML elements define the building blocks, the interaction with the user, and other parties, and the business logic.
-
-## Custom policy starter pack
-
-Azure AD B2C custom policy [starter pack](custom-policy-get-started.md#get-the-starter-pack) comes with several pre-built policies to get you going quickly. Each of these starter packs contains the smallest number of technical profiles and user journeys needed to achieve the scenarios described:
-- **LocalAccounts** - Enables the use of local accounts only.
-- **SocialAccounts** - Enables the use of social (or federated) accounts only.
-- **SocialAndLocalAccounts** - Enables the use of both local and social accounts. Most of our samples refer to this policy.
-- **SocialAndLocalAccountsWithMFA** - Enables social, local, and multi-factor authentication options.
-## Understanding the basics
-
-### Claims
-
-A claim provides temporary storage of data during an Azure AD B2C policy execution. It can store information about the user, such as first name, last name, or any other claim obtained from the user or other systems (claims exchanges). The [claims schema](claimsschema.md) is the place where you declare your claims.
-
-When the policy runs, Azure AD B2C sends and receives claims to and from internal and external parties and then sends a subset of these claims to your relying party application as part of the token. Claims are used in these ways:
-- A claim is saved, read, or updated against the directory user object.
-- A claim is received from an external identity provider.
-- Claims are sent or received using a custom REST API service.
-- Data is collected as claims from the user during the sign-up or edit profile flows.
-### Manipulating your claims
-
-The [claims transformations](claimstransformations.md) are predefined functions that can be used to convert a given claim into another one, evaluate a claim, or set a claim value. For example adding an item to a string collection, changing the case of a string, or evaluate a date and time claim. A claims transformation specifies a transform method.
-
-### Customize and localize your UI
-
-When you'd like to collect information from your users by presenting a page in their web browser, use the [self-asserted technical profile](self-asserted-technical-profile.md). You can edit your self-asserted technical profile to [add claims and customize user input](./configure-user-input.md).
-
-To [customize the user interface](customize-ui-with-html.md) for your self-asserted technical profile, you specify a URL in the [content definition](contentdefinitions.md) element with customized HTML content. In the self-asserted technical profile, you point to this content definition ID.
-
-To customize language-specific strings, use the [localization](localization.md) element. A content definition may contain a [localization](localization.md) reference that specifies a list of localized resources to load. Azure AD B2C merges user interface elements with the HTML content that's loaded from your URL and then displays the page to the user.
-
-## Relying party policy overview
-
-A relying party application, which in the SAML protocol is known as a service provider, calls the [relying party policy](relyingparty.md) to execute a specific user journey. The relying party policy specifies the user journey to be executed, and list of claims that the token includes.
-
-![Diagram showing the policy execution flow](./media/custom-policy-trust-frameworks/custom-policy-execution.png)
-
-All relying party applications that use the same policy will receive the same token claims, and the user goes through the same user journey.
-
-### User journeys
-
-[User journeys](userjourneys.md) allows you to define the business logic with path through which user will follow to gain access to your application. The user is taken through the user journey to retrieve the claims that are to be presented to your application. A user journey is built from a sequence of [orchestration steps](userjourneys.md#orchestrationsteps). A user must reach the last step to acquire a token.
-
-The following instructions describe how you can add orchestration steps to the [social and local account starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/SocialAndLocalAccounts) policy. Here's an example of a REST API call that has been added.
-
-![customized user journey](media/custom-policy-trust-frameworks/user-journey-flow.png)
--
-### Orchestration steps
-
-The orchestration step references to a method that implements its intended purpose or functionality. This method is called a [technical profile](technicalprofiles.md). When your user journey needs branching to better represent the business logic, the orchestration step references to [sub journey](subjourneys.md). A sub journey contains its own set of orchestration steps.
-
-A user must reach the last orchestration step in the user journey to acquire a token. But users may not need to travel through all of the orchestration steps. Orchestration steps can be conditionally executed based on [preconditions](userjourneys.md#preconditions) defined in the orchestration step.
-
-After an orchestration step completes, Azure AD B2C stores the outputted claims in the **claims bag**. The claims in the claims bag can be utilized by any further orchestration steps in the user journey.
-
-The following diagram shows how the user journey's orchestration steps can access the claims bag.
-
-![Azure AD B2C user journey](media/custom-policy-trust-frameworks/user-journey-diagram.png)
-
-### Technical profile
-
-A technical profile provides an interface to communicate with different types of parties. A user journey combines calling technical profiles via orchestration steps to define your business logic.
-
-All types of technical profiles share the same concept. You send input claims, run claims transformation, and communicate with the configured party. After the process is completed, the technical profile returns the output claims to claims bag. For more information, see [technical profiles overview](technicalprofiles.md).
-
-### Validation technical profile
-
-When a user interacts with the user interface, you may want to validate the data that is collected. To interact with the user, a [self-asserted technical profile](self-asserted-technical-profile.md) must be used.
-
-To validate the user input, a [validation technical profile](validation-technical-profile.md) is called from the self-asserted technical profile. A validation technical profile is a method to call any non-interactive technical profile. In this case, the technical profile can return output claims, or an error message. The error message is rendered to the user on screen, allowing the user to retry.
-
-The following diagram illustrates how Azure AD B2C uses a validation technical profile to validate the user credentials.
-
-![Validation technical profile diagram](media/custom-policy-trust-frameworks/validation-technical-profile.png)
-
-## Inheritance model
-
-Each starter pack includes the following files:
-- A **Base** file that contains most of the definitions. To help with troubleshooting and long-term maintenance of your policies, try to minimize the number of changes you make to this file.
-- An **Extensions** file that holds the unique configuration changes for your tenant. This policy file is derived from the Base file. Use this file to add new functionality or override existing functionality. For example, use this file to federate with new identity providers.
-- A **Relying Party (RP)** file that is the single task-focused file that is invoked directly by the relying party application, such as your web, mobile, or desktop applications. Each unique task, such as sign-up, sign-in, password reset, or profile edit, requires its own relying party policy file. This policy file is derived from the extensions file.
-The inheritance model is as follows:
-- The child policy at any level can inherit from the parent policy and extend it by adding new elements.
-- For more complex scenarios, you can add more inheritance levels (up to 10 in total).
-- You can add more relying party policies. For example, delete my account, change a phone number, SAML relying party policy and more.
-The following diagram shows the relationship between the policy files and the relying party applications.
-
-![Diagram showing the trust framework policy inheritance model](media/custom-policy-trust-frameworks/policies.png)
--
-## Guidance and best practices
-
-### Best practices
-
-Within an Azure AD B2C custom policy, you can integrate your own business logic to build the user experiences you require and extend functionality of the service. We have a set of best practices and recommendations to get started.
-- Create your logic within the **extension policy**, or **relying party policy**. You can add new elements, which will override the base policy by referencing the same ID. This will allow you to scale out your project while making it easier to upgrade base policy later on if Microsoft releases new starter packs.
-- Within the **base policy**, we highly recommend avoiding making any changes. When necessary, make comments where the changes are made.
-- When you're overriding an element, such as technical profile metadata, avoid copying the entire technical profile from the base policy. Instead, copy only the required section of the element. See [Disable email verification](./disable-email-verification.md) for an example of how to make the change.
-- To reduce duplication of technical profiles, where core functionality is shared, use [technical profile inclusion](technicalprofiles.md#include-technical-profile).
-- Avoid writing to the Azure AD directory during sign-in, which may lead to throttling issues.
-- If your policy has external dependencies, such as REST APIs, makes sure they're highly available.
-- For a better user experience, make sure your custom HTML templates, are globally deployed using [online content delivery](../cdn/index.yml). Azure Content Delivery Network (CDN) lets you reduce load times, save bandwidth, and improve response speed.
-- If you want to make a change to user journey, copy the entire user journey from the base policy to the extension policy. Provide a unique user journey ID to the user journey you've copied. Then in the [relying party policy](relyingparty.md), change the [default user journey](relyingparty.md#defaultuserjourney) element to point to the new user journey.
-## Troubleshooting
-
-When developing with Azure AD B2C policies, you may run into errors or exceptions while executing your user journey. The can be investigated using Application Insights.
-- Integrate Application Insights with Azure AD B2C to [diagnose exceptions](troubleshoot-with-application-insights.md).
-- The [Azure AD B2C extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c) can help you access and [visualize the logs](https://github.com/azure-ad-b2c/vscode-extension/blob/master/src/help/app-insights.md) based on a policy name and time.
-- The most common error in setting up custom policies is improperly formatted XML. Use [XML schema validation](troubleshoot-custom-policies.md) to identify errors before you upload your XML file.
-## Continuous integration
-
-By using a continuous integration and delivery (CI/CD) pipeline that you set up in Azure Pipelines, you can [include your Azure AD B2C custom policies in your software delivery](deploy-custom-policies-devops.md) and code control automation. As you deploy to different Azure AD B2C environments, for example dev, test, and production, we recommend that you remove manual processes and perform automated testing by using Azure Pipelines.
-
-## Prepare your environment
-
-You get started with Azure AD B2C custom policy:
-
-1. [Create an Azure AD B2C tenant](tutorial-create-tenant.md)
-1. [Register a web application](tutorial-register-applications.md) using the Azure portal so you'll be able to test your policy.
-1. Add the necessary [policy keys](custom-policy-get-started.md#add-signing-and-encryption-keys) and [register the Identity Experience Framework applications](custom-policy-get-started.md#register-identity-experience-framework-applications).
-1. [Get the Azure AD B2C policy starter pack](custom-policy-get-started.md#get-the-starter-pack) and upload to your tenant.
-1. After you upload the starter pack, [test your sign-up or sign-in policy](custom-policy-get-started.md#test-the-custom-policy).
-1. We recommend you to download and install [Visual Studio Code](https://code.visualstudio.com/) (VS Code). Visual Studio Code is a lightweight but powerful source code editor, which runs on your desktop and is available for Windows, macOS, and Linux. With VS Code, you can quickly navigate through and edit your Azure AD B2C custom policy XML files by installing the [Azure AD B2C extension for VS Code](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c)
-
-## Next steps
-
-After you set up and test your Azure AD B2C policy, you can start customizing your policy. Go through the following articles to learn how to:
-- [Add claims and customize user input](./configure-user-input.md) using custom policies. Learn how to define a claim and add a claim to the user interface by customizing some of the starter pack technical profiles.
-- [Customize the user interface](customize-ui-with-html.md) of your application using a custom policy. Learn how to create your own HTML content, and customize the content definition.
-- [Localize the user interface](./language-customization.md) of your application using a custom policy. Learn how to set up the list of supported languages, and provide language-specific labels, by adding the localized resources element.
-- During your policy development and testing, you can [disable email verification](./disable-email-verification.md). Learn how to overwrite a technical profile metadata.
-- [Set up sign-in with a Google account](./identity-provider-google.md) using custom policies. Learn how to create a new claims provider with OAuth2 technical profile. Then customize the user journey to include the Google sign-in option.
-- To diagnose problems with your custom policies you can [Collect Azure Active Directory B2C logs with Application Insights](troubleshoot-with-application-insights.md). Learn how to add new technical profiles, and configure your relying party policy.
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/force-password-reset.md
Previously updated : 03/03/2021
Last updated : 04/08/2021
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-As an administrator, you can [reset a user's password](manage-users-portal.md#reset-a-users-password) if the user forgets their password. Or you would like to force them to reset the password. In this article, you'll learn how to force a password reset in these scenarios.
+> [!IMPORTANT]
+> Force password reset is a public preview feature of Azure AD B2C. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Overview
+As an administrator, you can [reset a user's password](manage-users-portal.md#reset-a-users-password) if the user forgets their password, or you might want to force them to reset the password. In this article, you'll learn how to force a password reset in these scenarios.
-When an administrator resets a user's password via the Azure portal, the value of the [forceChangePasswordNextSignIn](user-profile-attributes.md#password-profile-property) attribute is set to `true`.
-
-The [sign-in and sign-up journey](add-sign-up-and-sign-in-policy.md) checks the value of this attribute. After the user completes the sign-in, if the attribute is set to `true`, the user must reset their password. Then the value of the attribute is set to back `false`.
+When an administrator resets a user's password via the Azure portal, the value of the [forceChangePasswordNextSignIn](user-profile-attributes.md#password-profile-property) attribute is set to `true`. The [sign-in and sign-up journey](add-sign-up-and-sign-in-policy.md) checks the value of this attribute. After the user completes the sign-in, if the attribute is set to `true`, the user must reset their password. Then the value of the attribute is set back to `false`.
![Force password reset flow](./media/force-password-reset/force-password-reset-flow.png)

The password reset flow is applicable to local accounts in Azure AD B2C that use an [email address](identity-provider-local.md#email-sign-in) or [username](identity-provider-local.md#username-sign-in) with a password for sign-in.

### Force a password reset after 90 days

As an administrator, you can set a user's password expiration to 90 days, using [MS Graph](microsoft-graph-operations.md). After 90 days, the value of the [forceChangePasswordNextSignIn](user-profile-attributes.md#password-profile-property) attribute is automatically set to `true`. For more information on how to set a user's password expiration policy, see [Password policy attribute](user-profile-attributes.md#password-policy-attribute).
Once a password expiration policy has been set, you must also configure force pa
## Configure your policy

To enable the **Forced password reset** setting in a sign-up or sign-in user flow:

1. Sign in to the [Azure portal](https://portal.azure.com).
To enable the **Forced password reset** setting in a sign-up or sign-in user flo
::: zone pivot="b2c-custom-policy"
-1. Get the example of a force password reset on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies/force-password-reset).
-1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`.
-1. Upload the policy files in the following order: the extension policy `TrustFrameworkExtensionsCustomForcePasswordReset.xml`, then the relying party policy `SignUpOrSigninCustomForcePasswordReset.xml`.
-
-### Test the policy
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as a user administrator or a password administrator. For more information about the available roles, see [Assigning administrator roles in Azure Active Directory](../active-directory/roles/permissions-reference.md#all-roles).
-1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
-1. In the Azure portal, search for and select **Azure AD B2C**.
-1. Select **Users**. Search for and select the user you'll use to test the password reset, and then select **Reset Password**.
-1. In the Azure portal, search for and select **Azure AD B2C**.
-1. Under **Policies**, select **Identity Experience Framework**.
-1. Select the `B2C_1A_signup_signin_Custom_ForcePasswordReset` policy to open it.
-1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
-1. Select the **Run now** button.
-1. Sign in with the user account for which you reset the password.
-1. You now must change the password for the user. Change the password and select **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you.
+This feature is currently only available for User Flows. For setup steps, choose **User Flow** above.
::: zone-end
active-directory-b2c Troubleshoot Custom Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/troubleshoot-custom-policies.md
Previously updated : 04/06/2021
Last updated : 04/08/2021
You can include the correlation ID in your Azure AD B2C tokens. To include the c
To diagnose problems with your custom policies, use [Application Insights](troubleshoot-with-application-insights.md). Application Insights traces the activity of your custom policy user journey. It provides a way to diagnose exceptions and observe the exchange of claims between Azure AD B2C and the various claims providers that are defined by technical profiles, such as identity providers, API-based services, the Azure AD B2C user directory, and other services.
-We recommend installing the [Azure AD B2C extension](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c) for [VS Code](https://code.visualstudio.com/). With the Azure AD B2C extension, the logs are organized for you by policy name, correlation ID (Application Insights presents the first digit of the correlation ID), and the log timestamp. This feature helps you find the relevant log based on the local timestamp and see the user journey as executed by Azure AD B2C.
+We recommend installing the [Azure AD B2C extension](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c) for [VS Code](https://code.visualstudio.com/). With the Azure AD B2C extension, the logs are organized for you by policy name, correlation ID (Application Insights presents the first digit of the correlation ID), and the log timestamp. This feature helps you find the relevant log based on the local timestamp and see the user journey as executed by Azure AD B2C.
> [!NOTE]
-> The community has developed the Visual Studio Code extension for Azure AD B2C to help identity developers. The extension is not supported by Microsoft and is made available strictly as-is.
+> - There is a short delay, typically less than five minutes, before you can see new logs in Application Insights.
+> - The community has developed the Visual Studio Code extension for Azure AD B2C to help identity developers. The extension is not supported by Microsoft and is made available strictly as-is.
A single sign-in flow can issue more than one Azure Application Insights trace. In the following screenshot, the *B2C_1A_signup_signin* policy has three logs. Each log represents part of the sign-in flow.
+The following screenshot shows the Azure AD B2C extension for VS Code with Azure Application Insights trace explorer.
+ ![Screenshot of Azure AD B2C extension for VS Code with Azure Application Insights trace.](./media/troubleshoot-custom-policies/vscode-extension-application-insights-trace.png)
+### Filter the trace log
+
+With the focus on the Azure AD B2C trace explorer, start typing the first digit of the correlation ID, or a time you want to find. You'll see a filter box in the top-right of the Azure AD B2C trace explorer showing what you've typed so far, and matching trace logs will be highlighted.
+
+![Screenshot of Azure AD B2C extension Azure AD B2C trace explorer filter highlighting.](./media/troubleshoot-custom-policies/vscode-extension-application-insights-highlight.png)
+
+Hovering over the filter box and selecting **Enable Filter on Type** will show only matching trace logs. Use the **'X' Clear button** to clear the filter.
+
+![Screenshot of Azure AD B2C extension Azure AD B2C trace explorer filter.](./media/troubleshoot-custom-policies/vscode-extension-application-insights-filter.png)
+ ### Application Insights trace log details
-When you select an Azure Application Insights trace, the extension opens the **Application Insights details** page with the following information:
+When you select an Azure Application Insights trace, the extension opens the **Application Insights details** window with the following information:
- **Application Insights** - Generic information about the trace log, including the policy name, correlation ID, Azure Application Insights trace ID, and trace timestamp.
- **Technical profiles** - List of technical profiles that appear in the trace log.
When you select an Azure Application Insights trace, the extension opens the **A
- **Exceptions** - List of exceptions or fatal errors that appear in the trace log.
- **Application Insights JSON** - The raw data that returns from Application Insights.
+The following screenshot shows an example of the Application Insights trace log details window.
+
+![Screenshot of Azure AD B2C extension Azure AD B2C trace report.](./media/troubleshoot-custom-policies/vscode-extension-application-insights-report.png)
## Troubleshoot JWT tokens

For JWT token validation and debugging purposes, you can decode JWTs using a site like [https://jwt.ms](https://jwt.ms). Create a test application that can redirect to `https://jwt.ms` for token inspection. If you haven't already done so, [register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant).
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tutorial-create-user-flows.md
Title: Tutorial - Create user flows - Azure Active Directory B2C
-description: Follow this tutorial to learn how to create user flows in the Azure portal to enable sign up, sign in, and user profile editing for your applications in Azure Active Directory B2C.
+ Title: Tutorial - Create user flows and custom policies - Azure Active Directory B2C
+description: Follow this tutorial to learn how to create user flows and custom policies in the Azure portal to enable sign up, sign in, and user profile editing for your applications in Azure Active Directory B2C.
Previously updated : 03/22/2021
Last updated : 04/08/2021
+zone_pivot_groups: b2c-policy-type
# Tutorial: Create user flows in Azure Active Directory B2C
-In your applications you may have [user flows](user-flow-overview.md) that enable users to sign up, sign in, or manage their profile. You can create multiple user flows of different types in your Azure Active Directory B2C (Azure AD B2C) tenant and use them in your applications as needed. User flows can be reused across applications.
-In this article, you learn how to:
+In your applications you may have user flows that enable users to sign up, sign in, or manage their profile. You can create multiple user flows of different types in your Azure Active Directory B2C (Azure AD B2C) tenant and use them in your applications as needed. User flows can be reused across applications.
+
+A user flow lets you determine how users interact with your application when they do things like sign in, sign up, edit a profile, or reset a password. In this article, you learn how to:
+
+[Custom policies](custom-policy-overview.md) are configuration files that define the behavior of your Azure Active Directory B2C (Azure AD B2C) tenant. In this article, you learn how to:
> [!div class="checklist"]
> * Create a sign-up and sign-in user flow
> * Enable self-service password reset
> * Create a profile editing user flow
-This tutorial shows you how to create some recommended user flows by using the Azure portal. If you're looking for information about how to set up a resource owner password credentials (ROPC) flow in your application, see [Configure the resource owner password credentials flow in Azure AD B2C](add-ropc-policy.md).
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
> [!IMPORTANT]
> We've changed the way we reference user flow versions. Previously, we offered V1 (production-ready) versions, and V1.1 and V2 (preview) versions. Now, we've consolidated user flows into **Recommended** (next-generation preview) and **Standard** (generally available) versions. All V1.1 and V2 legacy preview user flows are on a path to deprecation by **August 1, 2021**. For details, see [User flow versions in Azure AD B2C](user-flow-versions.md).

## Prerequisites
-[Register your applications](tutorial-register-applications.md) that are part of the user flows you want to create.
+- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.
+- [Register your application](tutorial-register-applications.md) in the tenant that you created so that it can communicate with Azure AD B2C.
+
+- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.
+- [Register your application](tutorial-register-applications.md) in the tenant that you created so that it can communicate with Azure AD B2C.
+- Complete the steps in [Set up sign-up and sign-in with a Facebook account](identity-provider-facebook.md) to configure a Facebook application. Although a Facebook application is not required for using custom policies, it's used in this walkthrough to demonstrate enabling social login in a custom policy.
## Create a sign-up and sign-in user flow

The sign-up and sign-in user flow handles both sign-up and sign-in experiences with a single configuration. Users of your application are led down the right path depending on the context.
If you want to enable users to edit their profile in your application, you use a
1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
1. Click **Run user flow**, and then sign in with the account that you previously created.
1. You now have the opportunity to change the display name and job title for the user. Click **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you.
+> [!TIP]
+> This article explains how to set up your tenant manually. You can automate the entire process from this article. Automating will deploy the Azure AD B2C [SocialAndLocalAccountsWithMFA starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack), which will provide Sign Up and Sign In, Password Reset and Profile Edit journeys. To automate the walkthrough below, visit the [IEF Setup App](https://aka.ms/iefsetup) and follow the instructions.
+
+## Add signing and encryption keys
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. On the overview page, under **Policies**, select **Identity Experience Framework**.
+
+### Create the signing key
+
+1. Select **Policy Keys** and then select **Add**.
+1. For **Options**, choose `Generate`.
+1. In **Name**, enter `TokenSigningKeyContainer`. The prefix `B2C_1A_` might be added automatically.
+1. For **Key type**, select **RSA**.
+1. For **Key usage**, select **Signature**.
+1. Select **Create**.
+
+### Create the encryption key
+
+1. Select **Policy Keys** and then select **Add**.
+1. For **Options**, choose `Generate`.
+1. In **Name**, enter `TokenEncryptionKeyContainer`. The prefix `B2C_1A_` might be added automatically.
+1. For **Key type**, select **RSA**.
+1. For **Key usage**, select **Encryption**.
+1. Select **Create**.
+
+### Create the Facebook key
+
+Add your Facebook application's [App Secret](identity-provider-facebook.md) as a policy key. You can use the App Secret of the application you created as part of this article's prerequisites.
+
+1. Select **Policy Keys** and then select **Add**.
+1. For **Options**, choose `Manual`.
+1. For **Name**, enter `FacebookSecret`. The prefix `B2C_1A_` might be added automatically.
+1. In **Secret**, enter your Facebook application's *App Secret* from developers.facebook.com. This value is the secret, not the application ID.
+1. For **Key usage**, select **Signature**.
+1. Select **Create**.
+
+## Register Identity Experience Framework applications
+
+Azure AD B2C requires you to register two applications that it uses to sign up and sign in users with local accounts: *IdentityExperienceFramework*, a web API, and *ProxyIdentityExperienceFramework*, a native app with delegated permission to the IdentityExperienceFramework app. Your users can sign up with an email address or username and a password to access your tenant-registered applications, which creates a "local account." Local accounts exist only in your Azure AD B2C tenant.
+
+You need to register these two applications in your Azure AD B2C tenant only once.
+
+### Register the IdentityExperienceFramework application
+
+To register an application in your Azure AD B2C tenant, you can use the **App registrations** experience.
+
+1. Select **App registrations**, and then select **New registration**.
+1. For **Name**, enter `IdentityExperienceFramework`.
+1. Under **Supported account types**, select **Accounts in this organizational directory only**.
+1. Under **Redirect URI**, select **Web**, and then enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com`, where `your-tenant-name` is your Azure AD B2C tenant domain name.
+1. Under **Permissions**, select the *Grant admin consent to openid and offline_access permissions* check box.
+1. Select **Register**.
+1. Record the **Application (client) ID** for use in a later step.
+
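+For example, if your tenant name were *contoso* (a placeholder), the redirect URI would be:
+
+```
+https://contoso.b2clogin.com/contoso.onmicrosoft.com
+```
+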
+Next, expose the API by adding a scope:
+
+1. In the left menu, under **Manage**, select **Expose an API**.
+1. Select **Add a scope**, then select **Save and continue** to accept the default application ID URI.
+1. Enter the following values to create a scope that allows custom policy execution in your Azure AD B2C tenant:
+ * **Scope name**: `user_impersonation`
+ * **Admin consent display name**: `Access IdentityExperienceFramework`
+ * **Admin consent description**: `Allow the application to access IdentityExperienceFramework on behalf of the signed-in user.`
+1. Select **Add scope**.
+
+* * *
+
+### Register the ProxyIdentityExperienceFramework application
+
+1. Select **App registrations**, and then select **New registration**.
+1. For **Name**, enter `ProxyIdentityExperienceFramework`.
+1. Under **Supported account types**, select **Accounts in this organizational directory only**.
+1. Under **Redirect URI**, use the drop-down to select **Public client/native (mobile & desktop)**.
+1. For **Redirect URI**, enter `myapp://auth`.
+1. Under **Permissions**, select the *Grant admin consent to openid and offline_access permissions* check box.
+1. Select **Register**.
+1. Record the **Application (client) ID** for use in a later step.
+
+Next, specify that the application should be treated as a public client:
+
+1. In the left menu, under **Manage**, select **Authentication**.
+1. Under **Advanced settings**, in the **Allow public client flows** section, set **Enable the following mobile and desktop flows** to **Yes**. Ensure that **"allowPublicClient": true** is set in the application manifest.
+1. Select **Save**.
+
+Now, grant permissions to the API scope you exposed earlier in the *IdentityExperienceFramework* registration:
+
+1. In the left menu, under **Manage**, select **API permissions**.
+1. Under **Configured permissions**, select **Add a permission**.
+1. Select the **My APIs** tab, then select the **IdentityExperienceFramework** application.
+1. Under **Permission**, select the **user_impersonation** scope that you defined earlier.
+1. Select **Add permissions**. As directed, wait a few minutes before proceeding to the next step.
+1. Select **Grant admin consent for (your tenant name)**.
+1. Select your currently signed-in administrator account, or sign in with an account in your Azure AD B2C tenant that's been assigned at least the *Cloud application administrator* role.
+1. Select **Accept**.
+1. Select **Refresh**, and then verify that "Granted for ..." appears under **Status** for the offline_access, openid, and user_impersonation scopes. It might take a few minutes for the permissions to propagate.
+
+* * *
+
+## Custom policy starter pack
+
+Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define technical profiles and user journeys. We provide starter packs with several pre-built policies to get you going quickly. Each of these starter packs contains the smallest number of technical profiles and user journeys needed to achieve the scenarios described:
+
+- **LocalAccounts** - Enables the use of local accounts only.
+- **SocialAccounts** - Enables the use of social (or federated) accounts only.
+- **SocialAndLocalAccounts** - Enables the use of both local and social accounts.
+- **SocialAndLocalAccountsWithMFA** - Enables social, local, and multi-factor authentication options.
+
+Each starter pack contains:
+
+- **Base file** - Few modifications are required to the base. Example: *TrustFrameworkBase.xml*
+- **Extension file** - This file is where most configuration changes are made. Example: *TrustFrameworkExtensions.xml*
+- **Relying party files** - Task-specific files called by your application. Examples: *SignUpOrSignin.xml*, *ProfileEdit.xml*, *PasswordReset.xml*
+
+In this article, you edit the XML custom policy files in the **SocialAndLocalAccounts** starter pack. If you need an XML editor, try [Visual Studio Code](https://code.visualstudio.com/download), a lightweight cross-platform editor.
+
+### Get the starter pack
+
+Get the custom policy starter packs from GitHub, then update the XML files in the SocialAndLocalAccounts starter pack with your Azure AD B2C tenant name.
+
+1. [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository:
+
+ ```console
+ git clone https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack
+ ```
+
+1. In all of the files in the **SocialAndLocalAccounts** directory, replace the string `yourtenant` with the name of your Azure AD B2C tenant.
+
+ For example, if the name of your B2C tenant is *contosotenant*, all instances of `yourtenant.onmicrosoft.com` become `contosotenant.onmicrosoft.com`.
+
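+If you prefer to make the replacement from the command line, the following is a minimal sketch, assuming a bash shell with `grep` and GNU `sed` available and the *contosotenant* example tenant:
+
+```console
+# Replace every occurrence of "yourtenant" in the SocialAndLocalAccounts policy files.
+grep -rl "yourtenant" SocialAndLocalAccounts/ | xargs sed -i 's/yourtenant/contosotenant/g'
+```
+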
+### Add application IDs to the custom policy
+
+Add the application IDs to the extensions file *TrustFrameworkExtensions.xml*.
+
+1. Open `SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`** and find the element `<TechnicalProfile Id="login-NonInteractive">`.
+1. Replace both instances of `IdentityExperienceFrameworkAppId` with the application ID of the IdentityExperienceFramework application that you created earlier.
+1. Replace both instances of `ProxyIdentityExperienceFrameworkAppId` with the application ID of the ProxyIdentityExperienceFramework application that you created earlier.
+1. Save the file.
+
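+For reference, the element you're updating looks similar to the following sketch. The shape follows the starter pack; your copy of the file may differ slightly:
+
+```xml
+<TechnicalProfile Id="login-NonInteractive">
+  <Metadata>
+    <!-- Replace with the ProxyIdentityExperienceFramework application ID. -->
+    <Item Key="client_id">ProxyIdentityExperienceFrameworkAppId</Item>
+    <!-- Replace with the IdentityExperienceFramework application ID. -->
+    <Item Key="IdTokenAudience">IdentityExperienceFrameworkAppId</Item>
+  </Metadata>
+  <InputClaims>
+    <InputClaim ClaimTypeReferenceId="client_id" DefaultValue="ProxyIdentityExperienceFrameworkAppId" />
+    <InputClaim ClaimTypeReferenceId="resource_id" PartnerClaimType="resource" DefaultValue="IdentityExperienceFrameworkAppId" />
+  </InputClaims>
+</TechnicalProfile>
+```
+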
+## Upload the policies
+
+1. Select the **Identity Experience Framework** menu item in your B2C tenant in the Azure portal.
+1. Select **Upload custom policy**.
+1. In this order, upload the policy files:
+ 1. *TrustFrameworkBase.xml*
+ 1. *TrustFrameworkExtensions.xml*
+ 1. *SignUpOrSignin.xml*
+ 1. *ProfileEdit.xml*
+ 1. *PasswordReset.xml*
+
+As you upload the files, Azure adds the prefix `B2C_1A_` to each.
+
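+For example, after upload the policies appear in the portal with names similar to these (each comes from the PolicyId in the corresponding file):
+
+```
+B2C_1A_TrustFrameworkBase
+B2C_1A_TrustFrameworkExtensions
+B2C_1A_signup_signin
+B2C_1A_ProfileEdit
+B2C_1A_PasswordReset
+```
+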
+> [!TIP]
+> If your XML editor supports validation, validate the files against the `TrustFrameworkPolicy_0.3.0.0.xsd` XML schema that is located in the root directory of the starter pack. XML schema validation identifies errors before uploading.
+
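+If you'd rather validate from a command line, one option is `xmllint` from libxml2. The following is a sketch, assuming the tool is installed and you run it from the starter pack root:
+
+```console
+# Validate one policy file against the starter pack schema; repeat for each file.
+xmllint --noout --schema TrustFrameworkPolicy_0.3.0.0.xsd SocialAndLocalAccounts/TrustFrameworkExtensions.xml
+```
+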
+## Test the custom policy
+
+1. Under **Custom policies**, select **B2C_1A_signup_signin**.
+1. For **Select application** on the overview page of the custom policy, select the web application named *webapp1* that you previously registered.
+1. Make sure that the **Reply URL** is `https://jwt.ms`.
+1. Select **Run now**.
+1. Sign up using an email address.
+1. Select **Run now** again.
+1. Sign in with the same account to confirm that you have the correct configuration.
+
+## Add Facebook as an identity provider
+
+As mentioned in [Prerequisites](#prerequisites), Facebook is *not* required for using custom policies, but is used here to demonstrate how you can enable federated social login in a custom policy.
+
+1. In the `SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`** file, replace the value of `client_id` with the Facebook application ID:
+
+ ```xml
+ <TechnicalProfile Id="Facebook-OAUTH">
+ <Metadata>
+ <!--Replace the value of client_id in this technical profile with the Facebook app ID-->
+ <Item Key="client_id">00000000000000</Item>
+ ```
+
+1. Upload the *TrustFrameworkExtensions.xml* file to your tenant.
+1. Under **Custom policies**, select **B2C_1A_signup_signin**.
+1. Select **Run now** and select Facebook to sign in with Facebook and test the custom policy.
## Next steps In this article, you learned how to:
active-directory-b2c User Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-flow-overview.md
Title: User flows in Azure Active Directory B2C | Microsoft Docs
+ Title: User flows and custom policies in Azure Active Directory B2C | Microsoft Docs
-description: Learn more about the extensible policy framework of Azure Active Directory B2C and how to create various user flows.
+description: Learn more about built-in user flows and the custom policy extensible policy framework of Azure Active Directory B2C.
Previously updated : 07/30/2020 Last updated : 04/08/2021
-# User flows in Azure Active Directory B2C
+# User flows and custom policies overview
-To help you set up the most common identity tasks for your applications, the Azure AD B2C portal includes predefined, configurable policies called **user flows**. A user flow lets you determine how users interact with your application when they do things like sign in, sign up, edit a profile, or reset a password. With user flows, you can control the following capabilities:
+In Azure AD B2C, you can define the business logic that users follow to gain access to your application. For example, you can determine the sequence of steps users follow when they sign in, sign up, edit a profile, or reset a password. After completing the sequence, the user acquires a token and gains access to your application.
-- Account types used for sign-in, such as social accounts like a Facebook or local accounts-- Attributes to be collected from the consumer, such as first name, postal code, and shoe size-- Azure AD Multi-Factor Authentication-- Customization of the user interface-- Information that the application receives as claims in a token
+In Azure AD B2C, there are two ways to provide identity user experiences:
-You can create many user flows of different types in your tenant and use them in your applications as needed. User flows can be reused across applications. This flexibility enables you to define and modify identity experiences with minimal or no changes to your code. Your application triggers a user flow by using a standard HTTP authentication request that includes a user flow parameter. A customized [token](tokens-overview.md) is received as a response.
+* **User flows** are predefined, built-in, configurable policies that we provide so you can create sign-up, sign-in, and policy editing experiences in minutes.
-The following examples show the "p" query string parameter that specifies the user flow to be used:
+* **Custom policies** enable you to create your own user journeys for complex identity experience scenarios.
-```
-https://contosob2c.b2clogin.com/contosob2c.onmicrosoft.com/oauth2/v2.0/authorize?
-client_id=2d4d11a2-f814-46a7-890a-274a72a7309e // Your registered Application ID
-&redirect_uri=https%3A%2F%2Flocalhost%3A44321%2F // Your registered Reply URL, url encoded
-&response_mode=form_post // 'query', 'form_post' or 'fragment'
-&response_type=id_token
-&scope=openid
-&nonce=dummy
-&state=12345 // Any value provided by your application
-&p=b2c_1_siup // Your sign-up user flow
-```
+The following screenshot shows the user flow settings UI compared to custom policy configuration files.
-```
-https://contosob2c.b2clogin.com/contosob2c.onmicrosoft.com/oauth2/v2.0/authorize?
-client_id=2d4d11a2-f814-46a7-890a-274a72a7309e // Your registered Application ID
-&redirect_uri=https%3A%2F%2Flocalhost%3A44321%2F // Your registered Reply URL, url encoded
-&response_mode=form_post // 'query', 'form_post' or 'fragment'
-&response_type=id_token
-&scope=openid
-&nonce=dummy
-&state=12345 // Any value provided by your application
-&p=b2c_1_siin // Your sign-in user flow
-```
+![Screenshot shows the user flow settings UI, versus custom policy configuration files.](media/user-flow-overview/user-flow-vs-custom-policy.png)
-## User flow versions
+This article gives a brief overview of user flows and custom policies, and helps you decide which method will work best for your business needs.
-Azure AD B2C includes several types of user flows:
+## User flows
-- **Sign up and sign in** - Handles both of the sign-up and sign-in experiences with a single configuration. Users are led down the right path depending on the context. Also included are separate **sign-up** or **sign-in** user flows. But we generally recommend the combined sign up and sign in user flow.-- **Profile editing** - Enables users to edit their profile information.-- **Password reset** - Enables you to configure whether and how users can reset their password.
+To set up the most common identity tasks, the Azure portal includes several predefined and configurable policies called *user flows*.
-Most user flow types have both a **Recommended** version and a **Standard** version. For details, see [user flow versions](user-flow-versions.md).
+You can configure user flow settings like these to control identity experience behaviors in your applications:
-> [!IMPORTANT]
-> If you've worked with user flows in Azure AD B2C before, you'll notice that we've changed the way we reference user flow versions. Previously, we offered V1 (production-ready) versions, and V1.1 and V2 (preview) versions. Now, we've consolidated user flows into two versions:
->
->- **Recommended** user flows are the new preview versions of user flows. They're thoroughly tested and combine all the features of the legacy **V2** and **V1.1** versions. Going forward, the new recommended user flows will be maintained and updated. Once you move to these new recommended user flows, you'll have access to new features as they're released.
->- **Standard** user flows, previously known as **V1**, are generally available, production-ready user flows. If your user flows are mission-critical and depend on highly stable versions, you can continue to use standard user flows, realizing that these versions won't be maintained and updated.
->
->All legacy preview user flows (V1.1 and V2) are on a path to deprecation by **August 1, 2021**. Wherever possible, we highly recommend that you [switch to the new **Recommended** user flows](user-flow-versions.md#how-to-switch-to-a-new-recommended-user-flow) as soon as possible so you can always take advantage of the latest features and updates.
+* Account types used for sign-in, such as social accounts like Facebook, or local accounts that use an email address and password for sign-in
+* Attributes to be collected from the consumer, such as first name, postal code, or country/region of residency
+* Azure AD Multi-Factor Authentication (MFA)
+* Customization of the user interface
+* Set of claims in a token that your application receives after the user completes the user flow
+* Session management
+* ...and more
-## Linking user flows
+Most of the common identity scenarios for apps can be defined and implemented effectively with user flows. We recommend that you use the built-in user flows, unless you have complex user journey scenarios that require the full flexibility of custom policies.
-A **sign-up or sign-in** user flow with local accounts includes a **Forgot password?** link on the first page of the experience. Clicking this link doesn't automatically trigger a password reset user flow.
+## Custom policies
-Instead, the error code `AADB2C90118` is returned to your application. Your application needs to handle this error code by running a specific user flow that resets the password. To see an example, take a look at a [simple ASP.NET sample](https://github.com/AzureADQuickStarts/B2C-WebApp-OpenIDConnect-DotNet-SUSI) that demonstrates the linking of user flows.
+Custom policies are configuration files that define the behavior of your Azure AD B2C tenant user experience. While user flows are predefined in the Azure AD B2C portal for the most common identity tasks, custom policies can be fully edited by an identity developer to complete many different tasks.
-## Email address storage
+A custom policy is fully configurable and policy-driven. It orchestrates trust between entities using standard protocols such as OpenID Connect, OAuth, and SAML, as well as a few non-standard ones, such as REST API-based system-to-system claims exchanges. The framework creates user-friendly, white-labeled experiences.
-An email address can be required as part of a user flow. If the user authenticates with a social identity provider, the email address is stored in the **otherMails** property. If a local account is based on a user name, then the email address is stored in a strong authentication detail property. If a local account is based on an email address, then the email address is stored in the **signInNames** property.
+The custom policy gives you the ability to construct user journeys with any combination of steps. For example:
-The email address isn't guaranteed to be verified in any of these cases. A tenant administrator can disable email verification in the basic policies for local accounts. Even if email address verification is enabled, addresses aren't verified if they come from a social identity provider and they haven't been changed.
+* Federate with other identity providers
+* First- and third-party multi-factor authentication (MFA) challenges
+* Collect any user input
+* Integrate with external systems using REST API communication
+
+Each user journey is defined by a policy. You can build as many or as few policies as you need to enable the best user experience for your organization.
+
+![Diagram showing an example of a complex user journey enabled by IEF](media/user-flow-overview/custom-policy-diagram.png)
+
+A custom policy is defined by several XML files that refer to each other in a hierarchical chain. The XML elements define the claims schema, claims transformations, content definitions, claims providers, technical profiles, user journey orchestration steps, and other aspects of the identity experience.
+
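+To illustrate the chain, here is a heavily trimmed sketch of how an extensions file declares the policy it inherits from. Element and attribute names follow the starter pack convention; real files carry more attributes and content:
+
+```xml
+<TrustFrameworkPolicy xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
+  TenantId="yourtenant.onmicrosoft.com"
+  PolicyId="B2C_1A_TrustFrameworkExtensions"
+  PublicPolicyUri="http://yourtenant.onmicrosoft.com/B2C_1A_TrustFrameworkExtensions">
+  <!-- Every file except the base inherits from another policy in the chain. -->
+  <BasePolicy>
+    <TenantId>yourtenant.onmicrosoft.com</TenantId>
+    <PolicyId>B2C_1A_TrustFrameworkBase</PolicyId>
+  </BasePolicy>
+  <!-- BuildingBlocks, ClaimsProviders, and UserJourneys are defined here. -->
+</TrustFrameworkPolicy>
+```
+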
+The powerful flexibility of custom policies is most appropriate when you need to build complex identity scenarios. Developers configuring custom policies must define the trusted relationships in careful detail to include metadata endpoints and exact claims exchange definitions, and must configure secrets, keys, and certificates as needed by each identity provider.
+
+Learn more about custom policies in [Custom policies in Azure Active Directory B2C](custom-policy-overview.md).
+
+## Comparing user flows and custom policies
+
+The following table gives a detailed comparison of the scenarios you can implement with Azure AD B2C user flows and custom policies.
+
+| Context | User flows | Custom policies |
+|-|-|--|
+| Target users | All application developers with or without identity expertise. | Identity pros, systems integrators, consultants, and in-house identity teams. They are comfortable with OpenID Connect flows and understand identity providers and claims-based authentication. |
+| Configuration method | Azure portal with a user-friendly user-interface (UI). | Directly editing XML files and then uploading to the Azure portal. |
+| UI customization | [Full UI customization](customize-ui-with-html.md) including HTML, CSS, and [JavaScript](javascript-and-page-layout.md).<br><br>[Multilanguage support](language-customization.md) with custom strings. | Same |
+| Attribute customization | Standard and custom attributes. | Same |
+| Token and session management | [Customize tokens](configure-tokens.md) and [sessions behavior](session-behavior.md). | Same |
+| Identity Providers | [Predefined local](identity-provider-local.md) or [social provider](add-identity-provider.md), such as federation with Azure Active Directory tenants. | Standards-based OIDC, OAUTH, and SAML. Authentication is also possible by using integration with REST APIs. |
+| Identity Tasks | [Sign-up or sign-in](add-sign-up-and-sign-in-policy.md) with local or many social accounts.<br><br>[Self-service password reset](add-password-reset-policy.md).<br><br>[Profile edit](add-profile-editing-policy.md).<br><br>Multi-Factor Authentication.<br><br>Access token flows. | Complete the same tasks as user flows using custom identity providers or use custom scopes.<br><br>Provision a user account in another system at the time of registration.<br><br>Send a welcome email using your own email service provider.<br><br>Use a user store outside Azure AD B2C.<br><br>Validate user provided information with a trusted system by using an API. |
+
+## Application integration
+
+You can create many user flows or custom policies of different types in your tenant and use them in your applications as needed. Both user flows and custom policies can be reused across applications. This flexibility enables you to define and modify identity experiences with minimal or no changes to your code.
+
+When a user wants to sign in to your application, the application initiates an authorization request to a user flow- or custom policy-provided endpoint. The user flow or custom policy defines and controls the user's experience. When they complete a user flow, Azure AD B2C generates a token, then redirects the user back to your application.
+
+![Mobile app with arrows showing flow between Azure AD B2C sign-in page](media/user-flow-overview/app-integration.png)
+
+Multiple applications can use the same user flow or custom policy. A single application can use multiple user flows or custom policies.
+
+For example, to sign in to an application, the application uses the *sign up or sign in* user flow. After the user has signed in, they may want to edit their profile. To edit the profile, the application initiates another authorization request, this time using the *profile edit* user flow.
+
+Your application triggers a user flow by using a standard HTTP authentication request that includes the user flow or custom policy name. A customized [token](tokens-overview.md) is received as a response.
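+For example, an authorization request that names the policy might look like the following sketch (the tenant name, client ID, and policy name are placeholders):
+
+```
+https://contosob2c.b2clogin.com/contosob2c.onmicrosoft.com/oauth2/v2.0/authorize?
+client_id=00000000-0000-0000-0000-000000000000
+&redirect_uri=https%3A%2F%2Fjwt.ms
+&response_type=id_token
+&scope=openid
+&nonce=dummy
+&p=b2c_1_susi
+```
+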
-Only the **otherMails** and **signInNames** properties are exposed through the Microsoft Graph API. The email address in the strong authentication detail property is not available.
## Next steps
-To create the recommended user flows, follow the instructions in [Tutorial: Create a user flow](tutorial-create-user-flows.md).
+- To create the recommended user flows, follow the instructions in [Tutorial: Create a user flow](tutorial-create-user-flows.md).
+- Learn about the [user flow versions in Azure AD B2C](user-flow-versions.md).
+- Learn more about [Azure AD B2C custom policies](custom-policy-overview.md).
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C](identity-provider-twitter.md) - [Set up sign-up and sign-in with a WeChat account using Azure Active Directory B2C](identity-provider-wechat.md) - [Set up sign-up and sign-in with a Weibo account using Azure Active Directory B2C](identity-provider-weibo.md)-- [Azure AD B2C custom policy overview](custom-policy-trust-frameworks.md)
+- [Azure AD B2C custom policy overview](custom-policy-overview.md)
## December 2020
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| KONA I | [https://konai.com/business/security/fido](https://konai.com/business/security/fido) | | Excelsecu | [https://www.excelsecu.com/productdetail/esecufido2secu.html](https://www.excelsecu.com/productdetail/esecufido2secu.html) | | Token2 Switzerland | [https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key](https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key) |
+| Go-Trust ID | [https://www.gotrustid.com/](https://www.gotrustid.com/) |
+| Kensington | [https://www.kensington.com/solutions/product-category/why-biometrics/](https://www.kensington.com/solutions/product-category/why-biometrics/) |
> [!NOTE] > If you purchase and plan to use NFC-based security keys, you need a supported NFC reader for the security key. The NFC reader isn't an Azure requirement or limitation. Check with the vendor for your NFC-based security key for a list of supported NFC readers.
active-directory Concept Sspr Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-sspr-howitworks.md
To get started with SSPR, complete the following tutorial:
## Require users to register when they sign in
-You can enable the option to require a user to complete the SSPR registration if they sign in to any applications using Azure AD. This workflow includes the following applications:
+You can enable the option to require a user to complete the SSPR registration if they use modern authentication or a web browser to sign in to any applications using Azure AD. This workflow includes the following applications:
* Microsoft 365 * Azure portal
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/what-is-cloud-sync.md
The following table provides a comparison between Azure AD Connect and Azure AD
| Support for federation |●|●| | Seamless Single Sign-on|● |●| | Supports installation on a Domain Controller |● |● |
-| Support for Windows Server 2012 and Windows Server 2012 R2 |● |● |
+| Support for Windows Server 2016|● |● |
| Filter on Domains/OUs/groups |● |● | | Filter on objects' attribute values |● | | | Allow minimal set of attributes to be synchronized (MinSync) |● |● |
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/access-tokens.md
Previously updated : 2/18/2021 Last updated : 04/02/2021
Microsoft identities can authenticate in different ways, which may be relevant t
| `wiaormfa`| The user used Windows or an MFA credential to authenticate. | | `none` | No authentication was done. |
+## Access token lifetime
+
+The default lifetime of an access token varies, depending on the client application requesting the token. For example, continuous access evaluation (CAE) capable clients that negotiate CAE-aware sessions will see a long-lived token lifetime (up to 28 hours). When the access token expires, the client must use the refresh token to (usually silently) acquire a new refresh token and access token.
+
+You can adjust the lifetime of an access token to control how often the client application expires the application session, and how often it requires the user to re-authenticate (either silently or interactively). For more information, read [Configurable token lifetimes](active-directory-configurable-token-lifetimes.md).
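+As a sketch of what such an adjustment looks like, a token lifetime policy definition is a small JSON document similar to the following (the property names follow the configurable token lifetimes feature; see the linked article for how to create and assign the policy):
+
+```json
+{
+  "TokenLifetimePolicy": {
+    "Version": 1,
+    "AccessTokenLifetime": "02:00:00"
+  }
+}
+```
+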
+ ## Validating tokens Not all apps should validate tokens. Only in specific scenarios should apps validate a token:
active-directory Application Consent Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/application-consent-experience.md
Previously updated : 03/27/2019 Last updated : 04/06/2021 -+ # Understanding Azure AD application consent experiences
Learn more about the Azure Active Directory (Azure AD) application consent user
Consent is the process of a user granting authorization to an application to access protected resources on their behalf. An admin or user can be asked for consent to allow access to their organization/individual data.
-The actual user experience of granting consent will differ depending on policies set on the user's tenant, the user's scope of authority (or role), and the type of [permissions](../azuread-dev/v1-permissions-consent.md) being requested by the client application. This means that application developers and tenant admins have some control over the consent experience. Admins have the flexibility of setting and disabling policies on a tenant or app to control the consent experience in their tenant. Application developers can dictate what types of permissions are being requested and if they want to guide users through the user consent flow or the admin consent flow.
+The actual user experience of granting consent will differ depending on policies set on the user's tenant, the user's scope of authority (or role), and the type of [permissions](v2-permissions-and-consent.md) being requested by the client application. This means that application developers and tenant admins have some control over the consent experience. Admins have the flexibility of setting and disabling policies on a tenant or app to control the consent experience in their tenant. Application developers can dictate what types of permissions are being requested and if they want to guide users through the user consent flow or the admin consent flow.
- **User consent flow** is when an application developer directs users to the authorization endpoint with the intent to record consent for only the current user. - **Admin consent flow** is when an application developer directs users to the admin consent endpoint with the intent to record consent for the entire tenant. To ensure the admin consent flow works properly, application developers must list all permissions in the `RequiredResourceAccess` property in the application manifest. For more info, see [Application manifest](./reference-app-manifest.md).
The consent prompt is designed to ensure users have enough information to determ
The following diagram and table provide information about the building blocks of the consent prompt.
-![Building blocks of the consent prompt](./media/application-consent-experience/consent_prompt.png)
| # | Component | Purpose | | -- | -- | -- |
The following diagram and table provide information about the building blocks of
| 3 | App logo | This image should help users have a visual cue of whether this app is the app they intended to access. This image is provided by application developers and the ownership of this image isn't validated. | | 4 | App name | This value should inform users which application is requesting access to their data. Note this name is provided by the developers and the ownership of this app name isn't validated. | | 5 | Publisher domain | This value should provide users with a domain they may be able to evaluate for trustworthiness. This domain is provided by the developers and the ownership of this publisher domain is validated. |
-| 6 | Permissions | This list contains the permissions being requested by the client application. Users should always evaluate the types of permissions being requested to understand what data the client application will be authorized to access on their behalf if they accept. As an application developer it is best to request access, to the permissions with the least privilege. |
-| 7 | Permission description | This value is provided by the service exposing the permissions. To see the permission descriptions, you must toggle the chevron next to the permission. |
-| 8 | App terms | These terms contain links to the terms of service and privacy statement of the application. The publisher is responsible for outlining their rules in their terms of service. Additionally, the publisher is responsible for disclosing the way they use and share user data in their privacy statement. If the publisher doesn't provide links to these values for multi-tenant applications, there will be a bolded warning on the consent prompt. |
-| 9 | https://myapps.microsoft.com | This is the link where users can review and remove any non-Microsoft applications that currently have access to their data. |
+| 6 | Publisher verified | The blue "verified" badge means that the app publisher has verified their identity using a Microsoft Partner Network account and has completed the verification process.|
+| 7 | Publisher information | Displays whether the application is published by Microsoft or your organization. |
+| 8 | Permissions | This list contains the permissions being requested by the client application. Users should always evaluate the types of permissions being requested to understand what data the client application will be authorized to access on their behalf if they accept. As an application developer it is best to request access, to the permissions with the least privilege. |
+| 9 | Permission description | This value is provided by the service exposing the permissions. To see the permission descriptions, you must toggle the chevron next to the permission. |
+| 10| App terms | These terms contain links to the terms of service and privacy statement of the application. The publisher is responsible for outlining their rules in their terms of service. Additionally, the publisher is responsible for disclosing the way they use and share user data in their privacy statement. If the publisher doesn't provide links to these values for multi-tenant applications, there will be a bolded warning on the consent prompt. |
+| 11 | https://myapps.microsoft.com | This is the link where users can review and remove any non-Microsoft applications that currently have access to their data. |
+| 12 | Report it here | This link is used to report a suspicious app if you don't trust the app, if you believe the app is impersonating another app, if you believe the app will misuse your data, or for some other reason. |
-## Common consent scenarios
+## App requires a permission within the user's scope of authority
-Here are the consent experiences that a user may see in the common consent scenarios:
+A common consent scenario is that the user accesses an app which requires a permission set that is within the user's scope of authority. The user is directed to the user consent flow.
-1. Individuals accessing an app that directs them to the user consent flow while requiring a permission set that is within their scope of authority.
-
- 1. Admins will see an additional control on the traditional consent prompt that will allow them consent on behalf of the entire tenant. The control will be defaulted to off, so only when admins explicitly check the box will consent be granted on behalf of the entire tenant. As of today, this check box will only show for the Global Admin role, so Cloud Admin and App Admin will not see this checkbox.
+Admins will see an additional control on the traditional consent prompt that will allow them consent on behalf of the entire tenant. The control will be defaulted to off, so only when admins explicitly check the box will consent be granted on behalf of the entire tenant. As of today, this check box will only show for the Global Admin role, so Cloud Admin and App Admin will not see this checkbox.
- ![Consent prompt for scenario 1a](./media/application-consent-experience/consent_prompt_1a.png)
-
- 2. Users will see the traditional consent prompt.
- ![Screenshot that shows the traditional consent prompt.](./media/application-consent-experience/consent_prompt_1b.png)
+Users will see the traditional consent prompt.
-2. Individuals accessing an app that requires at least one permission that is outside their scope of authority.
- 1. Admins will see the same prompt as 1.i shown above.
- 2. Users will be blocked from granting consent to the application, and they will be told to ask their admin for access to the app.
-
- ![Screenshot of the consent prompt telling the user to ask an admin for access to the app.](./media/application-consent-experience/consent_prompt_2b.png)
-3. Individuals that navigate or are directed to the admin consent flow.
- 1. Admin users will see the admin consent prompt. The title and the permission descriptions changed on this prompt, the changes highlight the fact that accepting this prompt will grant the app access to the requested data on behalf of the entire tenant.
-
- ![Consent prompt for scenario 1b](./media/application-consent-experience/consent_prompt_3a.png)
-
- 1. Non-admin users will see the same screen as 2.ii shown above.
+## App requires a permission outside of the user's scope of authority
+
+Another common consent scenario is that the user accesses an app which requires at least one permission that is outside the user's scope of authority.
+
+Admins will see an additional control on the traditional consent prompt that will allow them consent on behalf of the entire tenant.
+
+Non-admin users will be blocked from granting consent to the application, and they will be told to ask their admin for access to the app.
+
+## User is directed to the admin consent flow
+
+Another common scenario is when the user navigates to or is directed to the admin consent flow.
+
+Admin users will see the admin consent prompt. The title and the permission descriptions changed on this prompt, the changes highlight the fact that accepting this prompt will grant the app access to the requested data on behalf of the entire tenant.
+
+Non-admin users will be blocked from granting consent to the application, and they will be told to ask their admin for access to the app.
+ ## Next steps+ - Get a step-by-step overview of [how the Azure AD consent framework implements consent](./quickstart-register-app.md). - For more depth, learn [how a multi-tenant application can use the consent framework](./howto-convert-app-to-be-multi-tenant.md) to implement "user" and "admin" consent, supporting more advanced multi-tier application patterns. - Learn [how to configure the app's publisher domain](howto-configure-publisher-domain.md).
active-directory Id Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/id-tokens.md
Previously updated : 09/09/2020 Last updated : 04/02/2021
To ensure that the token size doesn't exceed HTTP header size limits, Azure AD l
} ```
+## ID token lifetime
+
+By default, an ID token is valid for 1 hour - after 1 hour, the client must acquire a new ID token.
+
+You can adjust the lifetime of an ID token to control how often the client application expires the application session, and how often it requires the user to re-authenticate (either silently or interactively). For more information, read [Configurable token lifetimes](active-directory-configurable-token-lifetimes.md).
+ ## Validating an id_token Validating an `id_token` is similar to the first step of [validating an access token](access-tokens.md#validating-tokens) - your client can validate that the correct issuer has sent back the token and that it hasn't been tampered with. Because `id_tokens` are always a JWT token, many libraries exist to validate these tokens - we recommend you use one of these rather than doing it yourself. Note that only confidential clients (those with a secret) should validate ID tokens. Public applications (code running entirely on a device or network you don't control - for instance, a user's browser or their home network) don't benefit from validating the ID token, as a malicious user can intercept and edit the keys used for validation of the token.
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-dynamic-membership.md
The following is an example of a properly constructed membership rule with a sin
user.department -eq "Sales" ```
-Parentheses are optional for a single expression. The total length of the body of your membership rule cannot exceed 2048 characters.
+Parentheses are optional for a single expression. The total length of the body of your membership rule cannot exceed 3072 characters.
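+When a rule combines more than one expression, parentheses group the sub-expressions, as in this sketch of a two-department rule:
+
+```
+(user.department -eq "Sales") -or (user.department -eq "Marketing")
+```
+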
## Constructing the body of a membership rule
active-directory Resilient End User Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/resilient-end-user-experience.md
Choose built-in user flows if your business requirements can be met by them. Sin
Should you [choose custom policies](../../active-directory-b2c/custom-policy-get-started.md) because of your business requirements, make sure you perform policy-level testing for functional, performance, or scale in addition to application-level testing.
-See the article that [compares user flows and custom polices](../../active-directory-b2c/custom-policy-overview.md#comparing-user-flows-and-custom-policies) to help you decide.
+See the article that [compares user flows and custom policies](../../active-directory-b2c/user-flow-overview.md#comparing-user-flows-and-custom-policies) to help you decide.
## Choose multiple IDPs
active-directory How To Connect Import Export Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-import-export-config.md
To import previously exported settings:
1. Select **Import synchronization settings**. Browse for the previously exported JSON settings file. 1. Select **Install**.
- ![Screenshot that shows the Install required components screen](media/how-to-connect-import-export-config/import1.png)
+ ![Screenshot that shows the Install required components screen](media/how-to-connect-import-export-config/import-1.png)
> [!NOTE] > Override settings on this page like the use of SQL Server instead of LocalDB or the use of an existing service account instead of a default VSA. These settings aren't imported from the configuration settings file. They are there for information and comparison purposes.
Here are the only changes that can be made during the installation experience. A
- **On-premises directory credentials**: For each on-premises directory included in your synchronization settings, you must provide credentials to create a synchronization account or supply a pre-created custom synchronization account. This procedure is identical to the clean install experience with the exception that you can't add or remove directories. - **Configuration options**: As with a clean install, you might choose to configure the initial settings for whether to start automatic synchronization or enable Staging mode. The main difference is that Staging mode is intentionally enabled by default to allow comparison of the configuration and synchronization results prior to actively exporting the results to Azure.
-![Screenshot that shows the Connect your directories screen](media/how-to-connect-import-export-config/import2.png)
+![Screenshot that shows the Connect your directories screen](media/how-to-connect-import-export-config/import-2.png)
> [!NOTE] > Only one synchronization server can be in the primary role and actively exporting configuration changes to Azure. All other servers must be placed in Staging mode.
Migration requires running a PowerShell script that extracts the existing settin
### Migration process To migrate the settings:
-1. Start **AzureADConnect.msi** on the new staging server, and stop at the **Welcome** page of Azure AD Connect.
+ 1. Start **AzureADConnect.msi** on the new staging server, and stop at the **Welcome** page of Azure AD Connect.
-1. Copy **MigrateSettings.ps1** from the Microsoft Azure AD Connect\Tools directory to a location on the existing server. An example is C:\setup, where setup is a directory that was created on the existing server.
+ 2. Copy **MigrateSettings.ps1** from the Microsoft Azure AD Connect\Tools directory to a location on the existing server. An example is C:\setup, where setup is a directory that was created on the existing server.<br />
+ ![Screenshot that shows Azure AD Connect directories.](media/how-to-connect-import-export-config/migrate-1.png)
+
+ >[!NOTE]
+ > If you see the message "A positional parameter cannot be found that accepts argument **True**.", as shown below:
+ >
+ >
+ >![Screenshot of error](media/how-to-connect-import-export-config/migrate-5.png)
+ >Then edit the MigrateSettings.ps1 file and remove **$true** and run the script:
+ >![Screenshot to edit config](media/how-to-connect-import-export-config/migrate-6.png)
+
- ![Screenshot that shows Azure AD Connect directories.](media/how-to-connect-import-export-config/migrate1.png)
-1. Run the script as shown here, and save the entire down-level server configuration directory. Copy this directory to the new staging server. You must copy the entire **Exported-ServerConfiguration-*** folder to the new server.
- ![Screenshot that shows script in Windows PowerShell.](media/how-to-connect-import-export-config/migrate2.png)
- ![Screenshot that shows copying the Exported-ServerConfiguration-* folder.](media/how-to-connect-import-export-config/migrate3.png)
+ 3. Run the script as shown here, and save the entire down-level server configuration directory. Copy this directory to the new staging server. You must copy the entire **Exported-ServerConfiguration-*** folder to the new server.
+ ![Screenshot that shows script in Windows PowerShell.](media/how-to-connect-import-export-config/migrate-2.png)![Screenshot that shows copying the Exported-ServerConfiguration-* folder.](media/how-to-connect-import-export-config/migrate-3.png)
-1. Start **Azure AD Connect** by double-clicking the icon on the desktop. Accept the Microsoft Software License Terms, and on the next page, select **Customize**.
-1. Select the **Import synchronization settings** check box. Select **Browse** to browse the copied-over Exported-ServerConfiguration-* folder. Select the MigratedPolicy.json to import the migrated settings.
+ 4. Start **Azure AD Connect** by double-clicking the icon on the desktop. Accept the Microsoft Software License Terms, and on the next page, select **Customize**.
+ 5. Select the **Import synchronization settings** check box. Select **Browse** to browse the copied-over Exported-ServerConfiguration-* folder. Select the MigratedPolicy.json to import the migrated settings.
- ![Screenshot that shows the Import synchronization settings option.](media/how-to-connect-import-export-config/migrate4.png)
+ ![Screenshot that shows the Import synchronization settings option.](media/how-to-connect-import-export-config/migrate-4.png)
## Post-installation verification
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
-# Configure the admin consent workflow (preview)
+# Configure the admin consent workflow
-This article describes how to enable the admin consent workflow (preview) feature, which gives end users a way to request access to applications that require admin consent.
+This article describes how to enable the admin consent workflow feature, which gives end users a way to request access to applications that require admin consent.
Without an admin consent workflow, a user in a tenant where user consent is disabled will be blocked when they try to access any app that requires permissions to access organizational data. The user sees a generic error message that says they're unauthorized to access the app and they should ask their admin for help. But often, the user doesn't know who to contact, so they either give up or create a new local account in the application. Even when an admin is notified, there isn't always a streamlined process to help the admin grant access and notify their users.
To enable the admin consent workflow and choose reviewers:
3. In the filter search box, type "**Azure Active Directory**" and select **the Azure Active Directory** item. 4. From the navigation menu, click **Enterprise applications**. 5. Under **Manage**, select **User settings**.
-6. Under **Admin consent requests (Preview)**, set **Users can request admin consent to apps they are unable to consent to** to **Yes**.
+6. Under **Admin consent requests**, set **Users can request admin consent to apps they are unable to consent to** to **Yes**.
![Configure admin consent workflow settings](media/configure-admin-consent-workflow/admin-consent-requests-settings.png)
To review the admin consent requests and take action:
2. Select **All services** at the top of the left-hand navigation menu. The **Azure Active Directory Extension** opens. 3. In the filter search box, type "**Azure Active Directory**" and select the **Azure Active Directory** item. 4. From the navigation menu, click **Enterprise applications**.
-5. Under **Activity**, select **Admin consent requests (Preview)**.
+5. Under **Activity**, select **Admin consent requests**.
> [!NOTE] > Reviewers will only see admin requests that were created after they were designated as a reviewer.
Requestors will receive email notifications when:
## Audit logs
-The table below outlines the scenarios and audit values available for the admin consent workflow.
-
-> [!NOTE]
-> The user context of the audit actor is currently missing in all scenarios. This is a known limitation in the preview version.
-
+The table below outlines the scenarios and audit values available for the admin consent workflow.
|Scenario |Audit Service |Audit Category |Audit Activity |Audit Actor |Audit log limitations | |||||||
For more information on consenting to applications, see [Azure Active Directory
[Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
-[Azure AD on Microsoft Q&A](/answers/topics/azure-active-directory.html)
+[Azure AD on Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Manage Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-consent-requests.md
After end-user consent is disabled or restricted, there are several important co
## Process changes and education
- 1. Consider enabling the [admin consent workflow (preview)](configure-admin-consent-workflow.md) to allow users to request administrator approval directly from the consent screen.
+ 1. Consider enabling the [admin consent workflow](configure-admin-consent-workflow.md) to allow users to request administrator approval directly from the consent screen.
2. Ensure all administrators understand the [permissions and consent framework](../develop/consent-framework.md), how the [consent prompt](../develop/application-consent-experience.md) works, and how to [evaluate a request for tenant-wide admin consent](#evaluating-a-request-for-tenant-wide-admin-consent). 3. Review your organization's existing processes for users to request administrator approval for an application, and make updates if necessary. If processes are changed:
To disable all future user consent operations in your entire directory, follow t
* [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md#before-you-begin-protect-privileged-accounts-with-mfa) * [Configure the admin consent workflow](configure-admin-consent-workflow.md) * [Configure how end-users consent to applications](configure-user-consent.md)
-* [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
+* [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/known-issues.md
Title: FAQs and known issues with managed identities - Azure AD
+ Title: Known issues with managed identities - Azure Active Directory
description: Known issues with managed identities for Azure resources. documentationcenter:
ms.devlang:
Previously updated : 02/04/2021 Last updated : 04/08/2021
-# FAQs and known issues with managed identities for Azure resources
+# Known issues with managed identities
+This article discusses a couple of issues around managed identities and how to address them. Common questions about managed identities are documented in our [frequently asked questions](managed-identities-faq.md) article.
+## VM fails to start after being moved
-## Frequently Asked Questions (FAQs)
-
-> [!NOTE]
-> Managed identities for Azure resources is the new name for the service formerly known as Managed Service Identity (MSI).
-
-### How can you find resources that have a managed identity?
-
-You can find the list of resources that have a system-assigned managed identity by using the following Azure CLI Command:
-
-```azurecli-interactive
-az resource list --query "[?identity.type=='SystemAssigned'].{Name:name, principalId:identity.principalId}" --output table
-```
-
-### Do managed identities have a backing app object?
-
-No. Managed identities and Azure AD App Registrations are not the same thing in the directory.
-
-App registrations have two components: An Application Object + A Service Principal Object.
-Managed Identities for Azure resources have only one of those components: A Service Principal Object.
-
-Managed identities don't have an application object in the directory, which is what is commonly used to grant app permissions for MS graph. Instead, MS graph permissions for managed identities need to be granted directly to the Service Principal.
-
-### Can the same managed identity be used across multiple regions?
-
-In short, yes you can use user assigned managed identities in more than one Azure region. The longer answer is that while user assigned managed identities are created as regional resources the associated [service principal](../develop/app-objects-and-service-principals.md#service-principal-object) (SPN) created in Azure AD is available globally. The service principal can be used from any Azure region and its availability is dependent on the availability of Azure AD. For example, if you created a user assigned managed identity in the South-Central region and that region becomes unavailable this issue only impacts [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md) activities on the managed identity itself. The activities performed by any resources already configured to use the managed identities would not be impacted.
-
-### Does managed identities for Azure resources work with Azure Cloud Services?
-
-No, there are no plans to support managed identities for Azure resources in Azure Cloud Services.
-
-### What is the credential associated with a managed identity? How long is it valid and how often is it rotated?
-
-> [!NOTE]
-> How managed identities authenticate is an internal implementation detail that is subject to change without notice.
-
-Managed identities use certificate-based authentication. Each managed identity's credential has an expiration of 90 days and it is rolled after 45 days.
-
-### What is the security boundary of managed identities for Azure resources?
-
-The security boundary of the identity is the resource to which it is attached. For example, the security boundary for a Virtual Machine with managed identities for Azure resources enabled is the Virtual Machine. Any code running on that VM is able to call the managed identities for Azure resources endpoint and request tokens. The experience is similar with other resources that support managed identities for Azure resources.
-
-### What identity will IMDS default to if you don't specify the identity in the request?
-
-- If system assigned managed identity is enabled and no identity is specified in the request, IMDS will default to the system assigned managed identity.
-- If system assigned managed identity is not enabled, and only one user assigned managed identity exists, IMDS will default to that single user assigned managed identity.
-- If system assigned managed identity is not enabled, and multiple user assigned managed identities exist, then specifying a managed identity in the request is required.
-
-### Will managed identities be recreated automatically if I move a subscription to another directory?
-
-No. If you move a subscription to another directory, you will have to manually re-create them and grant Azure role assignments again.
-- For system assigned managed identities: disable and re-enable.
-- For user assigned managed identities: delete, re-create, and attach them again to the necessary resources (for example, virtual machines)
-
-### Can I use a managed identity to access a resource in a different directory/tenant?
-
-No. Managed identities do not currently support cross-directory scenarios.
-
-### What Azure RBAC permissions are required to manage a managed identity on a resource?
-
-- System-assigned managed identity: You need write permissions over the resource. For example, for virtual machines you need Microsoft.Compute/virtualMachines/write. This action is included in resource specific built-in roles like [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
-- User-assigned managed identity: You need write permissions over the resource (for example, Microsoft.Compute/virtualMachines/write for virtual machines), in addition to the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) role assignment over the managed identity.
-
-### How do I prevent the creation of user-assigned managed identities?
-
-You can keep your users from creating user-assigned managed identities by using [Azure Policy](../../governance/policy/overview.md):
-
-- Navigate to the [Azure portal](https://portal.azure.com) and go to **Policy**.
-- Choose **Definitions**.
-- Select **+ Policy definition** and enter the necessary information.
-- In the policy rule section, paste:
-
-```json
-{
- "mode": "All",
- "policyRule": {
- "if": {
- "field": "type",
- "equals": "Microsoft.ManagedIdentity/userAssignedIdentities"
- },
- "then": {
- "effect": "deny"
- }
- },
- "parameters": {}
-}
-
-```
-
-After creating the policy, assign it to the resource group that you would like to use.
-
-- Navigate to resource groups.
-- Find the resource group that you are using for testing.
-- Choose **Policies** from the left menu.
-- Select **Assign policy**.
-- In the **Basics** section, provide:
 - **Scope**: The resource group that we are using for testing
 - **Policy definition**: The policy that we created earlier.
-- Leave all other settings at their defaults and choose **Review + Create**.
-
-At this point any attempt to create a user-assigned managed identity in the resource group will fail.
-
- ![Policy violation](./media/known-issues/policy-violation.png)
-
-## Known issues
-
-### "Automation script" fails when attempting schema export for managed identities for Azure resources extension
-
-When managed identities for Azure resources is enabled on a VM, the following error is shown when attempting to use the "Automation script" feature for the VM, or its resource group:
-
-![Managed identities for Azure resources automation script export error](./media/msi-known-issues/automation-script-export-error.png)
-
-The managed identities for Azure resources VM extension (planned for deprecation in January 2019) does not currently support the ability to export its schema to a resource group template. As a result, the generated template does not show configuration parameters to enable managed identities for Azure resources on the resource. These sections can be added manually by following the examples in [Configure managed identities for Azure resources on an Azure VM using templates](qs-configure-template-windows-vm.md).
-
-When the schema export functionality becomes available for the managed identities for Azure resources VM extension (planned for deprecation in January 2019), it will be listed in [Exporting Resource Groups that contain VM extensions](../../virtual-machines/extensions/export-templates.md#supported-virtual-machine-extensions).
-
-### VM fails to start after being moved from resource group or subscription
-
-If you move a VM in the running state, it continues to run during the move. However, after the move, if the VM is stopped and restarted, it will fail to start. This issue happens because the VM is not updating the reference to the managed identities for Azure resources identity and continues to point to it in the old resource group.
+If you move a VM in a running state from a resource group or subscription, it continues to run during the move. However, after the move, if the VM is stopped and restarted, it will fail to start. This issue happens because the VM doesn't update the reference to the managed identity and continues to point to it in the old resource group.
**Workaround**
Once the VM is started, the tag can be removed by using the following command:

```azurecli
az vm update -n <VM Name> -g <Resource Group> --remove tags.fixVM
```
-### Transferring a subscription between Azure AD directories
+## Transferring a subscription between Azure AD directories
Managed identities do not get updated when a subscription is moved/transferred to another directory. As a result, any existing system-assigned or user-assigned managed identities will be broken.
Workaround for managed identities in a subscription that has been moved to another directory:
For more information, see [Transfer an Azure subscription to a different Azure AD directory](../../role-based-access-control/transfer-subscription.md).
-### Moving a user-assigned managed identity to a different resource group/subscription
-Moving a user-assigned managed identity to a different resource group is not supported.
+## Next steps
+
+You can review our article listing the [services that support managed identities](services-support-managed-identities.md) and our [frequently asked questions](managed-identities-faq.md).
active-directory Managed Identities Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/managed-identities-faq.md
+
+ Title: Managed identities for Azure resources frequently asked questions - Azure AD
+description: Frequently asked questions about managed identities
+
+documentationcenter:
++
+editor:
++
+ms.devlang:
+
+
+ Last updated : 04/08/2021
+# Managed identities for Azure resources frequently asked questions - Azure AD
++
+> [!NOTE]
+> Managed identities for Azure resources is the new name for the service formerly known as Managed Service Identity (MSI).
+
+## Administration
+
+### How can you find resources that have a managed identity?
+
+You can find the list of resources that have a system-assigned managed identity by using the following Azure CLI command:
+
+```azurecli-interactive
+az resource list --query "[?identity.type=='SystemAssigned'].{Name:name, principalId:identity.principalId}" --output table
+```
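User-assigned managed identities are themselves Azure resources, so you can also list them directly. A minimal sketch using the Azure CLI `az identity` command group (add `--resource-group` to scope the list):

```azurecli-interactive
az identity list --output table
```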
++
+### What Azure RBAC permissions are required to manage a managed identity on a resource?
+
+- System-assigned managed identity: You need write permissions over the resource. For example, for virtual machines you need Microsoft.Compute/virtualMachines/write. This action is included in resource specific built-in roles like [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
+- User-assigned managed identity: You need write permissions over the resource (for example, Microsoft.Compute/virtualMachines/write for virtual machines), in addition to the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) role assignment over the managed identity, as shown in the sketch below.
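For example, the following sketch grants a user the Managed Identity Operator role scoped to a single user-assigned identity. The user, subscription, resource group, and identity names are illustrative placeholders:

```azurecli-interactive
az role assignment create \
    --assignee "azureuser@contoso.com" \
    --role "Managed Identity Operator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
```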
+
+### How do I prevent the creation of user-assigned managed identities?
+
+You can keep your users from creating user-assigned managed identities by using [Azure Policy](../../governance/policy/overview.md):
+
+1. Navigate to the [Azure portal](https://portal.azure.com) and go to **Policy**.
+2. Choose **Definitions**.
+3. Select **+ Policy definition** and enter the necessary information.
+4. In the policy rule section, paste:
+
+ ```json
+ {
+ "mode": "All",
+ "policyRule": {
+ "if": {
+ "field": "type",
+ "equals": "Microsoft.ManagedIdentity/userAssignedIdentities"
+ },
+ "then": {
+ "effect": "deny"
+ }
+ },
+ "parameters": {}
+ }
+
+ ```
+
+After creating the policy, assign it to the resource group that you would like to use.
+
+1. Navigate to resource groups.
+2. Find the resource group that you are using for testing.
+3. Choose **Policies** from the left menu.
+4. Select **Assign policy**.
+5. In the **Basics** section, provide:
+ 1. **Scope**: The resource group that we are using for testing.
+ 1. **Policy definition**: The policy that we created earlier.
+6. Leave all other settings at their defaults and choose **Review + Create**.
+
+At this point, any attempt to create a user-assigned managed identity in the resource group will fail.
+
+ ![Policy violation](./media/known-issues/policy-violation.png)
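The same policy can also be created and assigned from the command line. A minimal sketch, assuming the `if`/`then` rule portion of the JSON above is saved locally as `policy.json` and using an illustrative definition name:

```azurecli-interactive
# Create the policy definition, then assign it at resource group scope
az policy definition create --name "deny-user-assigned-identities" --rules policy.json --mode All
az policy assignment create --policy "deny-user-assigned-identities" --resource-group "<resource-group>"
```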
+
+## Concepts
+
+### Do managed identities have a backing app object?
+
+No. Managed identities and Azure AD app registrations are not the same thing in the directory.
+
+App registrations have two components: an application object and a service principal object.
+Managed identities for Azure resources have only one of those components: a service principal object.
+
+Managed identities don't have an application object in the directory, which is what is commonly used to grant app permissions for Microsoft Graph. Instead, Microsoft Graph permissions for managed identities need to be granted directly to the service principal.
+
+### What is the credential associated with a managed identity? How long is it valid and how often is it rotated?
+
+> [!NOTE]
+> How managed identities authenticate is an internal implementation detail that is subject to change without notice.
+
+Managed identities use certificate-based authentication. Each managed identity's credential has an expiration of 90 days and it is rolled after 45 days.
+
+### What identity will IMDS default to if you don't specify the identity in the request?
+
+- If system assigned managed identity is enabled and no identity is specified in the request, IMDS defaults to the system assigned managed identity.
+- If system assigned managed identity is not enabled, and only one user assigned managed identity exists, IMDS defaults to that single user assigned managed identity.
+- If system assigned managed identity is not enabled, and multiple user assigned managed identities exist, then you are required to specify a managed identity in the request, as shown in the sketch below.
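The default can always be overridden by naming the identity in the request. A sketch of an IMDS token request made from inside a VM, passing a user-assigned identity's client ID (the target resource and client ID are illustrative):

```bash
# Request an Azure Resource Manager token for a specific user-assigned identity
curl -H "Metadata:true" \
  'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/&client_id=<client-id>'
```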
+
+## Limitations
+
+### Can the same managed identity be used across multiple regions?
+
+In short, yes, you can use user-assigned managed identities in more than one Azure region. The longer answer is that while user-assigned managed identities are created as regional resources, the associated [service principal](../develop/app-objects-and-service-principals.md#service-principal-object) (SP) created in Azure AD is available globally. The service principal can be used from any Azure region, and its availability is dependent on the availability of Azure AD. For example, if you created a user-assigned managed identity in the South Central US region and that region became unavailable, the issue would only impact [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md) activities on the managed identity itself. The activities performed by any resources already configured to use the managed identity would not be impacted.
+
+### Do managed identities for Azure resources work with Azure Cloud Services?
+
+No, there are no plans to support managed identities for Azure resources in Azure Cloud Services.
++
+### What is the security boundary of managed identities for Azure resources?
+
+The security boundary of the identity is the resource to which it is attached. For example, the security boundary for a Virtual Machine with managed identities for Azure resources enabled is the Virtual Machine. Any code running on that VM is able to call the managed identities for Azure resources endpoint and request tokens. The experience is similar with other resources that support managed identities for Azure resources.
+
+### Will managed identities be recreated automatically if I move a subscription to another directory?
+
+No. If you move a subscription to another directory, you will have to manually re-create the managed identities and grant the Azure role assignments again.
+- For system assigned managed identities: disable and re-enable, as shown in the CLI sketch below.
+- For user assigned managed identities: delete, re-create, and attach them again to the necessary resources (for example, virtual machines)
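For a VM with a system-assigned identity, that disable/re-enable cycle can be done with the Azure CLI. A minimal sketch with illustrative resource names:

```azurecli-interactive
# Disable, then re-enable, the system-assigned identity on a VM
az vm identity remove --resource-group <resource-group> --name <vm-name>
az vm identity assign --resource-group <resource-group> --name <vm-name>
```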
+
+### Can I use a managed identity to access a resource in a different directory/tenant?
+
+No. Managed identities do not currently support cross-directory scenarios.
+
+### Are there any rate limits that apply to managed identities?
+
+Managed identities limits have dependencies on Azure service limits, Azure Instance Metadata Service (IMDS) limits, and Azure Active Directory service limits.
+
+- **Azure service limits** define the number of create operations that can be performed at the tenant and subscription levels. User-assigned managed identities also have [limitations](../../azure-resource-manager/management/azure-subscription-service-limits.md#managed-identity-limits) around how they may be named.
+- **IMDS**: In general, requests to IMDS are limited to five requests per second. Requests exceeding this threshold will be rejected with 429 responses. Requests to the Managed Identity category are limited to 20 requests per second and 5 concurrent requests. You can read more at the [Azure Instance Metadata Service (Windows)](../../virtual-machines/windows/instance-metadata-service.md?tabs=windows#managed-identity) article.
+- **Azure Active Directory service**: Each managed identity counts towards the object quota limit in an Azure AD tenant as described in [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md).
++
+### Is it OK to move a user-assigned managed identity to a different resource group/subscription?
+
+Moving a user-assigned managed identity to a different resource group is not supported.
+
+## Next steps
+
+- Learn [how managed identities work with virtual machines](how-managed-identities-work-vm.md)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/overview.md
ms.devlang: Previously updated : 04/05/2021 Last updated : 04/07/2021
A common challenge for developers is the management of secrets and credentials used to secure communication between different components making up a solution. Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications may use the managed identity to obtain Azure AD tokens. For example, an application may use a managed identity to access resources like [Azure Key Vault](../../key-vault/general/overview.md) where developers can store credentials in a secure manner or to access storage accounts.
-What can a managed identity be used for?
+What can a managed identity be used for?</br>
- > [!VIDEO https://www.youtube.com/embed/5lqayO_oeEo]
+> [!VIDEO https://www.youtube.com/embed/5lqayO_oeEo]
Here are some of the benefits of using managed identities:
There are two types of managed identities:
- **System-assigned** Some Azure services allow you to enable a managed identity directly on a service instance. When you enable a system-assigned managed identity, an identity is created in Azure AD that is tied to the lifecycle of that service instance. So when the resource is deleted, Azure automatically deletes the identity for you. By design, only that Azure resource can use this identity to request tokens from Azure AD.
- **User-assigned** You may also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](how-to-manage-ua-identity-portal.md) and assign it to one or more instances of an Azure service. In the case of user-assigned managed identities, the identity is managed separately from the resources that use it. </br></br>
- > [!VIDEO https://www.youtube.com/embed/OzqpxeD3fG0]
+> [!VIDEO https://www.youtube.com/embed/OzqpxeD3fG0]
The table below shows the differences between the two types of managed identities.
| Property | System-assigned managed identity | User-assigned managed identity |
|---|---|---|
| Sharing across Azure resources | Cannot be shared. <br/> It can only be associated with a single Azure resource. | Can be shared. <br/> The same user-assigned managed identity can be associated with more than one Azure resource. |
| Common use cases | Workloads that are contained within a single Azure resource. <br/> Workloads for which you need independent identities. <br/> For example, an application that runs on a single virtual machine. | Workloads that run on multiple resources and which can share a single identity. <br/> Workloads that need pre-authorization to a secure resource as part of a provisioning flow. <br/> Workloads where resources are recycled frequently, but permissions should stay consistent. <br/> For example, a workload where multiple virtual machines need to access the same resource. |
->[!IMPORTANT]
->Regardless of the type of identity chosen a managed identity is a service principal of a special type that may only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is automatically removed.
+> [!IMPORTANT]
+> Regardless of the type of identity chosen a managed identity is a service principal of a special type that may only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is automatically removed.
## How can I use managed identities for Azure resources?
Managed identities for Azure resources can be used to authenticate to services that support Azure AD authentication. For a list of Azure services that support the managed identities for Azure resources feature, see [Services that support managed identities for Azure resources](./services-support-managed-identities.md).
+## Which operations can I perform using managed identities?
+
+Resources that support system assigned managed identities allow you to:
+
+- Enable or disable managed identities at the resource level.
+- Use RBAC roles to [grant permissions](howto-assign-access-portal.md).
+- View create, read, update, delete (CRUD) operations in [Azure Activity logs](../../azure-resource-manager/management/view-activity-logs.md).
+- View sign-in activity in Azure AD [sign-in logs](../reports-monitoring/concept-sign-ins.md).
+
+If you choose a user assigned managed identity instead:
+
+- You can [create, read, update, delete](how-to-manage-ua-identity-portal.md) the identities.
+- You can use RBAC role assignments to [grant permissions](howto-assign-access-portal.md).
+- User assigned managed identities can be used on more than one resource.
+- CRUD operations are available for review in [Azure Activity logs](../../azure-resource-manager/management/view-activity-logs.md).
+- View sign-in activity in Azure AD [sign-in logs](../reports-monitoring/concept-sign-ins.md).
+
+Operations on managed identities may be performed by using an Azure Resource Manager (ARM) template, the Azure portal, the Azure CLI, PowerShell, and REST APIs.
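As a minimal Azure CLI sketch of both models (the resource names are illustrative):

```azurecli-interactive
# Enable a system-assigned managed identity on an existing VM
az vm identity assign --resource-group <resource-group> --name <vm-name>

# Create a user-assigned managed identity as a standalone resource
az identity create --resource-group <resource-group> --name <identity-name>
```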
## Next steps

* [Use a Windows VM system-assigned managed identity to access Resource Manager](tutorial-windows-vm-access-arm.md)
* [Use a Linux VM system-assigned managed identity to access Resource Manager](tutorial-linux-vm-access-arm.md)
* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md)
* [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md)
-* [Implementing Managed Identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing).
+* [Implementing Managed Identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing).
active-directory My Apps Portal User Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-apps-portal-user-collections.md
Title: Collections (preview) in the My Apps portal - Azure AD
+ Title: Organize apps into collections in the My Apps portal - Azure AD
description: Learn how to create, edit, delete, hide, and show app collections in My Apps.
Previously updated : 01/19/2021 Last updated : 04/07/2021
-# User collections (preview) in My Apps
+# Organize apps using collections from My Apps
My Apps is your one-stop shop for launching and managing all of your work or school apps. Create collections to organize your apps and make it easier to find the apps you need.

In this article, you'll learn how to:

- Create your own collections of apps
In this article, you'll learn how to:
:::image type="content" source="media/my-apps-portal-user-collections/3-add-apps.png" alt-text="Adding apps from the list to your collection":::
-1. On the following you can reorder or delete apps, or select **Add apps** to select more apps for the collection. When youΓÇÖre happy with your choices, select **Create new**.
+1. On the **Create new** pane you can reorder or delete apps, or select **Add apps** to select more apps for the collection. When you're happy with your choices, select **Create new**.
:::image type="content" source="media/my-apps-portal-user-collections/4-create-button.png" alt-text="Select the Create new button to save the collection to My Apps":::
You can only edit collections you created. To edit a collection you already crea
:::image type="content" source="media/my-apps-portal-user-collections/9-manage-apps-again.png" alt-text="Use the Manage command to manage your apps":::
-1. From here you can set the order in which collections appear in My Apps. The collection at th top of the list will be the default collection you see every time you go to myapps.microsoft.com.
+1. From here you can set the order in which collections appear in My Apps. The collection at the top of the list will be the default collection you see every time you go to myapps.microsoft.com.
:::image type="content" source="media/my-apps-portal-user-collections/10-default-collection.png" alt-text="My Apps now contains your new collection":::
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-ad-integration-cli.md
For best practices on identity and resource control, see [Best practices for aut
[az-ad-signed-in-user-show]: /cli/azure/ad/signed-in-user#az-ad-signed-in-user-show
[install-azure-cli]: /cli/azure/install-azure-cli
[az-ad-sp-credential-reset]: /cli/azure/ad/sp/credential#az-ad-sp-credential-reset
-[rbac-authorization]: concepts-identity.md#kubernetes-role-based-access-control-kubernetes-rbac
+[rbac-authorization]: concepts-identity.md#kubernetes-rbac
[operator-best-practices-identity]: operator-best-practices-identity.md
[azure-ad-rbac]: azure-ad-rbac.md
[managed-aad]: managed-aad.md
aks Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-ad-rbac.md
For best practices on identity and resource control, see [Best practices for aut
[az-ad-user-create]: /cli/azure/ad/user#az-ad-user-create
[az-ad-group-member-add]: /cli/azure/ad/group/member#az-ad-group-member-add
[az-ad-group-show]: /cli/azure/ad/group#az-ad-group-show
-[rbac-authorization]: concepts-identity.md#kubernetes-role-based-access-control-kubernetes-rbac
+[rbac-authorization]: concepts-identity.md#kubernetes-rbac
[operator-best-practices-identity]: operator-best-practices-identity.md
aks Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/best-practices.md
Title: Best practices for Azure Kubernetes Service (AKS)
description: Collection of the cluster operator and developer best practices to build and manage applications in Azure Kubernetes Service (AKS) Previously updated : 12/07/2018 Last updated : 03/09/2021

# Cluster operator and developer best practices to build and manage applications on Azure Kubernetes Service (AKS)
-To build and run applications successfully in Azure Kubernetes Service (AKS), there are some key considerations to understand and implement. These areas include multi-tenancy and scheduler features, cluster and pod security, or business continuity and disaster recovery. The following best practices are grouped to help cluster operators and developers understand the considerations for each of these areas, and implement the appropriate features.
+Building and running applications successfully in Azure Kubernetes Service (AKS) require understanding and implementation of some key considerations, including:
+* Multi-tenancy and scheduler features.
+* Cluster and pod security.
+* Business continuity and disaster recovery.
++
+The AKS product group, engineering teams, and field teams (including global black belts [GBBs]) contributed to, wrote, and grouped the following best practices and conceptual articles. Their purpose is to help cluster operators and developers understand the considerations above and implement the appropriate features.
-These best practices and conceptual articles have been written in conjunction with the AKS product group, engineering teams, and field teams including global black belts (GBBs).
## Cluster operator best practices
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-clusters-workloads.md
Title: Concepts - Kubernetes basics for Azure Kubernetes Services (AKS)
description: Learn the basic cluster and workload components of Kubernetes and how they relate to features in Azure Kubernetes Service (AKS) Previously updated : 12/07/2020 Last updated : 03/05/2020

# Kubernetes core concepts for Azure Kubernetes Service (AKS)
-As application development moves towards a container-based approach, the need to orchestrate and manage resources is important. Kubernetes is the leading platform that provides the ability to provide reliable scheduling of fault-tolerant application workloads. Azure Kubernetes Service (AKS) is a managed Kubernetes offering that further simplifies container-based application deployment and management.
+Application development continues to move toward a container-based approach, increasing our need to orchestrate and manage resources. As the leading platform, Kubernetes provides reliable scheduling of fault-tolerant application workloads. Azure Kubernetes Service (AKS), a managed Kubernetes offering, further simplifies container-based application deployment and management.
-This article introduces the core Kubernetes infrastructure components such as the *control plane*, *nodes*, and *node pools*. Workload resources such as *pods*, *deployments*, and *sets* are also introduced, along with how to group resources into *namespaces*.
+This article introduces:
+* Core Kubernetes infrastructure components:
+ * *control plane*
+ * *nodes*
+ * *node pools*
+* Workload resources:
+ * *pods*
+ * *deployments*
+ * *sets*
+* How to group resources into *namespaces*.
## What is Kubernetes?
-Kubernetes is a rapidly evolving platform that manages container-based applications and their associated networking and storage components. The focus is on the application workloads, not the underlying infrastructure components. Kubernetes provides a declarative approach to deployments, backed by a robust set of APIs for management operations.
+Kubernetes is a rapidly evolving platform that manages container-based applications and their associated networking and storage components. Kubernetes focuses on the application workloads, not the underlying infrastructure components. Kubernetes provides a declarative approach to deployments, backed by a robust set of APIs for management operations.
-You can build and run modern, portable, microservices-based applications that benefit from Kubernetes orchestrating and managing the availability of those application components. Kubernetes supports both stateless and stateful applications as teams progress through the adoption of microservices-based applications.
+You can build and run modern, portable, microservices-based applications, using Kubernetes to orchestrate and manage the availability of the application components. Kubernetes supports both stateless and stateful applications as teams progress through the adoption of microservices-based applications.
As an open platform, Kubernetes allows you to build your applications with your preferred programming language, OS, libraries, or messaging bus. Existing continuous integration and continuous delivery (CI/CD) tools can integrate with Kubernetes to schedule and deploy releases.
-Azure Kubernetes Service (AKS) provides a managed Kubernetes service that reduces the complexity for deployment and core management tasks, including coordinating upgrades. The AKS control plane is managed by the Azure platform, and you only pay for the AKS nodes that run your applications. AKS is built on top of the open-source Azure Kubernetes Service Engine ([aks-engine][aks-engine]).
+AKS provides a managed Kubernetes service that reduces the complexity of deployment and core management tasks, like upgrade coordination. The Azure platform manages the AKS control plane, and you only pay for the AKS nodes that run your applications. AKS is built on top of the open-source Azure Kubernetes Service Engine: [aks-engine][aks-engine].
## Kubernetes cluster architecture

A Kubernetes cluster is divided into two components:

-- The *Control plane* provides the core Kubernetes services and orchestration of application workloads.
-- *Nodes* which run your application workloads.
+- *Control plane*: provides the core Kubernetes services and orchestration of application workloads.
+- *Nodes*: run your application workloads.
![Kubernetes control plane and node components](media/concepts-clusters-workloads/control-plane-and-nodes.png)

## Control plane
-When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided as a managed Azure resource abstracted from the user. There's no cost for the control plane, only the nodes that are part of the AKS cluster. The control plane and its resources reside only on the region where you created the cluster.
+When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided at no cost as a managed Azure resource abstracted from the user. You only pay for the nodes attached to the AKS cluster. The control plane and its resources reside only on the region where you created the cluster.
The control plane includes the following core Kubernetes components:

-- *kube-apiserver* - The API server is how the underlying Kubernetes APIs are exposed. This component provides the interaction for management tools, such as `kubectl` or the Kubernetes dashboard.
-- *etcd* - To maintain the state of your Kubernetes cluster and configuration, the highly available *etcd* is a key value store within Kubernetes.
-- *kube-scheduler* - When you create or scale applications, the Scheduler determines what nodes can run the workload and starts them.
-- *kube-controller-manager* - The Controller Manager oversees a number of smaller Controllers that perform actions such as replicating pods and handling node operations.
+| Component | Description |
+| -- | - |
+| *kube-apiserver* | The API server is how the underlying Kubernetes APIs are exposed. This component provides the interaction for management tools, such as `kubectl` or the Kubernetes dashboard. |
+| *etcd* | To maintain the state of your Kubernetes cluster and configuration, the highly available *etcd* is a key value store within Kubernetes. |
+| *kube-scheduler* | When you create or scale applications, the Scheduler determines what nodes can run the workload and starts them. |
+| *kube-controller-manager* | The Controller Manager oversees a number of smaller Controllers that perform actions such as replicating pods and handling node operations. |
-AKS provides a single-tenant control plane, with a dedicated API server, Scheduler, etc. You define the number and size of the nodes, and the Azure platform configures the secure communication between the control plane and nodes. Interaction with the control plane occurs through Kubernetes APIs, such as `kubectl` or the Kubernetes dashboard.
+AKS provides a single-tenant control plane, with a dedicated API server, scheduler, etc. You define the number and size of the nodes, and the Azure platform configures the secure communication between the control plane and nodes. Interaction with the control plane occurs through Kubernetes APIs, such as `kubectl` or the Kubernetes dashboard.
-This managed control plane means you don't need to configure components like a highly available *etcd* store, but it also means you can't access the control plane directly. Upgrades to Kubernetes are orchestrated through the Azure CLI or Azure portal, which upgrades the control plane and then the nodes. To troubleshoot possible issues, you can review the control plane logs through Azure Monitor logs.
+While you don't need to configure components (like a highly available *etcd* store) with this managed control plane, you can't access the control plane directly. Kubernetes control plane and node upgrades are orchestrated through the Azure CLI or Azure portal. To troubleshoot possible issues, you can review the control plane logs through Azure Monitor logs.
-If you need to configure the control plane in a particular way or need direct access to it, you can deploy your own Kubernetes cluster using [aks-engine][aks-engine].
+To configure or directly access a control plane, deploy your own Kubernetes cluster using [aks-engine][aks-engine].
For associated best practices, see [Best practices for cluster security and upgrades in AKS][operator-best-practices-cluster-security].

## Nodes and node pools
-To run your applications and supporting services, you need a Kubernetes *node*. An AKS cluster has one or more nodes, which is an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime:
+To run your applications and supporting services, you need a Kubernetes *node*. An AKS cluster has at least one node, an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime.
+
+| Component | Description |
+| -- | - |
+| `kubelet` | The Kubernetes agent that processes the orchestration requests from the control plane and scheduling of running the requested containers. |
+| *kube-proxy* | Handles virtual networking on each node. The proxy routes network traffic and manages IP addressing for services and pods. |
| *container runtime* | Allows containerized applications to run and interact with additional resources, such as the virtual network and storage. Node pools using Kubernetes version 1.19 and greater use `containerd` as their container runtime. Node pools using Kubernetes versions prior to 1.19 use [Moby](https://mobyproject.org/) (upstream docker) as their container runtime. |
-- The `kubelet` is the Kubernetes agent that processes the orchestration requests from the control plane and scheduling of running the requested containers.
-- Virtual networking is handled by the *kube-proxy* on each node. The proxy routes network traffic and manages IP addressing for services and pods.
-- The *container runtime* is the component that allows containerized applications to run and interact with additional resources such as the virtual network and storage. AKS clusters using Kubernetes version 1.19 node pools and greater use `containerd` as its container runtime. AKS clusters using Kubernetes prior to v1.19 for node pools use [Moby](https://mobyproject.org/) (upstream docker) as its container runtime.

![Azure virtual machine and supporting resources for a Kubernetes node](media/concepts-clusters-workloads/aks-node-resource-interactions.png)
-The Azure VM size for your nodes defines how many CPUs, how much memory, and the size and type of storage available (such as high-performance SSD or regular HDD). If you anticipate a need for applications that require large amounts of CPU and memory or high-performance storage, plan the node size accordingly. You can also scale out the number of nodes in your AKS cluster to meet demand.
+The Azure VM size for your nodes defines CPUs, memory, size, and the storage type available (such as high-performance SSD or regular HDD). Plan the node size around whether your applications may require large amounts of CPU and memory or high-performance storage. Scale out the number of nodes in your AKS cluster to meet demand.
-In AKS, the VM image for the nodes in your cluster is currently based on Ubuntu Linux or Windows Server 2019. When you create an AKS cluster or scale out the number of nodes, the Azure platform creates the requested number of VMs and configures them. There's no manual configuration for you to perform. Agent nodes are billed as standard virtual machines, so any discounts you have on the VM size you're using (including [Azure reservations][reservation-discounts]) are automatically applied.
+In AKS, the VM image for your cluster's nodes is based on Ubuntu Linux or Windows Server 2019. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs. Agent nodes are billed as standard VMs, so any VM size discounts (including [Azure reservations][reservation-discounts]) are automatically applied.
-If you need to use a different host OS, container runtime, or include custom packages, you can deploy your own Kubernetes cluster using [aks-engine][aks-engine]. The upstream `aks-engine` releases features and provides configuration options before they are officially supported in AKS clusters. For example, if you wish to use a container runtime other than `containerd` or Moby, you can use `aks-engine` to configure and deploy a Kubernetes cluster that meets your current needs.
+Deploy your own Kubernetes cluster with [aks-engine][aks-engine] if using a different host OS, container runtime, or including different custom packages. The upstream `aks-engine` releases features and provides configuration options ahead of support in AKS clusters. So, if you wish to use a container runtime other than `containerd` or [Moby](https://mobyproject.org/), you can run `aks-engine` to configure and deploy a Kubernetes cluster that meets your current needs.
### Resource reservations
-Node resources are utilized by AKS to make the node function as part of your cluster. This usage can create a discrepancy between your node's total resources and the resources allocatable when used in AKS. This information is important to note when setting requests and limits for user deployed pods.
+AKS uses node resources to help the node function as part of your cluster. This usage can create a discrepancy between your node's total resources and the allocatable resources in AKS. Remember this information when setting requests and limits for user deployed pods.
To find a node's allocatable resources, run:

```kubectl
kubectl describe node [NODE_NAME]
```
-To maintain node performance and functionality, resources are reserved on each node by AKS. As a node grows larger in resources, the resource reservation grows due to a higher amount of user deployed pods needing management.
+To maintain node performance and functionality, AKS reserves resources on each node. As a node grows larger in resources, the resource reservation grows due to a higher need for management of user-deployed pods.
>[!NOTE]
> Using AKS add-ons such as Container Insights (OMS) will consume additional node resources.

Two types of resources are reserved:

-- **CPU** - Reserved CPU is dependent on node type and cluster configuration, which may cause less allocatable CPU due to running additional features
+- **CPU**
+ Reserved CPU is dependent on node type and cluster configuration, which may cause less allocatable CPU due to running additional features.
| CPU cores on host | 1 | 2 | 4 | 8 | 16 | 32 | 64 |
|---|---|---|---|---|---|---|---|
| Kube-reserved (millicores) | 60 | 100 | 140 | 180 | 260 | 420 | 740 |

-- **Memory** - Memory utilized by AKS includes the sum of two values.
+- **Memory**
+ Memory utilized by AKS includes the sum of two values.
- 1. The kubelet daemon is installed on all Kubernetes agent nodes to manage container creation and termination. By default on AKS, this daemon has the following eviction rule: *memory.available<750Mi*, which means a node must always have at least 750 Mi allocatable at all times. When a host is below that threshold of available memory, the kubelet will terminate one of the running pods to free memory on the host machine and protect it. This action is triggered once available memory decreases beyond the 750Mi threshold.
+ 1. **`kubelet` daemon**
+ The `kubelet` daemon is installed on all Kubernetes agent nodes to manage container creation and termination.
+
+ By default on AKS, the `kubelet` daemon has the *memory.available<750Mi* eviction rule, ensuring a node must always have at least 750 Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` will terminate one of the running pods to free up memory on the host machine.
- 2. The second value is a regressive rate of memory reservations for the kubelet daemon to properly function (kube-reserved).
+ 2. **A regressive rate of memory reservations** for the kubelet daemon to properly function (*kube-reserved*).
- 25% of the first 4 GB of memory - 20% of the next 4 GB of memory (up to 8 GB) - 10% of the next 8 GB of memory (up to 16 GB) - 6% of the next 112 GB of memory (up to 128 GB) - 2% of any memory above 128 GB
-The above rules for memory and CPU allocation are used to keep agent nodes healthy, including some hosting system pods that are critical to cluster health. These allocation rules also cause the node to report less allocatable memory and CPU than it normally would if it were not part of a Kubernetes cluster. The above resource reservations can't be changed.
+Memory and CPU allocation rules:
+* Keep agent nodes healthy, including some hosting system pods critical to cluster health.
+* Cause the node to report less allocatable memory and CPU than it would if it were not part of a Kubernetes cluster.
+
+The above resource reservations can't be changed.
For example, if a node offers 7 GB, it will report 34% of memory not allocatable including the 750Mi hard eviction threshold.
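To see where the 34% figure comes from, apply the reservation rules above to a 7 GB node:

- kube-reserved memory: 25% of the first 4 GB (1 GB) + 20% of the remaining 3 GB (0.6 GB) = 1.6 GB
- Hard eviction threshold: 750 Mi, roughly 0.75 GB
- Total reserved: about 2.35 GB of 7 GB, or roughly 34% of memory reported as not allocatable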
For associated best practices, see [Best practices for basic scheduler features
### Node pools
-Nodes of the same configuration are grouped together into *node pools*. A Kubernetes cluster contains one or more node pools. The initial number of nodes and size are defined when you create an AKS cluster, which creates a *default node pool*. This default node pool in AKS contains the underlying VMs that run your agent nodes.
+Nodes of the same configuration are grouped together into *node pools*. A Kubernetes cluster contains at least one node pool. The initial number of nodes and size are defined when you create an AKS cluster, which creates a *default node pool*. This default node pool in AKS contains the underlying VMs that run your agent nodes.
> [!NOTE]
-> To ensure your cluster operates reliably, you should run at least 2 (two) nodes in the default node pool.
+> To ensure your cluster operates reliably, you should run at least two (2) nodes in the default node pool.
-When you scale or upgrade an AKS cluster, the action is performed against the default node pool. You can also choose to scale or upgrade a specific node pool. For upgrade operations, running containers are scheduled on other nodes in the node pool until all the nodes are successfully upgraded.
+You scale or upgrade an AKS cluster against the default node pool. You can choose to scale or upgrade a specific node pool. For upgrade operations, running containers are scheduled on other nodes in the node pool until all the nodes are successfully upgraded.
For more information about how to use multiple node pools in AKS, see [Create and manage multiple node pools for a cluster in AKS][use-multiple-node-pools].

### Node selectors
-In an AKS cluster that contains multiple node pools, you may need to tell the Kubernetes Scheduler which node pool to use for a given resource. For example, ingress controllers shouldn't run on Windows Server nodes. Node selectors let you define various parameters, such as the node OS, to control where a pod should be scheduled.
+In an AKS cluster with multiple node pools, you may need to tell the Kubernetes Scheduler which node pool to use for a given resource. For example, ingress controllers shouldn't run on Windows Server nodes.
+
+Node selectors let you define various parameters, like node OS, to control where a pod should be scheduled.
The following basic example schedules an NGINX instance on a Linux node using the node selector *"beta.kubernetes.io/os": linux*:
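A minimal sketch of such a pod (the pod name and container image are illustrative):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: mcr.microsoft.com/oss/nginx/nginx:1.15.2-alpine   # illustrative image
  nodeSelector:
    "beta.kubernetes.io/os": linux
```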
For more information on how to control where pods are scheduled, see [Best pract
## Pods
-Kubernetes uses *pods* to run an instance of your application. A pod represents a single instance of your application. Pods typically have a 1:1 mapping with a container, although there are advanced scenarios where a pod may contain multiple containers. These multi-container pods are scheduled together on the same node, and allow containers to share related resources.
+Kubernetes uses *pods* to run an instance of your application. A pod represents a single instance of your application.
-When you create a pod, you can define *resource requests* to request a certain amount of CPU or memory resources. The Kubernetes Scheduler tries to schedule the pods to run on a node with available resources to meet the request. You can also specify maximum resource limits that prevent a given pod from consuming too much compute resource from the underlying node. A best practice is to include resource limits for all pods to help the Kubernetes Scheduler understand which resources are needed and permitted.
+Pods typically have a 1:1 mapping with a container. In advanced scenarios, a pod may contain multiple containers. Multi-container pods are scheduled together on the same node, and allow containers to share related resources.
+
+When you create a pod, you can define *resource requests* to request a certain amount of CPU or memory resources. The Kubernetes Scheduler tries to meet the request by scheduling the pods to run on a node with available resources. You can also specify maximum resource limits to prevent a pod from consuming too much compute resource from the underlying node. Best practice is to include resource limits for all pods to help the Kubernetes Scheduler identify necessary, permitted resources.
For more information, see [Kubernetes pods][kubernetes-pods] and [Kubernetes pod lifecycle][kubernetes-pod-lifecycle].
-A pod is a logical resource, but the container(s) are where the application workloads run. Pods are typically ephemeral, disposable resources, and individually scheduled pods miss some of the high availability and redundancy features Kubernetes provides. Instead, pods are deployed and managed by Kubernetes *Controllers*, such as the Deployment Controller.
+A pod is a logical resource, but application workloads run on the containers. Pods are typically ephemeral, disposable resources. Individually scheduled pods miss some of the high availability and redundancy Kubernetes features. Instead, pods are deployed and managed by Kubernetes *Controllers*, such as the Deployment Controller.
## Deployments and YAML manifests
-A *deployment* represents one or more identical pods, managed by the Kubernetes Deployment Controller. A deployment defines the number of *replicas* (pods) to create, and the Kubernetes Scheduler ensures that if pods or nodes encounter problems, additional pods are scheduled on healthy nodes.
+A *deployment* represents identical pods managed by the Kubernetes Deployment Controller. A deployment defines the number of pod *replicas* to create. The Kubernetes Scheduler ensures that additional pods are scheduled on healthy nodes if pods or nodes encounter problems.
+
+You can update deployments to change the configuration of pods, container image used, or attached storage. The Deployment Controller:
+* Drains and terminates a given number of replicas.
+* Creates replicas from the new deployment definition.
+* Continues the process until all replicas in the deployment are updated.
-You can update deployments to change the configuration of pods, container image used, or attached storage. The Deployment Controller drains and terminates a given number of replicas, creates replicas from the new deployment definition, and continues the process until all replicas in the deployment are updated.
+Most stateless applications in AKS should use the deployment model rather than scheduling individual pods. Kubernetes can monitor deployment health and status to ensure that the required number of replicas run within the cluster. When scheduled individually, pods aren't restarted if they encounter a problem, and aren't rescheduled on healthy nodes if their current node encounters a problem.
-Most stateless applications in AKS should use the deployment model rather than scheduling individual pods. Kubernetes can monitor the health and status of deployments to ensure that the required number of replicas run within the cluster. When you only schedule individual pods, the pods aren't restarted if they encounter a problem, and aren't rescheduled on healthy nodes if their current node encounters a problem.
+You don't want to disrupt management decisions with an update process if your application requires a minimum number of available instances. *Pod Disruption Budgets* define how many replicas in a deployment can be taken down during an update or node upgrade. For example, if you have *five (5)* replicas in your deployment, you can define a pod disruption of *4 (four)* to only allow one replica to be deleted or rescheduled at a time. As with pod resource limits, best practice is to define pod disruption budgets on applications that require a minimum number of replicas to always be present.
-If an application requires a quorum of instances to always be available for management decisions to be made, you don't want an update process to disrupt that ability. *Pod Disruption Budgets* can be used to define how many replicas in a deployment can be taken down during an update or node upgrade. For example, if you have *five (5)* replicas in your deployment, you can define a pod disruption of *4* to only permit one replica from being deleted/rescheduled at a time. As with pod resource limits, a best practice is to define pod disruption budgets on applications that require a minimum number of replicas to always be present.
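As a sketch, a Pod Disruption Budget for that five-replica deployment could require four replicas to remain available; the name and label selector are illustrative, and clusters before Kubernetes 1.21 need `apiVersion: policy/v1beta1`:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 4          # at most one of the five replicas may be down at a time
  selector:
    matchLabels:
      app: nginx           # must match the labels on the deployment's pods
```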
+Deployments are typically created and managed with `kubectl create` or `kubectl apply`. Create a deployment by defining a manifest file in the YAML format.
-Deployments are typically created and managed with `kubectl create` or `kubectl apply`. To create a deployment, you define a manifest file in the YAML (YAML Ain't Markup Language) format. The following example creates a basic deployment of the NGINX web server. The deployment specifies *three (3)* replicas to be created, and requires port *80* to be open on the container. Resource requests and limits are also defined for CPU and memory.
+The following example creates a basic deployment of the NGINX web server. The deployment specifies *three (3)* replicas to be created, and requires port *80* to be open on the container. Resource requests and limits are also defined for CPU and memory.
```yaml
apiVersion: apps/v1
# ...
spec:
# ...
          memory: 256Mi
```
-More complex applications can be created by also including services such as load balancers within the YAML manifest.
+More complex applications can be created by including services (such as load balancers) within the YAML manifest.
For more information, see [Kubernetes deployments][kubernetes-deployments]. ### Package management with Helm
-A common approach to managing applications in Kubernetes is with [Helm][helm]. You can build and use existing public Helm *charts* that contain a packaged version of application code and Kubernetes YAML manifests to deploy resources. These Helm charts can be stored locally, or often in a remote repository, such as an [Azure Container Registry Helm chart repo][acr-helm].
+[Helm][helm] is commonly used to manage applications in Kubernetes. You can deploy resources by building and using existing public Helm *charts* that contain a packaged version of application code and Kubernetes YAML manifests. You can store Helm charts either locally or in a remote repository, such as an [Azure Container Registry Helm chart repo][acr-helm].
-To use Helm, install the Helm client on your computer, or use the Helm client in the [Azure Cloud Shell][azure-cloud-shell]. You can search for or create Helm charts with the client, and then install them to your Kubernetes cluster. For more information, see [Install existing applications with Helm in AKS][aks-helm].
+To use Helm, install the Helm client on your computer, or use the Helm client in the [Azure Cloud Shell][azure-cloud-shell]. Search for or create Helm charts, and then install them to your Kubernetes cluster. For more information, see [Install existing applications with Helm in AKS][aks-helm].
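As an example of the typical workflow, the commands below add a public chart repository and install a chart as a named release; the Bitnami repository and NGINX chart are illustrative choices, not AKS requirements:

```console
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx
```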
## StatefulSets and DaemonSets
-The Deployment Controller uses the Kubernetes Scheduler to run a given number of replicas on any available node with available resources. This approach of using deployments may be sufficient for stateless applications, but not for applications that require a persistent naming convention or storage. For applications that require a replica to exist on each node, or selected nodes, within a cluster, the Deployment Controller doesn't look at how replicas are distributed across the nodes.
+Using the Kubernetes Scheduler, the Deployment Controller runs replicas on any available node with available resources. While this approach may be sufficient for stateless applications, the Deployment Controller is not ideal for applications that require:
+* A persistent naming convention or storage.
+* A replica to exist on each node, or on selected nodes, within a cluster.
-There are two Kubernetes resources that let you manage these types of applications:
+Two Kubernetes resources, however, let you manage these types of applications:
-- *StatefulSets* - Maintain the state of applications beyond an individual pod lifecycle, such as storage.-- *DaemonSets* - Ensure a running instance on each node, early in the Kubernetes bootstrap process.
+- *StatefulSets* maintain the state of applications beyond an individual pod lifecycle, such as storage.
+- *DaemonSets* ensure a running instance on each node, early in the Kubernetes bootstrap process.
### StatefulSets
-Modern application development often aims for stateless applications, but *StatefulSets* can be used for stateful applications, such as applications that include database components. A StatefulSet is similar to a deployment in that one or more identical pods are created and managed. Replicas in a StatefulSet follow a graceful, sequential approach to deployment, scale, upgrades, and terminations. With a StatefulSet (as replicas are rescheduled) the naming convention, network names, and storage persist.
+Modern application development often aims for stateless applications. For stateful applications, like those that include database components, you can use *StatefulSets*. Like deployments, a StatefulSet creates and manages one or more identical pods. Replicas in a StatefulSet follow a graceful, sequential approach to deployment, scale, upgrade, and termination. The naming convention, network names, and storage persist as replicas are rescheduled with a StatefulSet.
-You define the application in YAML format using `kind: StatefulSet`, and the StatefulSet Controller then handles the deployment and management of the required replicas. Data is written to persistent storage, provided by Azure Managed Disks or Azure Files. With StatefulSets, the underlying persistent storage remains even when the StatefulSet is deleted.
+Define the application in YAML format using `kind: StatefulSet`. From there, the StatefulSet Controller handles the deployment and management of the required replicas. Data is written to persistent storage, provided by Azure Managed Disks or Azure Files. With StatefulSets, the underlying persistent storage remains, even when the StatefulSet is deleted.
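As a sketch only (the names, image, and storage class below are hypothetical placeholders), a StatefulSet manifest pairs `kind: StatefulSet` with a `volumeClaimTemplates` section so each replica gets its own persistent volume:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-db                        # pods are named my-db-0, my-db-1, ...
spec:
  serviceName: my-db                 # headless Service providing stable network identity
  replicas: 2
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
      - name: my-db
        image: myregistry.azurecr.io/my-db:1.0   # placeholder image
  volumeClaimTemplates:              # one PersistentVolumeClaim per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: managed-premium          # Azure Managed Disks storage class
      resources:
        requests:
          storage: 1Gi
```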
For more information, see [Kubernetes StatefulSets][kubernetes-statefulsets].
-Replicas in a StatefulSet are scheduled and run across any available node in an AKS cluster. If you need to ensure that at least one pod in your Set runs on a node, you can instead use a DaemonSet.
+Replicas in a StatefulSet are scheduled and run across any available node in an AKS cluster. To ensure at least one pod in your set runs on a node, you use a DaemonSet instead.
### DaemonSets
-For specific log collection or monitoring needs, you may need to run a given pod on all, or selected, nodes. A *DaemonSet* is again used to deploy one or more identical pods, but the DaemonSet Controller ensures that each node specified runs an instance of the pod.
+For specific log collection or monitoring, you may need to run a pod on all, or selected, nodes. You can use a *DaemonSet* to deploy one or more identical pods, and the DaemonSet Controller ensures that each specified node runs an instance of the pod.
The DaemonSet Controller can schedule pods on nodes early in the cluster boot process, before the default Kubernetes scheduler has started. This ability ensures that the pods in a DaemonSet are started before traditional pods in a Deployment or StatefulSet are scheduled.
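To illustrate (the name and image below are placeholders), a DaemonSet manifest looks much like a deployment but defines no replica count; the DaemonSet Controller runs one pod per matching node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector                # hypothetical log collection agent
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: log-collector
        image: myregistry.azurecr.io/log-collector:1.0   # placeholder image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log             # read log files directly from the node
```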
For more information, see [Kubernetes DaemonSets][kubernetes-daemonset].
## Namespaces
-Kubernetes resources, such as pods and Deployments, are logically grouped into a *namespace*. These groupings provide a way to logically divide an AKS cluster and restrict access to create, view, or manage resources. You can create namespaces to separate business groups, for example. Users can only interact with resources within their assigned namespaces.
+Kubernetes resources, such as pods and deployments, are logically grouped into a *namespace* to divide an AKS cluster and restrict access to create, view, or manage resources. For example, you can create namespaces to separate business groups. Users can only interact with resources within their assigned namespaces.
![Kubernetes namespaces to logically divide resources and applications](media/concepts-clusters-workloads/namespaces.png)

When you create an AKS cluster, the following namespaces are available:

-- *default* - This namespace is where pods and deployments are created by default when none is provided. In smaller environments, you can deploy applications directly into the default namespace without creating additional logical separations. When you interact with the Kubernetes API, such as with `kubectl get pods`, the default namespace is used when none is specified.
-- *kube-system* - This namespace is where core resources exist, such as network features like DNS and proxy, or the Kubernetes dashboard. You typically don't deploy your own applications into this namespace.
-- *kube-public* - This namespace is typically not used, but can be used for resources to be visible across the whole cluster, and can be viewed by any user.
+| Namespace | Description |
+| -- | - |
+| *default* | Where pods and deployments are created by default when none is provided. In smaller environments, you can deploy applications directly into the default namespace without creating additional logical separations. When you interact with the Kubernetes API, such as with `kubectl get pods`, the default namespace is used when none is specified. |
+| *kube-system* | Where core resources exist, such as network features like DNS and proxy, or the Kubernetes dashboard. You typically don't deploy your own applications into this namespace. |
+| *kube-public* | Typically not used, but can be used for resources to be visible across the whole cluster, and can be viewed by any user. |
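Creating your own namespace takes a one-line `kubectl create namespace` command or a small manifest. A sketch with a hypothetical business-group name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: finance          # hypothetical namespace per business group
  labels:
    team: finance
```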
+ For more information, see [Kubernetes namespaces][kubernetes-namespaces].
aks Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-identity.md
Title: Concepts - Access and identity in Azure Kubernetes Services (AKS)
description: Learn about access and identity in Azure Kubernetes Service (AKS), including Azure Active Directory integration, Kubernetes role-based access control (Kubernetes RBAC), and roles and bindings. Previously updated : 07/07/2020 Last updated : 03/24/2021
# Access and identity options for Azure Kubernetes Service (AKS)
-There are different ways to authenticate, control access/authorize and secure Kubernetes clusters. Using Kubernetes role-based access control (Kubernetes RBAC), you can grant users, groups, and service accounts access to only the resources they need. With Azure Kubernetes Service (AKS), you can further enhance the security and permissions structure by using Azure Active Directory and Azure RBAC. These approaches help you secure your cluster access and provide only the minimum required permissions to developers and operators.
+You can authenticate, authorize, secure, and control access to Kubernetes clusters in a variety of ways.
+* Using Kubernetes role-based access control (Kubernetes RBAC), you can grant users, groups, and service accounts access to only the resources they need.
+* With Azure Kubernetes Service (AKS), you can further enhance the security and permissions structure via Azure Active Directory and Azure RBAC.
+
+Kubernetes RBAC and AKS help you secure your cluster access and provide only the minimum required permissions to developers and operators.
This article introduces the core concepts that help you authenticate and assign permissions in AKS.

## AKS service permissions
-When creating a cluster, AKS creates or modifies resources it needs to create and run the cluster, such as VMs and NICs, on behalf of the user creating the cluster. This identity is distinct from the cluster's identity permission, which is created during cluster creation.
+When creating a cluster, AKS generates or modifies resources it needs (like VMs and NICs) to create and run the cluster on behalf of the user. This identity is distinct from the cluster's identity permission, which is created during cluster creation.
### Identity creating and operating the cluster permissions
The following permissions are needed by the identity creating and operating the cluster.

| Permission | Reason |
|---|---|
-| Microsoft.Compute/diskEncryptionSets/read | Required to read disk encryption set ID. |
-| Microsoft.Compute/proximityPlacementGroups/write | Required for updating proximity placement groups. |
-| Microsoft.Network/applicationGateways/read <br/> Microsoft.Network/applicationGateways/write <br/> Microsoft.Network/virtualNetworks/subnets/join/action | Required to configure application gateways and join the subnet. |
-| Microsoft.Network/virtualNetworks/subnets/join/action | Required to configure the Network Security Group for the subnet when using a custom VNET.|
-| Microsoft.Network/publicIPAddresses/join/action <br/> Microsoft.Network/publicIPPrefixes/join/action | Required to configure the outbound public IPs on the Standard Load Balancer. |
-| Microsoft.OperationalInsights/workspaces/sharedkeys/read <br/> Microsoft.OperationalInsights/workspaces/read <br/> Microsoft.OperationsManagement/solutions/write <br/> Microsoft.OperationsManagement/solutions/read <br/> Microsoft.ManagedIdentity/userAssignedIdentities/assign/action | Required to create and update Log Analytics workspaces and Azure monitoring for containers. |
+| `Microsoft.Compute/diskEncryptionSets/read` | Required to read disk encryption set ID. |
+| `Microsoft.Compute/proximityPlacementGroups/write` | Required for updating proximity placement groups. |
+| `Microsoft.Network/applicationGateways/read` <br/> `Microsoft.Network/applicationGateways/write` <br/> `Microsoft.Network/virtualNetworks/subnets/join/action` | Required to configure application gateways and join the subnet. |
+| `Microsoft.Network/virtualNetworks/subnets/join/action` | Required to configure the Network Security Group for the subnet when using a custom VNET.|
+| `Microsoft.Network/publicIPAddresses/join/action` <br/> `Microsoft.Network/publicIPPrefixes/join/action` | Required to configure the outbound public IPs on the Standard Load Balancer. |
+| `Microsoft.OperationalInsights/workspaces/sharedkeys/read` <br/> `Microsoft.OperationalInsights/workspaces/read` <br/> `Microsoft.OperationsManagement/solutions/write` <br/> `Microsoft.OperationsManagement/solutions/read` <br/> `Microsoft.ManagedIdentity/userAssignedIdentities/assign/action` | Required to create and update Log Analytics workspaces and Azure monitoring for containers. |
### AKS cluster identity permissions
-The following permissions are used by the AKS cluster identity, which is created and associated with the AKS cluster when the cluster is created. Each permission is used for the reasons below:
+The following permissions are used by the AKS cluster identity, which is created and associated with the AKS cluster. Each permission is used for the reasons below:
| Permission | Reason |
|---|---|
-| Microsoft.ContainerService/managedClusters/* <br/> | Required for creating users and operating the cluster
-| Microsoft.Network/loadBalancers/delete <br/> Microsoft.Network/loadBalancers/read <br/> Microsoft.Network/loadBalancers/write | Required to configure the load balancer for a LoadBalancer service. |
-| Microsoft.Network/publicIPAddresses/delete <br/> Microsoft.Network/publicIPAddresses/read <br/> Microsoft.Network/publicIPAddresses/write | Required to find and configure public IPs for a LoadBalancer service. |
-| Microsoft.Network/publicIPAddresses/join/action | Required for configuring public IPs for a LoadBalancer service. |
-| Microsoft.Network/networkSecurityGroups/read <br/> Microsoft.Network/networkSecurityGroups/write | Required to create or delete security rules for a LoadBalancer service. |
-| Microsoft.Compute/disks/delete <br/> Microsoft.Compute/disks/read <br/> Microsoft.Compute/disks/write <br/> Microsoft.Compute/locations/DiskOperations/read | Required to configure AzureDisks. |
-| Microsoft.Storage/storageAccounts/delete <br/> Microsoft.Storage/storageAccounts/listKeys/action <br/> Microsoft.Storage/storageAccounts/read <br/> Microsoft.Storage/storageAccounts/write <br/> Microsoft.Storage/operations/read | Required to configure storage accounts for AzureFile or AzureDisk. |
-| Microsoft.Network/routeTables/read <br/> Microsoft.Network/routeTables/routes/delete <br/> Microsoft.Network/routeTables/routes/read <br/> Microsoft.Network/routeTables/routes/write <br/> Microsoft.Network/routeTables/write | Required to configure route tables and routes for nodes. |
-| Microsoft.Compute/virtualMachines/read | Required to find information for virtual machines in a VMAS, such as zones, fault domain, size, and data disks. |
-| Microsoft.Compute/virtualMachines/write | Required to attach AzureDisks to a virtual machine in a VMAS. |
-| Microsoft.Compute/virtualMachineScaleSets/read <br/> Microsoft.Compute/virtualMachineScaleSets/virtualMachines/read <br/> Microsoft.Compute/virtualMachineScaleSets/virtualmachines/instanceView/read | Required to find information for virtual machines in a virtual machine scale set, such as zones, fault domain, size, and data disks. |
-| Microsoft.Network/networkInterfaces/write | Required to add a virtual machine in a VMAS to a load balancer backend address pool. |
-| Microsoft.Compute/virtualMachineScaleSets/write | Required to add a virtual machine scale set to a load balancer backend address pools and scale out nodes in a virtual machine scale set. |
-| Microsoft.Compute/virtualMachineScaleSets/virtualmachines/write | Required to attach AzureDisks and add a virtual machine from a virtual machine scale set to the load balancer. |
-| Microsoft.Network/networkInterfaces/read | Required to search internal IPs and load balancer backend address pools for virtual machines in a VMAS. |
-| Microsoft.Compute/virtualMachineScaleSets/virtualMachines/networkInterfaces/read | Required to search internal IPs and load balancer backend address pools for a virtual machine in a virtual machine scale set. |
-| Microsoft.Compute/virtualMachineScaleSets/virtualMachines/networkInterfaces/ ipconfigurations/publicipaddresses/read | Required to find public IPs for a virtual machine in a virtual machine scale set. |
-| Microsoft.Network/virtualNetworks/read <br/> Microsoft.Network/virtualNetworks/subnets/read | Required to verify if a subnet exists for the internal load balancer in another resource group. |
-| Microsoft.Compute/snapshots/delete <br/> Microsoft.Compute/snapshots/read <br/> Microsoft.Compute/snapshots/write | Required to configure snapshots for AzureDisk. |
-| Microsoft.Compute/locations/vmSizes/read <br/> Microsoft.Compute/locations/operations/read | Required to find virtual machine sizes for finding AzureDisk volume limits. |
+| `Microsoft.ContainerService/managedClusters/*` | Required for creating users and operating the cluster. |
+| `Microsoft.Network/loadBalancers/delete` <br/> `Microsoft.Network/loadBalancers/read` <br/> `Microsoft.Network/loadBalancers/write` | Required to configure the load balancer for a LoadBalancer service. |
+| `Microsoft.Network/publicIPAddresses/delete` <br/> `Microsoft.Network/publicIPAddresses/read` <br/> `Microsoft.Network/publicIPAddresses/write` | Required to find and configure public IPs for a LoadBalancer service. |
+| `Microsoft.Network/publicIPAddresses/join/action` | Required for configuring public IPs for a LoadBalancer service. |
+| `Microsoft.Network/networkSecurityGroups/read` <br/> `Microsoft.Network/networkSecurityGroups/write` | Required to create or delete security rules for a LoadBalancer service. |
+| `Microsoft.Compute/disks/delete` <br/> `Microsoft.Compute/disks/read` <br/> `Microsoft.Compute/disks/write` <br/> `Microsoft.Compute/locations/DiskOperations/read` | Required to configure AzureDisks. |
+| `Microsoft.Storage/storageAccounts/delete` <br/> `Microsoft.Storage/storageAccounts/listKeys/action` <br/> `Microsoft.Storage/storageAccounts/read` <br/> `Microsoft.Storage/storageAccounts/write` <br/> `Microsoft.Storage/operations/read` | Required to configure storage accounts for AzureFile or AzureDisk. |
+| `Microsoft.Network/routeTables/read` <br/> `Microsoft.Network/routeTables/routes/delete` <br/> `Microsoft.Network/routeTables/routes/read` <br/> `Microsoft.Network/routeTables/routes/write` <br/> `Microsoft.Network/routeTables/write` | Required to configure route tables and routes for nodes. |
+| `Microsoft.Compute/virtualMachines/read` | Required to find information for virtual machines in a VMAS, such as zones, fault domain, size, and data disks. |
+| `Microsoft.Compute/virtualMachines/write` | Required to attach AzureDisks to a virtual machine in a VMAS. |
+| `Microsoft.Compute/virtualMachineScaleSets/read` <br/> `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/read` <br/> `Microsoft.Compute/virtualMachineScaleSets/virtualmachines/instanceView/read` | Required to find information for virtual machines in a virtual machine scale set, such as zones, fault domain, size, and data disks. |
+| `Microsoft.Network/networkInterfaces/write` | Required to add a virtual machine in a VMAS to a load balancer backend address pool. |
+| `Microsoft.Compute/virtualMachineScaleSets/write` | Required to add a virtual machine scale set to load balancer backend address pools and scale out nodes in a virtual machine scale set. |
+| `Microsoft.Compute/virtualMachineScaleSets/virtualmachines/write` | Required to attach AzureDisks and add a virtual machine from a virtual machine scale set to the load balancer. |
+| `Microsoft.Network/networkInterfaces/read` | Required to search internal IPs and load balancer backend address pools for virtual machines in a VMAS. |
+| `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/networkInterfaces/read` | Required to search internal IPs and load balancer backend address pools for a virtual machine in a virtual machine scale set. |
+| `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/networkInterfaces/ipconfigurations/publicipaddresses/read` | Required to find public IPs for a virtual machine in a virtual machine scale set. |
+| `Microsoft.Network/virtualNetworks/read` <br/> `Microsoft.Network/virtualNetworks/subnets/read` | Required to verify if a subnet exists for the internal load balancer in another resource group. |
+| `Microsoft.Compute/snapshots/delete` <br/> `Microsoft.Compute/snapshots/read` <br/> `Microsoft.Compute/snapshots/write` | Required to configure snapshots for AzureDisk. |
+| `Microsoft.Compute/locations/vmSizes/read` <br/> `Microsoft.Compute/locations/operations/read` | Required to find virtual machine sizes for finding AzureDisk volume limits. |
### Additional cluster identity permissions
-The following additional permissions are needed by the cluster identity when creating a cluster with specific attributes. These permissions are not automatically assigned so you must add these permissions to the cluster identity after its created.
+When creating a cluster with specific attributes, you will need the following additional permissions for the cluster identity. Since these permissions are not automatically assigned, you must add them to the cluster identity after it's created.
| Permission | Reason |
|---|---|
-| Microsoft.Network/networkSecurityGroups/write <br/> Microsoft.Network/networkSecurityGroups/read | Required if using a network security group in another resource group. Required to configure security rules for a LoadBalancer service. |
-| Microsoft.Network/virtualNetworks/subnets/read <br/> Microsoft.Network/virtualNetworks/subnets/join/action | Required if using a subnet in another resource group such as a custom VNET. |
-| Microsoft.Network/routeTables/routes/read <br/> Microsoft.Network/routeTables/routes/write | Required if using a subnet associated with a route table in another resource group such as a custom VNET with a custom route table. Required to verify if a subnet already exists for the subnet in the other resource group. |
-| Microsoft.Network/virtualNetworks/subnets/read | Required if using an internal load balancer in another resource group. Required to verify if a subnet already exists for the internal load balancer in the resource group. |
-| Microsoft.Network/privatednszones/* | Required if using a private DNS zone in another resource group such as a custom privateDNSZone. |
+| `Microsoft.Network/networkSecurityGroups/write` <br/> `Microsoft.Network/networkSecurityGroups/read` | Required if using a network security group in another resource group. Required to configure security rules for a LoadBalancer service. |
+| `Microsoft.Network/virtualNetworks/subnets/read` <br/> `Microsoft.Network/virtualNetworks/subnets/join/action` | Required if using a subnet in another resource group such as a custom VNET. |
+| `Microsoft.Network/routeTables/routes/read` <br/> `Microsoft.Network/routeTables/routes/write` | Required if using a subnet associated with a route table in another resource group such as a custom VNET with a custom route table. Required to verify if a subnet already exists for the subnet in the other resource group. |
+| `Microsoft.Network/virtualNetworks/subnets/read` | Required if using an internal load balancer in another resource group. Required to verify if a subnet already exists for the internal load balancer in the resource group. |
+| `Microsoft.Network/privatednszones/*` | Required if using a private DNS zone in another resource group such as a custom privateDNSZone. |
-## Kubernetes role-based access control (Kubernetes RBAC)
+## Kubernetes RBAC
-To provide granular filtering of the actions that users can do, Kubernetes uses Kubernetes role-based access control (Kubernetes RBAC). This control mechanism lets you assign users, or groups of users, permission to do things like create or modify resources, or view logs from running application workloads. These permissions can be scoped to a single namespace, or granted across the entire AKS cluster. With Kubernetes RBAC, you create *roles* to define permissions, and then assign those roles to users with *role bindings*.
+Kubernetes RBAC provides granular filtering of user actions. With this control mechanism:
+* You assign users or user groups permission to create and modify resources or view logs from running application workloads.
+* You can scope permissions to a single namespace or across the entire AKS cluster.
+* You create *roles* to define permissions, and then assign those roles to users with *role bindings*.
For more information, see [Using Kubernetes RBAC authorization][kubernetes-rbac].

### Roles and ClusterRoles
-Before you assign permissions to users with Kubernetes RBAC, you first define those permissions as a *Role*. Kubernetes roles *grant* permissions. There's no concept of a *deny* permission.
+#### Roles
+Before assigning permissions to users with Kubernetes RBAC, you first define those permissions as a *Role*. Use roles to grant permissions within a namespace.
+
+> [!NOTE]
+> Kubernetes roles *grant* permissions; they don't *deny* permissions.
+
+To grant permissions across the entire cluster or to cluster resources outside a given namespace, you can instead use *ClusterRoles*.
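As a sketch (the role name, namespace, and rules below are hypothetical), a Role grants verbs on resources within one namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader                   # hypothetical role name
  namespace: finance                 # a Role is scoped to a single namespace
rules:
- apiGroups: [""]                    # "" is the core API group
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]    # only grants; Kubernetes RBAC has no deny verb
```

Changing `kind: Role` to `kind: ClusterRole` (and dropping the namespace) applies the same rules cluster-wide.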
-Roles are used to grant permissions within a namespace. If you need to grant permissions across the entire cluster, or to cluster resources outside a given namespace, you can instead use *ClusterRoles*.
+#### ClusterRoles
-A ClusterRole works in the same way to grant permissions to resources, but can be applied to resources across the entire cluster, not a specific namespace.
+A ClusterRole grants and applies permissions to resources across the entire cluster, not a specific namespace.
### RoleBindings and ClusterRoleBindings
-Once roles are defined to grant permissions to resources, you assign those Kubernetes RBAC permissions with a *RoleBinding*. If your AKS cluster [integrates with Azure Active Directory (Azure AD)](#azure-active-directory-integration), bindings are how those Azure AD users are granted permissions to perform actions within the cluster, see how in [Control access to cluster resources using Kubernetes role-based access control and Azure Active Directory identities](azure-ad-rbac.md).
+Once you've defined roles to grant permissions to resources, you assign those Kubernetes RBAC permissions with a *RoleBinding*. If your AKS cluster [integrates with Azure Active Directory (Azure AD)](#azure-ad-integration), RoleBindings grant permissions to Azure AD users to perform actions within the cluster. See how in [Control access to cluster resources using Kubernetes role-based access control and Azure Active Directory identities](azure-ad-rbac.md).
+
+#### RoleBindings
+
+Assign roles to users for a given namespace using RoleBindings. With RoleBindings, you can logically segregate a single AKS cluster, only enabling users to access the application resources in their assigned namespace.
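A sketch of a RoleBinding that assigns the hypothetical `pod-reader` role above to a placeholder Azure AD user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods                    # hypothetical binding name
  namespace: finance
subjects:
- kind: User
  name: "developer@contoso.com"      # placeholder Azure AD user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader                   # the Role being assigned
  apiGroup: rbac.authorization.k8s.io
```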
-Role bindings are used to assign roles for a given namespace. This approach lets you logically segregate a single AKS cluster, with users only able to access the application resources in their assigned namespace. If you need to bind roles across the entire cluster, or to cluster resources outside a given namespace, you can instead use *ClusterRoleBindings*.
+To bind roles across the entire cluster, or to cluster resources outside a given namespace, you instead use *ClusterRoleBindings*.
-A ClusterRoleBinding works in the same way to bind roles to users, but can be applied to resources across the entire cluster, not a specific namespace. This approach lets you grant administrators or support engineers access to all resources in the AKS cluster.
+#### ClusterRoleBindings
+
+With a ClusterRoleBinding, you bind roles to users and apply them to resources across the entire cluster, not a specific namespace. This approach lets you grant administrators or support engineers access to all resources in the AKS cluster.
> [!NOTE]
-> Any cluster actions taken by Microsoft/AKS are made with user consent under a built-in Kubernetes role `aks-service` and built-in role binding `aks-service-rolebinding`. This role enables AKS to troubleshoot and diagnose cluster issues, but can't modify permissions nor create roles or role bindings, or other high privilege actions. Role access is only enabled under active support tickets with just-in-time (JIT) access. Read more about [AKS support policies](support-policies.md).
+> Microsoft/AKS performs any cluster actions with user consent under a built-in Kubernetes role `aks-service` and built-in role binding `aks-service-rolebinding`.
+>
+> This role enables AKS to troubleshoot and diagnose cluster issues, but it can't modify permissions, create roles or role bindings, or perform other high-privilege actions. Role access is only enabled under active support tickets with just-in-time (JIT) access. Read more about [AKS support policies](support-policies.md).
### Kubernetes service accounts
-One of the primary user types in Kubernetes is a *service account*. A service account exists in, and is managed by, the Kubernetes API. The credentials for service accounts are stored as Kubernetes secrets, which allows them to be used by authorized pods to communicate with the API Server. Most API requests provide an authentication token for a service account or a normal user account.
+*Service accounts* are one of the primary user types in Kubernetes. The Kubernetes API holds and manages service accounts. Service account credentials are stored as Kubernetes secrets, allowing them to be used by authorized pods to communicate with the API Server. Most API requests provide an authentication token for a service account or a normal user account.
-Normal user accounts allow more traditional access for human administrators or developers, not just services, and processes. Kubernetes itself doesn't provide an identity management solution where regular user accounts and passwords are stored. Instead, external identity solutions can be integrated into Kubernetes. For AKS clusters, this integrated identity solution is Azure Active Directory.
+Normal user accounts allow more traditional access for human administrators or developers, not just services and processes. While Kubernetes doesn't provide an identity management solution to store regular user accounts and passwords, you can integrate external identity solutions into Kubernetes. For AKS clusters, this integrated identity solution is Azure AD.
For more information on the identity options in Kubernetes, see [Kubernetes authentication][kubernetes-authentication].
-## Azure Active Directory integration
+## Azure AD integration
-The security of AKS clusters can be enhanced with the integration of Azure Active Directory (AD). Built on decades of enterprise identity management, Azure AD is a multi-tenant, cloud-based directory, and identity management service that combines core directory services, application access management, and identity protection. With Azure AD, you can integrate on-premises identities into AKS clusters to provide a single source for account management and security.
+Enhance your AKS cluster security with Azure AD integration. Built on decades of enterprise identity management, Azure AD is a multi-tenant, cloud-based directory and identity management service that combines core directory services, application access management, and identity protection. With Azure AD, you can integrate on-premises identities into AKS clusters to provide a single source for account management and security.
![Azure Active Directory integration with AKS clusters](media/concepts-identity/aad-integration.png)
-With Azure AD-integrated AKS clusters, you can grant users or groups access to Kubernetes resources within a namespace or across the cluster. To obtain a `kubectl` configuration context, a user can run the [az aks get-credentials][az-aks-get-credentials] command. When a user then interacts with the AKS cluster with `kubectl`, they're prompted to sign in with their Azure AD credentials. This approach provides a single source for user account management and password credentials. The user can only access the resources as defined by the cluster administrator.
+With Azure AD-integrated AKS clusters, you can grant users or groups access to Kubernetes resources within a namespace or across the cluster.
+
+1. To obtain a `kubectl` configuration context, a user runs the [az aks get-credentials][az-aks-get-credentials] command.
+1. When a user interacts with the AKS cluster with `kubectl`, they're prompted to sign in with their Azure AD credentials.
+
+This approach provides a single source for user account management and password credentials. The user can only access the resources as defined by the cluster administrator.
Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID Connect, see the [Open ID connect documentation][openid-connect]. From inside of the Kubernetes cluster, [Webhook Token Authentication][webhook-token-docs] is used to verify authentication tokens. Webhook token authentication is configured and managed as part of the AKS cluster.
As shown in the graphic above, the API server calls the AKS webhook server and performs the following steps:
-1. The Azure AD client application is used by kubectl to sign in users with [OAuth 2.0 device authorization grant flow](../active-directory/develop/v2-oauth2-device-code.md).
+1. `kubectl` uses the Azure AD client application to sign in users with [OAuth 2.0 device authorization grant flow](../active-directory/develop/v2-oauth2-device-code.md).
2. Azure AD provides an access_token, id_token, and a refresh_token.
-3. The user makes a request to kubectl with an access_token from kubeconfig.
-4. Kubectl sends the access_token to API Server.
+3. The user makes a request to `kubectl` with an access_token from `kubeconfig`.
+4. `kubectl` sends the access_token to API Server.
5. The API Server is configured with the Auth WebHook Server to perform validation.
6. The authentication webhook server confirms the JSON Web Token signature is valid by checking the Azure AD public signing key.
7. The server application uses user-provided credentials to query group memberships of the logged-in user from the MS Graph API.
8. A response is sent to the API Server with user information such as the user principal name (UPN) claim of the access token, and the group membership of the user based on the object ID.
9. The API performs an authorization decision based on the Kubernetes Role/RoleBinding.
-10. Once authorized, the API server returns a response to kubectl.
-11. Kubectl provides feedback to the user.
+10. Once authorized, the API server returns a response to `kubectl`.
+11. `kubectl` provides feedback to the user.
-**Learn how to integrate AKS with AAD [here](managed-aad.md).**
+Learn how to integrate AKS with Azure AD with our [AKS-managed Azure AD integration how-to guide](managed-aad.md).
-## Azure role-based access control (Azure RBAC)
+## Azure role-based access control
-Azure RBAC is an authorization system built on [Azure Resource Manager](../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
+Azure role-based access control (RBAC) is an authorization system built on [Azure Resource Manager](../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
- Azure RBAC is designed to work on resources within your Azure subscription while Kubernetes RBAC is designed to work on Kubernetes resources within your AKS cluster.
+| RBAC system | Description |
+|||
+| Kubernetes RBAC | Designed to work on Kubernetes resources within your AKS cluster. |
+| Azure RBAC | Designed to work on resources within your Azure subscription. |
-With Azure RBAC, you create a *role definition* that outlines the permissions to be applied. A user or group is then assigned this role definition via a *role assignment* for a particular *scope*, which could be an individual resource, a resource group, or across the subscription.
+With Azure RBAC, you create a *role definition* that outlines the permissions to be applied. You then assign a user or group this role definition via a *role assignment* for a particular *scope*. The scope can be an individual resource, a resource group, or across the subscription.
For more information, see [What is Azure role-based access control (Azure RBAC)?][azure-rbac]

There are two levels of access needed to fully operate an AKS cluster:
-1. [Access the AKS resource in your Azure subscription](#azure-rbac-to-authorize-access-to-the-aks-resource). This process allows you to control things scaling or upgrading your cluster using the AKS APIs as well as pull your kubeconfig.
-2. Access to the Kubernetes API. This access is controlled either by [Kubernetes RBAC](#kubernetes-role-based-access-control-kubernetes-rbac) (traditionally) or by [integrating Azure RBAC with AKS for Kubernetes authorization](#azure-rbac-for-kubernetes-authorization-preview)
+* [Access the AKS resource in your Azure subscription](#azure-rbac-to-authorize-access-to-the-aks-resource).
+ * Control scaling or upgrading your cluster using the AKS APIs.
+ * Pull your `kubeconfig`.
+* Access to the Kubernetes API. This access is controlled by either:
+ * [Kubernetes RBAC](#kubernetes-rbac) (traditionally).
+ * [Integrating Azure RBAC with AKS for Kubernetes authorization](#azure-rbac-for-kubernetes-authorization-preview).
### Azure RBAC to authorize access to the AKS resource
-With Azure RBAC, you can provide your users (or identities) with granular access to AKS resources across one or more subscriptions. For example, you could have the [Azure Kubernetes Service Contributor role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role) that allows you to do actions like scale and upgrade your cluster. While another user could have the [Azure Kubernetes Service Cluster Admin role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) that only gives permission to pull the Admin kubeconfig.
+With Azure RBAC, you can provide your users (or identities) with granular access to AKS resources across one or more subscriptions. For example, you could use the [Azure Kubernetes Service Contributor role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role) to scale and upgrade your cluster. Meanwhile, another user with the [Azure Kubernetes Service Cluster Admin role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) only has permission to pull the Admin `kubeconfig`.
-Alternatively you could give your user the general [Contributor](../role-based-access-control/built-in-roles.md#contributor) role, which would encompass the above permissions and every action possible on the AKS resource with the exception of managing permissions itself.
+Alternatively, you could give your user the general [Contributor](../role-based-access-control/built-in-roles.md#contributor) role. With the general Contributor role, users can perform the above permissions and every action possible on the AKS resource, except managing permissions.
-See more how to use Azure RBAC to secure the access to the kubeconfig file that gives access to the Kubernetes API [here](control-kubeconfig-access.md).
+[Use Azure RBAC to define access to the Kubernetes configuration file in AKS](control-kubeconfig-access.md).
### Azure RBAC for Kubernetes Authorization (Preview)
-With the Azure RBAC integration, AKS will use a Kubernetes Authorization webhook server to enable you to manage permissions and assignments of Azure AD-integrated K8s cluster resources using Azure role definition and role assignments.
+With the Azure RBAC integration, AKS will use a Kubernetes Authorization webhook server so you can manage Azure AD-integrated Kubernetes cluster resource permissions and assignments using Azure role definitions and role assignments.
![Azure RBAC for Kubernetes authorization flow](media/concepts-identity/azure-rbac-k8s-authz-flow.png)
-As shown on the above diagram, when using the Azure RBAC integration all requests to the Kubernetes API will follow the same authentication flow as explained on the [Azure Active Directory integration section](#azure-active-directory-integration).
+As shown in the above diagram, when using the Azure RBAC integration, all requests to the Kubernetes API will follow the same authentication flow as explained in the [Azure Active Directory integration section](#azure-ad-integration).
-But after that, instead of solely relying on Kubernetes RBAC for Authorization, the request is actually going to be authorized by Azure, as long as the identity that made the request exists in AAD. If the identity doesn't exist in AAD, for example a Kubernetes service account, then the Azure RBAC won't kick in, and it will be the normal Kubernetes RBAC.
+If the identity making the request exists in Azure AD, Azure will team with Kubernetes RBAC to authorize the request. If the identity exists outside of Azure AD (for example, a Kubernetes service account), authorization will defer to the normal Kubernetes RBAC.
-In this scenario you could give users one of the four built-in roles, or create custom roles as you would do with Kubernetes roles but in this case using the Azure RBAC mechanisms and APIs.
+In this scenario, you use Azure RBAC mechanisms and APIs to assign users built-in roles or create custom roles, just as you would with Kubernetes roles.
-This feature will allow you to, for example, not only give users permissions to the AKS resource across subscriptions but set up and give them the role and permissions that they will have inside each of those clusters that controls the access to the Kubernetes API. For example, you can grant the `Azure Kubernetes Service RBAC Viewer` role on the subscription scope and its recipient will be able to list and get all Kubernetes objects from all clusters, but not modify them.
+With this feature, you not only give users permissions to the AKS resource across subscriptions, but you also configure the roles and permissions that control Kubernetes API access inside each of those clusters. For example, you can grant the `Azure Kubernetes Service RBAC Viewer` role on the subscription scope. The role recipient will be able to list and get all Kubernetes objects from all clusters without modifying them.
> [!IMPORTANT]
-> Please note that you need to enable Azure RBAC for Kubernetes authorization before using this feature. For more details and step by step guidance, [see here](manage-azure-rbac.md).
+> You need to enable Azure RBAC for Kubernetes authorization before using this feature. For more details and step by step guidance, follow our [Use Azure RBAC for Kubernetes Authorization](manage-azure-rbac.md) how-to guide.
#### Built-in roles
-AKS provides the following four built-in roles. They are similar to the [Kubernetes built-in roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) but with a few differences like supporting CRDs. For the full list of actions allowed by each built-in role, see [here](../role-based-access-control/built-in-roles.md).
+AKS provides the following four built-in roles. They are similar to the [Kubernetes built-in roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) with a few differences, like supporting CRDs. See the full list of actions allowed by each [Azure built-in role](../role-based-access-control/built-in-roles.md).
| Role | Description |
|---|---|
-| Azure Kubernetes Service RBAC Viewer | Allows read-only access to see most objects in a namespace. It doesn't allow viewing roles or role bindings. This role doesn't allow viewing `Secrets`, since reading the contents of Secrets enables access to `ServiceAccount` credentials in the namespace, which would allow API access as any `ServiceAccount` in the namespace (a form of privilege escalation) |
-| Azure Kubernetes Service RBAC Writer | Allows read/write access to most objects in a namespace. This role doesn't allow viewing or modifying roles or role bindings. However, this role allows accessing `Secrets` and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. |
-| Azure Kubernetes Service RBAC Admin | Allows admin access, intended to be granted within a namespace. Allows read/write access to most resources in a namespace (or cluster scope), including the ability to create roles and role bindings within the namespace. This role doesn't allow write access to resource quota or to the namespace itself. |
-| Azure Kubernetes Service RBAC Cluster Admin | Allows super-user access to perform any action on any resource. It gives full control over every resource in the cluster and in all namespaces. |
+| Azure Kubernetes Service RBAC Viewer | Allows read-only access to see most objects in a namespace. <br> Doesn't allow viewing roles or role bindings.<br> Doesn't allow viewing `Secrets`. Reading the `Secrets` contents enables access to `ServiceAccount` credentials in the namespace, which would allow API access as any `ServiceAccount` in the namespace (a form of privilege escalation). |
+| Azure Kubernetes Service RBAC Writer | Allows read/write access to most objects in a namespace. <br> Doesn't allow viewing or modifying roles, or role bindings. <br> Allows accessing `Secrets` and running pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. |
+| Azure Kubernetes Service RBAC Admin | Allows admin access, intended to be granted within a namespace. <br> Allows read/write access to most resources in a namespace (or cluster scope), including the ability to create roles and role bindings within the namespace. <br> Doesn't allow write access to resource quota or to the namespace itself. |
+| Azure Kubernetes Service RBAC Cluster Admin | Allows super-user access to perform any action on any resource. <br> Gives full control over every resource in the cluster and in all namespaces. |
## Summary
-This table summarizes the ways users can authenticate to Kubernetes when Azure AD integration is enabled. In all cases, the user's sequence of commands is:
+View the table for a quick summary of how users can authenticate to Kubernetes when Azure AD integration is enabled. In all cases, the user's sequence of commands is:
1. Run `az login` to authenticate to Azure.
1. Run `az aks get-credentials` to download credentials for the cluster into `.kube/config`.
-1. Run `kubectl` commands (the first of which may trigger browser-based authentication to authenticate to the cluster, as described in the following table).
+1. Run `kubectl` commands.
+ * The first command may trigger browser-based authentication to authenticate to the cluster, as described in the following table.
+
+In the Azure portal, you can find:
+* The *Role Grant* (Azure RBAC role grant) referred to in the second column, shown on the **Access Control** tab.
+* The Cluster Admin Azure AD group, shown on the **Configuration** tab.
+ * Also found with parameter name `--aad-admin-group-object-ids` in the Azure CLI.
-The Role Grant referred to in the second column is the Azure RBAC role grant shown on the **Access Control** tab in the Azure portal. The Cluster Admin Azure AD Group is shown on the **Configuration** tab in the portal (or with parameter name `--aad-admin-group-object-ids` in the Azure CLI).
| Description | Role grant required | Cluster admin Azure AD group(s) | When to use |
|---|---|---|---|
| Legacy admin login using client certificate | **Azure Kubernetes Admin Role**. This role allows `az aks get-credentials` to be used with the `--admin` flag, which downloads a [legacy (non-Azure AD) cluster admin certificate](control-kubeconfig-access.md) into the user's `.kube/config`. This is the only purpose of "Azure Kubernetes Admin Role". | n/a | If you're permanently blocked by not having access to a valid Azure AD group with access to your cluster. |
| Azure AD with manual (Cluster)RoleBindings | **Azure Kubernetes User Role**. The "User" role allows `az aks get-credentials` to be used without the `--admin` flag. (This is the only purpose of "Azure Kubernetes User Role".) The result, on an Azure AD-enabled cluster, is the download of [an empty entry](control-kubeconfig-access.md) into `.kube/config`, which triggers browser-based authentication when it's first used by `kubectl`. | User is not in any of these groups. Because the user is not in any Cluster Admin groups, their rights will be controlled entirely by any RoleBindings or ClusterRoleBindings that have been set up by cluster admins. The (Cluster)RoleBindings [nominate Azure AD users or Azure AD groups](azure-ad-rbac.md) as their `subjects`. If no such bindings have been set up, the user will not be able to execute any `kubectl` commands. | If you want fine-grained access control, and you're not using Azure RBAC for Kubernetes Authorization. Note that the user who sets up the bindings must log in by one of the other methods listed in this table. |
| Azure AD by member of admin group | Same as above | User is a member of one of the groups listed here. AKS automatically generates a ClusterRoleBinding that binds all of the listed groups to the `cluster-admin` Kubernetes role. So users in these groups can run all `kubectl` commands as `cluster-admin`. | If you want to conveniently grant users full admin rights, and are _not_ using Azure RBAC for Kubernetes authorization. |
-| Azure AD with Azure RBAC for Kubernetes Authorization|Two roles: First, **Azure Kubernetes User Role** (as above). Second, one of the "Azure Kubernetes Service **RBAC**..." roles listed above, or your own custom alternative.|The admin roles field on the Configuration tab is irrelevant when Azure RBAC for Kubernetes Authorization is enabled.|You are using Azure RBAC for Kubernetes authorization. This approach gives you fine-grained control, without the need to set up RoleBindings or ClusterRoleBindings.|
+| Azure AD with Azure RBAC for Kubernetes Authorization|Two roles: <br> First, **Azure Kubernetes User Role** (as above). <br> Second, one of the "Azure Kubernetes Service **RBAC**..." roles listed above, or your own custom alternative.|The admin roles field on the Configuration tab is irrelevant when Azure RBAC for Kubernetes Authorization is enabled.|You are using Azure RBAC for Kubernetes authorization. This approach gives you fine-grained control, without the need to set up RoleBindings or ClusterRoleBindings.|
## Next steps

- To get started with Azure AD and Kubernetes RBAC, see [Integrate Azure Active Directory with AKS][aks-aad].
- For associated best practices, see [Best practices for authentication and authorization in AKS][operator-best-practices-identity].
- To get started with Azure RBAC for Kubernetes Authorization, see [Use Azure RBAC to authorize access within the Azure Kubernetes Service (AKS) Cluster](manage-azure-rbac.md).
-- To get started securing your kubeconfig file, see [Limit access to cluster configuration file](control-kubeconfig-access.md)
+- To get started securing your `kubeconfig` file, see [Limit access to cluster configuration file](control-kubeconfig-access.md)
For more information on core Kubernetes and AKS concepts, see the following articles:
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-network.md
Title: Concepts - Networking in Azure Kubernetes Services (AKS) description: Learn about networking in Azure Kubernetes Service (AKS), including kubenet and Azure CNI networking, ingress controllers, load balancers, and static IP addresses. Previously updated : 06/11/2020 Last updated : 03/11/2021

# Network concepts for applications in Azure Kubernetes Service (AKS)
-In a container-based microservices approach to application development, application components must work together to process their tasks. Kubernetes provides various resources that enable this application communication. You can connect to and expose applications internally or externally. To build highly available applications, you can load balance your applications. More complex applications may require configuration of ingress traffic for SSL/TLS termination or routing of multiple components. For security reasons, you may also need to restrict the flow of network traffic into or between pods and nodes.
+In a container-based, microservices approach to application development, application components work together to process their tasks. Kubernetes provides various resources enabling this cooperation:
+* You can connect to and expose applications internally or externally.
+* You can build highly available applications by load balancing your applications.
+* For your more complex applications, you can configure ingress traffic for SSL/TLS termination or routing of multiple components.
+* For security reasons, you can restrict the flow of network traffic into or between pods and nodes.
This article introduces the core concepts that provide networking to your applications in AKS:
## Kubernetes basics
-To allow access to your applications, or for application components to communicate with each other, Kubernetes provides an abstraction layer to virtual networking. Kubernetes nodes are connected to a virtual network, and can provide inbound and outbound connectivity for pods. The *kube-proxy* component runs on each node to provide these network features.
+To allow access to your applications or between application components, Kubernetes provides an abstraction layer to virtual networking. Kubernetes nodes connect to a virtual network, providing inbound and outbound connectivity for pods. The *kube-proxy* component runs on each node to provide these network features.
-In Kubernetes, *Services* logically group pods to allow for direct access via an IP address or DNS name and on a specific port. You can also distribute traffic using a *load balancer*. More complex routing of application traffic can also be achieved with *Ingress Controllers*. Security and filtering of the network traffic for pods is possible with Kubernetes *network policies*.
+In Kubernetes:
+* *Services* logically group pods to allow for direct access on a specific port via an IP address or DNS name.
+* You can distribute traffic using a *load balancer*.
+* More complex routing of application traffic can also be achieved with *Ingress Controllers*.
+* Security and filtering of the network traffic for pods is possible with Kubernetes *network policies*.
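As a sketch of that last point (all names and labels below are hypothetical placeholders), a network policy selects pods and states which traffic may reach them:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend       # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: backend                   # policy applies to backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend              # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 80
```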
-The Azure platform also helps to simplify virtual networking for AKS clusters. When you create a Kubernetes load balancer, the underlying Azure load balancer resource is created and configured. As you open network ports to pods, the corresponding Azure network security group rules are configured. For HTTP application routing, Azure can also configure *external DNS* as new ingress routes are configured.
+The Azure platform also simplifies virtual networking for AKS clusters. When you create a Kubernetes load balancer, you also create and configure the underlying Azure load balancer resource. As you open network ports to pods, the corresponding Azure network security group rules are configured. For HTTP application routing, Azure can also configure *external DNS* as new ingress routes are configured.
## Services

To simplify the network configuration for application workloads, Kubernetes uses *Services* to logically group a set of pods together and provide network connectivity. The following Service types are available:

-- **Cluster IP** - Creates an internal IP address for use within the AKS cluster. Good for internal-only applications that support other workloads within the cluster.
+- **Cluster IP**
+
+ Creates an internal IP address for use within the AKS cluster. Good for internal-only applications that support other workloads within the cluster.
![Diagram showing Cluster IP traffic flow in an AKS cluster][aks-clusterip]

-- **NodePort** - Creates a port mapping on the underlying node that allows the application to be accessed directly with the node IP address and port.
+- **NodePort**
+
+ Creates a port mapping on the underlying node that allows the application to be accessed directly with the node IP address and port.
![Diagram showing NodePort traffic flow in an AKS cluster][aks-nodeport]

-- **LoadBalancer** - Creates an Azure load balancer resource, configures an external IP address, and connects the requested pods to the load balancer backend pool. To allow customers' traffic to reach the application, load balancing rules are created on the desired ports.
+- **LoadBalancer**
+
+ Creates an Azure load balancer resource, configures an external IP address, and connects the requested pods to the load balancer backend pool. To allow customers' traffic to reach the application, load balancing rules are created on the desired ports (see the example manifest at the end of this section).
![Diagram showing Load Balancer traffic flow in an AKS cluster][aks-loadbalancer]
- For additional control and routing of the inbound traffic, you may instead use an [Ingress controller](#ingress-controllers).
+ For extra control and routing of the inbound traffic, you may instead use an [Ingress controller](#ingress-controllers).
-- **ExternalName** - Creates a specific DNS entry for easier application access.
+- **ExternalName**
-The IP address for load balancers and services can be dynamically assigned, or you can specify an existing static IP address to use. Both internal and external static IP addresses can be assigned. This existing static IP address is often tied to a DNS entry.
+ Creates a specific DNS entry for easier application access.
-Both *internal* and *external* load balancers can be created. Internal load balancers are only assigned a private IP address, so they can't be accessed from the Internet.
+The IP address for load balancers and Services can be dynamically assigned, or you can specify an existing static IP address. You can assign both internal and external static IP addresses. Existing static IP addresses are often tied to a DNS entry.
+
+You can create both *internal* and *external* load balancers. Internal load balancers are only assigned a private IP address, so they can't be accessed from the Internet.
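As an example of the LoadBalancer type described earlier (the Service name, labels, and ports below are hypothetical), a minimal Service manifest might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                       # hypothetical Service name
spec:
  type: LoadBalancer                 # provisions an Azure load balancer resource
  selector:
    app: my-app                      # pods added to the backend pool
  ports:
  - protocol: TCP
    port: 80                         # port exposed on the load balancer
    targetPort: 8080                 # port the pods listen on
```

Adding the `service.beta.kubernetes.io/azure-load-balancer-internal: "true"` annotation would make this an internal load balancer with a private IP address.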
## Azure virtual networks

In AKS, you can deploy a cluster that uses one of the following two network models:

-- *Kubenet* networking - The network resources are typically created and configured as the AKS cluster is deployed.
-- *Azure Container Networking Interface (CNI)* networking - The AKS cluster is connected to existing virtual network resources and configurations.
+- *Kubenet* networking
+
+ The network resources are typically created and configured as the AKS cluster is deployed.
+
+- *Azure Container Networking Interface (CNI)* networking
+
+ The AKS cluster is connected to existing virtual network resources and configurations.
### Kubenet (basic) networking
-The *kubenet* networking option is the default configuration for AKS cluster creation. With *kubenet*, nodes get an IP address from the Azure virtual network subnet. Pods receive an IP address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address.
+The *kubenet* networking option is the default configuration for AKS cluster creation. With *kubenet*:
+1. Nodes receive an IP address from the Azure virtual network subnet.
+1. Pods receive an IP address from a logically different address space than the nodes' Azure virtual network subnet.
+1. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network.
+1. The source IP address of the traffic is translated to the node's primary IP address.
+
+Nodes use the [kubenet][kubenet] Kubernetes plugin. You can:
+* Let the Azure platform create and configure the virtual networks for you, or
+* Choose to deploy your AKS cluster into an existing virtual network subnet.
-Nodes use the [kubenet][kubenet] Kubernetes plugin. You can let the Azure platform create and configure the virtual networks for you, or choose to deploy your AKS cluster into an existing virtual network subnet. Again, only the nodes receive a routable IP address, and the pods use NAT to communicate with other resources outside the AKS cluster. This approach greatly reduces the number of IP addresses that you need to reserve in your network space for pods to use.
+Remember, only the nodes receive a routable IP address. The pods use NAT to communicate with other resources outside the AKS cluster. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.
For more information, see [Configure kubenet networking for an AKS cluster][aks-configure-kubenet-networking].

### Azure CNI (advanced) networking
-With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space, and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, as can otherwise lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
+With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space and planned in advance. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node is then reserved up front. Without planning, this approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
Unlike kubenet, traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.
-Nodes use the [Azure Container Networking Interface (CNI)][cni-networking] Kubernetes plugin.
+Nodes use the [Azure CNI][cni-networking] Kubernetes plugin.
![Diagram showing two nodes with bridges connecting each to a single Azure VNet][advanced-networking-diagram]
Both kubenet and Azure CNI provide network connectivity for your AKS clusters. However, there are advantages and disadvantages to each:
* **kubenet**
  * Conserves IP address space.
  * Uses Kubernetes internal or external load balancer to reach pods from outside of the cluster.
- * You must manually manage and maintain user-defined routes (UDRs).
+ * You manually manage and maintain user-defined routes (UDRs).
  * Maximum of 400 nodes per cluster.
* **Azure CNI**
  * Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
The following behavior differences exist between kubenet and Azure CNI:
| Expose Kubernetes services using a load balancer service, App Gateway, or ingress controller | Supported | Supported |
| Default Azure DNS and Private Zones | Supported | Supported |
-Regarding DNS, with both kubenet and Azure CNI plugins DNS is offered by CoreDNS, a deployment running in AKS with its own autoscaler. For more information on CoreDNS on Kubernetes see [Customizing DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/). CoreDNS is configured per default to forward unknown domains to the node DNS servers, in other words, to the DNS functionality of the Azure Virtual Network where the AKS cluster is deployed. Hence, Azure DNS and Private Zones will work for pods running in AKS.
+Regarding DNS, with both the kubenet and Azure CNI plugins, DNS is offered by CoreDNS, a deployment running in AKS with its own autoscaler. For more information on CoreDNS on Kubernetes, see [Customizing DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/). By default, CoreDNS is configured to forward unknown domains to the DNS functionality of the Azure Virtual Network where the AKS cluster is deployed. Hence, Azure DNS and Private Zones work for pods running in AKS.
### Support scope between network models
-Regardless of the network model you use, both kubenet and Azure CNI can be deployed in one of the following ways:
+Whatever network model you use, both kubenet and Azure CNI can be deployed in one of the following ways:
* The Azure platform can automatically create and configure the virtual network resources when you create an AKS cluster.
* You can manually create and configure the virtual network resources and attach to those resources when you create your AKS cluster.
Although capabilities like service endpoints or UDRs are supported with both kubenet and Azure CNI, the [support policies for AKS][support-policies] define what changes you can make. For example:

* If you manually create the virtual network resources for an AKS cluster, you're supported when configuring your own UDRs or service endpoints.
-* If the Azure platform automatically creates the virtual network resources for your AKS cluster, it isn't supported to manually change those AKS-managed resources to configure your own UDRs or service endpoints.
+* If the Azure platform automatically creates the virtual network resources for your AKS cluster, you can't manually change those AKS-managed resources to configure your own UDRs or service endpoints.
## Ingress controllers
-When you create a LoadBalancer type Service, an underlying Azure load balancer resource is created. The load balancer is configured to distribute traffic to the pods in your Service on a given port. The LoadBalancer only works at layer 4 - the Service is unaware of the actual applications, and can't make any additional routing considerations.
+When you create a LoadBalancer-type Service, you also create an underlying Azure load balancer resource. The load balancer is configured to distribute traffic to the pods in your Service on a given port.
-*Ingress controllers* work at layer 7, and can use more intelligent rules to distribute application traffic. A common use of an Ingress controller is to route HTTP traffic to different applications based on the inbound URL.
+The LoadBalancer only works at layer 4. At layer 4, the Service is unaware of the actual applications and can't apply any additional routing rules.
+
+*Ingress controllers* work at layer 7, and can use more intelligent rules to distribute application traffic. Ingress controllers typically route HTTP traffic to different applications based on the inbound URL.
![Diagram showing Ingress traffic flow in an AKS cluster][aks-ingress]
-In AKS, you can create an Ingress resource using something like NGINX, or use the AKS HTTP application routing feature. When you enable HTTP application routing for an AKS cluster, the Azure platform creates the Ingress controller and an *External-DNS* controller. As new Ingress resources are created in Kubernetes, the required DNS A records are created in a cluster-specific DNS zone. For more information, see [deploy HTTP application routing][aks-http-routing].
+### Create an ingress resource
+In AKS, you can create an Ingress resource using NGINX, a similar tool, or the AKS HTTP application routing feature. When you enable HTTP application routing for an AKS cluster, the Azure platform creates the Ingress controller and an *External-DNS* controller. As new Ingress resources are created in Kubernetes, the required DNS A records are created in a cluster-specific DNS zone.
+
+For more information, see [Deploy HTTP application routing][aks-http-routing].
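As an illustration of URL-based routing, the sketch below assumes an NGINX Ingress controller and the `networking.k8s.io/v1` API (available from Kubernetes 1.19); hostnames and Service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-routes                    # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx  # assumes an NGINX ingress controller
spec:
  rules:
  - host: myapp.example.com             # placeholder hostname
    http:
      paths:
      - path: /blog                     # send /blog to one Service...
        pathType: Prefix
        backend:
          service:
            name: blog-svc
            port:
              number: 80
      - path: /store                    # ...and /store to another
        pathType: Prefix
        backend:
          service:
            name: store-svc
            port:
              number: 80
```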
-The Application Gateway Ingress Controller (AGIC) add-on allows AKS customers to leverage Azure's native Application Gateway level 7 load-balancer to expose cloud software to the Internet. AGIC monitors the Kubernetes cluster it is hosted on and continuously updates an Application Gateway, so that selected services are exposed to the Internet. To learn more about the AGIC add-on for AKS, see [What is Application Gateway Ingress Controller?][agic-overview]
+### Application Gateway Ingress Controller (AGIC)
-Another common feature of Ingress is SSL/TLS termination. On large web applications accessed via HTTPS, the TLS termination can be handled by the Ingress resource rather than within the application itself. To provide automatic TLS certification generation and configuration, you can configure the Ingress resource to use providers such as Let's Encrypt. For more information on configuring an NGINX Ingress controller with Let's Encrypt, see [Ingress and TLS][aks-ingress-tls].
+With the Application Gateway Ingress Controller (AGIC) add-on, AKS customers can use Azure's native Application Gateway layer 7 load balancer to expose cloud software to the Internet. AGIC monitors the host Kubernetes cluster and continuously updates an Application Gateway, exposing selected services to the Internet.
-You can also configure your ingress controller to preserve the client source IP on requests to containers in your AKS cluster. When a client's request is routed to a container in your AKS cluster via your ingress controller, the original source IP of that request won't be available to the target container. When you enable *client source IP preservation*, the source IP for the client is available in the request header under *X-Forwarded-For*. If you're using client source IP preservation on your ingress controller, you can't use TLS pass-through. Client source IP preservation and TLS pass-through can be used with other services, such as the *LoadBalancer* type.
+To learn more about the AGIC add-on for AKS, see [What is Application Gateway Ingress Controller?][agic-overview].
+
+### SSL/TLS termination
+
+SSL/TLS termination is another common feature of Ingress. On large web applications accessed via HTTPS, the Ingress resource handles the TLS termination rather than the application itself. To provide automatic TLS certificate generation and configuration, you can configure the Ingress resource to use providers such as Let's Encrypt.
+
+For more information on configuring an NGINX Ingress controller with Let's Encrypt, see [Ingress and TLS][aks-ingress-tls].
+
+### Client source IP preservation
+
+Configure your ingress controller to preserve the client source IP on requests to containers in your AKS cluster. When your ingress controller routes a client's request to a container in your AKS cluster, the original source IP of that request is unavailable to the target container. When you enable *client source IP preservation*, the source IP for the client is available in the request header under *X-Forwarded-For*.
+
+If you're using client source IP preservation on your ingress controller, you can't use TLS pass-through. Client source IP preservation and TLS pass-through can be used with other services, such as the *LoadBalancer* type.
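A minimal sketch of enabling preservation, assuming your ingress controller is fronted by a LoadBalancer Service you control (names are hypothetical); `externalTrafficPolicy: Local` is the Kubernetes setting that keeps the client source IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx             # hypothetical: the Service fronting the ingress controller
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # preserve the client source IP
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
```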
## Network security groups
-A network security group filters traffic for VMs, such as the AKS nodes. As you create Services, such as a LoadBalancer, the Azure platform automatically configures any network security group rules that are needed. Don't manually configure network security group rules to filter traffic for pods in an AKS cluster. Define any required ports and forwarding as part of your Kubernetes Service manifests, and let the Azure platform create or update the appropriate rules. You can also use network policies, as discussed in the next section, to automatically apply traffic filter rules to pods.
+A network security group filters traffic for VMs like the AKS nodes. As you create Services, such as a LoadBalancer, the Azure platform automatically configures any necessary network security group rules.
+
+You don't need to manually configure network security group rules to filter traffic for pods in an AKS cluster. Simply define any required ports and forwarding as part of your Kubernetes Service manifests. Let the Azure platform create or update the appropriate rules.
+
+You can also use network policies to automatically apply traffic filter rules to pods.
## Network policies
-By default, all pods in an AKS cluster can send and receive traffic without limitations. For improved security, you may want to define rules that control the flow of traffic. Backend applications are often only exposed to required frontend services, or database components are only accessible to the application tiers that connect to them.
+By default, all pods in an AKS cluster can send and receive traffic without limitations. For improved security, define rules that control the flow of traffic, like:
+* Backend applications are only exposed to required frontend services.
+* Database components are only accessible to the application tiers that connect to them.
-Network policy is a Kubernetes feature available in AKS that lets you control the traffic flow between pods. You can choose to allow or deny traffic based on settings such as assigned labels, namespace, or traffic port. Network security groups are more for the AKS nodes, not pods. The use of network policies is a more suitable, cloud-native way to control the flow of traffic. As pods are dynamically created in an AKS cluster, the required network policies can be automatically applied.
+Network policy is a Kubernetes feature available in AKS that lets you control the traffic flow between pods. You can allow or deny traffic to pods based on settings such as assigned labels, namespace, or traffic port. Network security groups are aimed at the AKS nodes, not pods; network policies are a more suitable, cloud-native way to control the flow of traffic between pods. As pods are dynamically created in an AKS cluster, the required network policies can be automatically applied.
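As a minimal sketch of the backend/frontend rule described above (label values are hypothetical), a policy that only admits frontend traffic to backend pods might look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend               # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend          # only pods with this label may connect
    ports:
    - protocol: TCP
      port: 80
```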
For more information, see [Secure traffic between pods using network policies in Azure Kubernetes Service (AKS)][use-network-policies].
To get started with AKS networking, create and configure an AKS cluster with you
For associated best practices, see [Best practices for network connectivity and security in AKS][operator-best-practices-network].
-For additional information on core Kubernetes and AKS concepts, see the following articles:
+For more information on core Kubernetes and AKS concepts, see the following articles:
- [Kubernetes / AKS clusters and workloads][aks-concepts-clusters-workloads]
- [Kubernetes / AKS access and identity][aks-concepts-identity]
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-security.md
description: Learn about security in Azure Kubernetes Service (AKS), including m
Previously updated : 07/01/2020
Last updated : 03/11/2021

# Security concepts for applications and clusters in Azure Kubernetes Service (AKS)
-To protect your customer data as you run application workloads in Azure Kubernetes Service (AKS), the security of your cluster is a key consideration. Kubernetes includes security components such as *network policies* and *Secrets*. Azure then adds in components such as network security groups and orchestrated cluster upgrades. These security components are combined to keep your AKS cluster running the latest OS security updates and Kubernetes releases, and with secure pod traffic and access to sensitive credentials.
+Cluster security protects your customer data as you run application workloads in Azure Kubernetes Service (AKS).
+
+Kubernetes includes security components, such as *network policies* and *Secrets*. Meanwhile, Azure includes components like network security groups and orchestrated cluster upgrades. AKS combines these security components to:
+* Keep your AKS cluster running the latest OS security updates and Kubernetes releases.
+* Provide secure pod traffic and access to sensitive credentials.
This article introduces the core concepts that secure your applications in AKS:
## Master security
-In AKS, the Kubernetes master components are part of the managed service provided by Microsoft. Each AKS cluster has its own single-tenanted, dedicated Kubernetes master to provide the API Server, Scheduler, etc. This master is managed and maintained by Microsoft.
+In AKS, the Kubernetes master components are part of the managed service, provided, managed, and maintained by Microsoft. Each AKS cluster has its own single-tenanted, dedicated Kubernetes master to provide the API Server, Scheduler, and other components.
By default, the Kubernetes API server uses a public IP address and a fully qualified domain name (FQDN). You can limit access to the API server endpoint using [authorized IP ranges][authorized-ip-ranges]. You can also create a fully [private cluster][private-clusters] to limit API server access to your virtual network.
You can control access to the API server using Kubernetes role-based access cont
## Node security
-AKS nodes are Azure virtual machines that you manage and maintain. Linux nodes run an optimized Ubuntu distribution using the `containerd` or Moby container runtime. Windows Server nodes run an optimized Windows Server 2019 release and also use the `containerd` or Moby container runtime. When an AKS cluster is created or scaled up, the nodes are automatically deployed with the latest OS security updates and configurations.
+AKS nodes are Azure virtual machines (VMs) that you manage and maintain.
+* Linux nodes run an optimized Ubuntu distribution using the `containerd` or Moby container runtime.
+* Windows Server nodes run an optimized Windows Server 2019 release using the `containerd` or Moby container runtime.
+
+When an AKS cluster is created or scaled up, the nodes are automatically deployed with the latest OS security updates and configurations.
> [!NOTE]
-> AKS clusters using Kubernetes version 1.19 node pools and greater use `containerd` as its container runtime. AKS clusters using Kubernetes prior to v1.19 for node pools use [Moby](https://mobyproject.org/) (upstream docker) as its container runtime.
+> AKS clusters using:
+> * Kubernetes version 1.19 and greater node pools use `containerd` as their container runtime.
+> * Kubernetes node pools prior to v1.19 use [Moby](https://mobyproject.org/) (upstream docker) as their container runtime.
-The Azure platform automatically applies OS security patches to Linux nodes on a nightly basis. If a Linux OS security update requires a host reboot, that reboot is not automatically performed. You can manually reboot the Linux nodes, or a common approach is to use [Kured][kured], an open-source reboot daemon for Kubernetes. Kured runs as a [DaemonSet][aks-daemonsets] and monitors each node for the presence of a file indicating that a reboot is required. Reboots are managed across the cluster using the same [cordon and drain process](#cordon-and-drain) as a cluster upgrade.
+### Node security patches
-For Windows Server nodes, Windows Update does not automatically run and apply the latest updates. On a regular schedule around the Windows Update release cycle and your own validation process, you should perform an upgrade on the Windows Server node pool(s) in your AKS cluster. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes. For more information on this process, see [Upgrade a node pool in AKS][nodepool-upgrade].
+#### Linux nodes
+The Azure platform automatically applies OS security patches to Linux nodes on a nightly basis. If a Linux OS security update requires a host reboot, the node isn't rebooted automatically. You can either:
+* Manually reboot the Linux nodes.
+* Use [Kured][kured], an open-source reboot daemon for Kubernetes. Kured runs as a [DaemonSet][aks-daemonsets] and monitors each node for a file indicating that a reboot is required.
-Nodes are deployed into a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default. This SSH access is only available using the internal IP address.
+Reboots are managed across the cluster using the same [cordon and drain process](#cordon-and-drain) as a cluster upgrade.
-To provide storage, the nodes use Azure Managed Disks. For most VM node sizes, these are Premium disks backed by high-performance SSDs. The data stored on managed disks is automatically encrypted at rest within the Azure platform. To improve redundancy, these disks are also securely replicated within the Azure datacenter.
+#### Windows Server nodes
-Kubernetes environments, in AKS or elsewhere, currently aren't completely safe for hostile multi-tenant usage. Additional security features like *Pod Security Policies*, or more fine-grained Kubernetes role-based access control (Kubernetes RBAC) for nodes, make exploits more difficult. However, for true security when running hostile multi-tenant workloads, a hypervisor is the only level of security that you should trust. The security domain for Kubernetes becomes the entire cluster, not an individual node. For these types of hostile multi-tenant workloads, you should use physically isolated clusters. For more information on ways to isolate workloads, see [Best practices for cluster isolation in AKS][cluster-isolation].
+For Windows Server nodes, Windows Update doesn't automatically run and apply the latest updates. Schedule Windows Server node pool upgrades in your AKS cluster around the regular Windows Update release cycle and your own validation process. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes. For more information on this process, see [Upgrade a node pool in AKS][nodepool-upgrade].
-### Compute isolation
+### Node deployment
+Nodes are deployed into a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default and only accessible using the internal IP address.
- Certain workloads may require a high degree of isolation from other customer workloads due to compliance or regulatory requirements. For these workloads, Azure provides [isolated virtual machines](../virtual-machines/isolation.md), which can be used as the agent nodes in an AKS cluster. These isolated virtual machines are isolated to a specific hardware type and dedicated to a single customer.
+### Node storage
+To provide storage, the nodes use Azure Managed Disks. For most VM node sizes, Azure Managed Disks are Premium disks backed by high-performance SSDs. The data stored on managed disks is automatically encrypted at rest within the Azure platform. To improve redundancy, Azure Managed Disks are securely replicated within the Azure datacenter.
- To use these isolated virtual machines with an AKS cluster, select one of the isolated virtual machines sizes listed [here](../virtual-machines/isolation.md) as the **Node size** when creating an AKS cluster or adding a node pool.
+### Hostile multi-tenant workloads
+Currently, Kubernetes environments aren't completely safe for hostile multi-tenant usage. Extra security features, like *Pod Security Policies* or Kubernetes RBAC for nodes, make exploits more difficult. However, for true security when running hostile multi-tenant workloads, a hypervisor is the only level of security that you should trust. The security domain for Kubernetes becomes the entire cluster, not an individual node.
+
+For these types of hostile multi-tenant workloads, you should use physically isolated clusters. For more information on ways to isolate workloads, see [Best practices for cluster isolation in AKS][cluster-isolation].
+
+### Compute isolation
+
+Because of compliance or regulatory requirements, certain workloads may require a high degree of isolation from other customer workloads. For these workloads, Azure provides [isolated VMs](../virtual-machines/isolation.md) to use as the agent nodes in an AKS cluster. These VMs are isolated to a specific hardware type and dedicated to a single customer.
+
+Select [one of the isolated VM sizes](../virtual-machines/isolation.md) as the **Node size** when creating an AKS cluster or adding a node pool.
## Cluster upgrades
-For security and compliance, or to use the latest features, Azure provides tools to orchestrate the upgrade of an AKS cluster and components. This upgrade orchestration includes both the Kubernetes master and agent components. You can view a [list of available Kubernetes versions](supported-kubernetes-versions.md) for your AKS cluster. To start the upgrade process, you specify one of these available versions. Azure then safely cordons and drains each AKS node and performs the upgrade.
+Azure provides upgrade orchestration tools to upgrade an AKS cluster and its components, maintain security and compliance, and access the latest features. This upgrade orchestration includes both the Kubernetes master and agent components.
+
+To start the upgrade process, specify one of the [listed available Kubernetes versions](supported-kubernetes-versions.md). Azure then safely cordons and drains each AKS node and performs the upgrade.
### Cordon and drain
-During the upgrade process, AKS nodes are individually cordoned from the cluster so new pods aren't scheduled on them. The nodes are then drained and upgraded as follows:
+During the upgrade process, AKS nodes are individually cordoned from the cluster to prevent new pods from being scheduled on them. The nodes are then drained and upgraded as follows:
-- A new node is deployed into the node pool. This node runs the latest OS image and patches.
-- One of the existing nodes is identified for upgrade. Pods on this node are gracefully terminated and scheduled on the other nodes in the node pool.
-- This existing node is deleted from the AKS cluster.
-- The next node in the cluster is cordoned and drained using the same process until all nodes are successfully replaced as part of the upgrade process.
+1. A new node is deployed into the node pool.
+ * This node runs the latest OS image and patches.
+1. One of the existing nodes is identified for upgrade.
+1. Pods on the identified node are gracefully terminated and scheduled on the other nodes in the node pool.
+1. The emptied node is deleted from the AKS cluster.
+1. Steps 1-4 are repeated until all nodes are successfully replaced as part of the upgrade process.
For more information, see [Upgrade an AKS cluster][aks-upgrade-cluster].

## Network security
-For connectivity and security with on-premises networks, you can deploy your AKS cluster into existing Azure virtual network subnets. These virtual networks may have an Azure Site-to-Site VPN or Express Route connection back to your on-premises network. Kubernetes ingress controllers can be defined with private, internal IP addresses so services are only accessible over this internal network connection.
+For connectivity and security with on-premises networks, you can deploy your AKS cluster into existing Azure virtual network subnets. These virtual networks connect back to your on-premises network using Azure Site-to-Site VPN or ExpressRoute. Define Kubernetes ingress controllers with private, internal IP addresses to limit service access to the internal network connection.
### Azure network security groups
-To filter the flow of traffic in virtual networks, Azure uses network security group rules. These rules define the source and destination IP ranges, ports, and protocols that are allowed or denied access to resources. Default rules are created to allow TLS traffic to the Kubernetes API server. As you create services with load balancers, port mappings, or ingress routes, AKS automatically modifies the network security group for traffic to flow appropriately.
+To filter virtual network traffic flow, Azure uses network security group rules. These rules define the source and destination IP ranges, ports, and protocols allowed or denied access to resources. Default rules are created to allow TLS traffic to the Kubernetes API server. As you create services with load balancers, port mappings, or ingress routes, AKS automatically modifies the network security group so traffic flows appropriately.
-In cases where you provide your own subnet for your AKS cluster and you wish to modify the flow of traffic, do not modify the subnet-level network security group managed by AKS. You may create additional subnet-level network security groups to modify the flow of traffic as long as they do not interfere with traffic needed for managing the cluster, such as load balancer access, communication with the control plane, and [egress][aks-limit-egress-traffic].
+If you provide your own subnet for your AKS cluster, **do not** modify the subnet-level network security group managed by AKS. Instead, create more subnet-level network security groups to modify the flow of traffic. Make sure they don't interfere with the traffic needed to manage the cluster, such as load balancer access, communication with the control plane, and [egress][aks-limit-egress-traffic].
### Kubernetes network policy
-To limit network traffic between pods in your cluster, AKS offers support for [Kubernetes network policies][network-policy]. With network policies, you can choose to allow or deny specific network paths within the cluster based on namespaces and label selectors.
+To limit network traffic between pods in your cluster, AKS offers support for [Kubernetes network policies][network-policy]. With network policies, you can allow or deny specific network paths within the cluster based on namespaces and label selectors.
## Kubernetes Secrets
-A Kubernetes *Secret* is used to inject sensitive data into pods, such as access credentials or keys. You first create a Secret using the Kubernetes API. When you define your pod or deployment, a specific Secret can be requested. Secrets are only provided to nodes that have a scheduled pod that requires it, and the Secret is stored in *tmpfs*, not written to disk. When the last pod on a node that requires a Secret is deleted, the Secret is deleted from the node's tmpfs. Secrets are stored within a given namespace and can only be accessed by pods within the same namespace.
+With a Kubernetes *Secret*, you inject sensitive data into pods, such as access credentials or keys.
+1. Create a Secret using the Kubernetes API.
+1. Define your pod or deployment and request a specific Secret.
+ * Secrets are only provided to nodes with a scheduled pod that requires them.
+ * The Secret is stored in *tmpfs*, not written to disk.
+1. When you delete the last pod on a node requiring a Secret, the Secret is deleted from the node's tmpfs.
+ * Secrets are stored within a given namespace and can only be accessed by pods within the same namespace.
-The use of Secrets reduces the sensitive information that is defined in the pod or service YAML manifest. Instead, you request the Secret stored in Kubernetes API Server as part of your YAML manifest. This approach only provides the specific pod access to the Secret. Please note: the raw secret manifest files contains the secret data in base64 format (see the [official documentation][secret-risks] for more details). Therefore, this file should be treated as sensitive information, and never committed to source control.
+Using Secrets reduces the sensitive information defined in the pod or service YAML manifest. Instead, you request the Secret stored in Kubernetes API Server as part of your YAML manifest. This approach only provides the specific pod access to the Secret.
+
+> [!NOTE]
+> The raw secret manifest files contain the secret data in base64 format (see the [official documentation][secret-risks] for more details). Treat these files as sensitive information, and never commit them to source control.
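To make the base64 point concrete, a minimal Secret manifest might look like the following; the name and values are hypothetical:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials           # hypothetical name
type: Opaque
data:
  username: YWRtaW4=             # "admin", base64-encoded (not encrypted)
  password: cGFzc3dvcmQxMjM=     # "password123", base64-encoded (not encrypted)
```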
Kubernetes secrets are stored in etcd, a distributed key-value store. Etcd store is fully managed by AKS and [data is encrypted at rest within the Azure platform][encryption-atrest].
To get started with securing your AKS clusters, see [Upgrade an AKS cluster][aks-upgrade-cluster].
For associated best practices, see [Best practices for cluster security and upgrades in AKS][operator-best-practices-cluster-security] and [Best practices for pod security in AKS][developer-best-practices-pod-security].
-For additional information on core Kubernetes and AKS concepts, see the following articles:
+For more information on core Kubernetes and AKS concepts, see:
- [Kubernetes / AKS clusters and workloads][aks-concepts-clusters-workloads]
- [Kubernetes / AKS identity][aks-concepts-identity]
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-storage.md
Title: Concepts - Storage in Azure Kubernetes Services (AKS)
description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims
Previously updated : 08/17/2020
Last updated : 03/11/2021

# Storage options for applications in Azure Kubernetes Service (AKS)
-Applications that run in Azure Kubernetes Service (AKS) may need to store and retrieve data. For some application workloads, this data storage can use local, fast storage on the node that is no longer needed when the pods are deleted. Other application workloads may require storage that persists on more regular data volumes within the Azure platform. Multiple pods may need to share the same data volumes, or reattach data volumes if the pod is rescheduled on a different node. Finally, you may need to inject sensitive data or application configuration information into pods.
+Applications running in Azure Kubernetes Service (AKS) may need to store and retrieve data. While some application workloads can use fast, local storage on the node that's no longer needed once the pods are deleted, others require storage that persists on more regular data volumes within the Azure platform.
-![Storage options for applications in an Azure Kubernetes Services (AKS) cluster](media/concepts-storage/aks-storage-options.png)
+Multiple pods may need to:
+* Share the same data volumes.
+* Reattach data volumes if the pod is rescheduled on a different node.
+
+Finally, you may need to inject sensitive data or application configuration information into pods.
This article introduces the core concepts that provide storage to your applications in AKS:
- [Storage classes](#storage-classes)
- [Persistent volume claims](#persistent-volume-claims)
+![Storage options for applications in an Azure Kubernetes Services (AKS) cluster](media/concepts-storage/aks-storage-options.png)
+ ## Volumes
-Applications often need to be able to store and retrieve data. As Kubernetes typically treats individual pods as ephemeral, disposable resources, different approaches are available for applications to use and persist data as necessary. A *volume* represents a way to store, retrieve, and persist data across pods and through the application lifecycle.
+Kubernetes typically treats individual pods as ephemeral, disposable resources. Applications have different approaches available to them for using and persisting data. A *volume* represents a way to store, retrieve, and persist data across pods and through the application lifecycle.
+
+Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use Azure Disks or Azure Files.
+
+### Azure Disks
+
+Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disks can use:
+* Azure Premium storage, backed by high-performance SSDs, or
+* Azure Standard storage, backed by regular HDDs.
+
+> [!TIP]
+> For most production and development workloads, use Premium storage.
+
+Since Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.
+
+### Azure Files
+Use *Azure Files* to mount an SMB 3.0 share backed by an Azure Storage account to pods. Files let you share data across multiple nodes and pods and can use:
+* Azure Premium storage, backed by high-performance SSDs, or
+* Azure Standard storage, backed by regular HDDs.
-Traditional volumes to store and retrieve data are created as Kubernetes resources backed by Azure Storage. You can manually create these data volumes to be assigned to pods directly, or have Kubernetes automatically create them. These data volumes can use Azure Disks or Azure Files:
+### Volume types
+Kubernetes volumes represent more than just a traditional disk for storing and retrieving information. Kubernetes volumes can also be used as a way to inject data into a pod for use by the containers.
-- *Azure Disks* can be used to create a Kubernetes *DataDisk* resource. Disks can use Azure Premium storage, backed by high-performance SSDs, or Azure Standard storage, backed by regular HDDs. For most production and development workloads, use Premium storage. Azure Disks are mounted as *ReadWriteOnce*, so are only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.
-- *Azure Files* can be used to mount an SMB 3.0 share backed by an Azure Storage account to pods. Files let you share data across multiple nodes and pods. Files can use Azure Standard storage backed by regular HDDs, or Azure Premium storage, backed by high-performance SSDs.
+Common volume types in Kubernetes include:
-In Kubernetes, volumes can represent more than just a traditional disk where information can be stored and retrieved. Kubernetes volumes can also be used as a way to inject data into a pod for use by the containers. Common additional volume types in Kubernetes include:
+#### emptyDir
-- *emptyDir* - This volume is commonly used as temporary space for a pod. All containers within a pod can access the data on the volume. Data written to this volume type persists only for the lifespan of the pod - when the pod is deleted, the volume is deleted. This volume typically uses the underlying local node disk storage, though it can also exist only in the node's memory.
-- *secret* - This volume is used to inject sensitive data into pods, such as passwords. You first create a Secret using the Kubernetes API. When you define your pod or deployment, a specific Secret can be requested. Secrets are only provided to nodes that have a scheduled pod that requires it, and the Secret is stored in *tmpfs*, not written to disk. When the last pod on a node that requires a Secret is deleted, the Secret is deleted from the node's tmpfs. Secrets are stored within a given namespace and can only be accessed by pods within the same namespace.
-- *configMap* - This volume type is used to inject key-value pair properties into pods, such as application configuration information. Rather than defining application configuration information within a container image, you can define it as a Kubernetes resource that can be easily updated and applied to new instances of pods as they are deployed. Like using a Secret, you first create a ConfigMap using the Kubernetes API. This ConfigMap can then be requested when you define a pod or deployment. ConfigMaps are stored within a given namespace and can only be accessed by pods within the same namespace.
+Commonly used as temporary space for a pod. All containers within a pod can access the data on the volume. Data written to this volume type persists only for the lifespan of the pod. Once you delete the pod, the volume is deleted. This volume typically uses the underlying local node disk storage, though it can also exist only in the node's memory.
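A minimal sketch of an *emptyDir* volume used as scratch space (names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo          # hypothetical name
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "echo hello > /scratch/out.txt && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch     # every container in the pod could mount this path
  volumes:
  - name: scratch
    emptyDir: {}              # removed when the pod is deleted
```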
+
+#### secret
+
+You can use *secret* volumes to inject sensitive data into pods, such as passwords.
+1. Create a Secret using the Kubernetes API.
+1. Define your pod or deployment and request a specific Secret.
+ * Secrets are only provided to nodes with a scheduled pod that requires them.
+ * The Secret is stored in *tmpfs*, not written to disk.
+1. When you delete the last pod on a node requiring a Secret, the Secret is deleted from the node's tmpfs.
+ * Secrets are stored within a given namespace and can only be accessed by pods within the same namespace.
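A minimal sketch of the flow above, mounting a hypothetical Secret named `db-credentials` into a pod as files:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret       # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: creds
      mountPath: /etc/creds   # each Secret key appears here as a file
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials   # hypothetical Secret created earlier
```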
+
+#### configMap
+
+You can use a *configMap* to inject key-value pair properties into pods, such as application configuration information. Rather than defining configuration within a container image, define it as a Kubernetes resource that can be easily updated and applied to new instances of pods as they're deployed.
+
+Like using a Secret:
+1. Create a ConfigMap using the Kubernetes API.
+1. Request the ConfigMap when you define a pod or deployment.
+ * ConfigMaps are stored within a given namespace and can only be accessed by pods within the same namespace.
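A minimal sketch of a ConfigMap (name and values are hypothetical); a pod can then consume it through environment variables or a *configMap* volume:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  LOG_LEVEL: "info"           # simple key-value pairs...
  config.json: |              # ...or whole files stored as values
    { "featureFlag": true }
```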
## Persistent volumes
-Volumes that are defined and created as part of the pod lifecycle only exist until the pod is deleted. Pods often expect their storage to remain if a pod is rescheduled on a different host during a maintenance event, especially in StatefulSets. A *persistent volume* (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod.
+Volumes defined and created as part of the pod lifecycle only exist until you delete the pod. Pods often expect their storage to remain if a pod is rescheduled on a different host during a maintenance event, especially in StatefulSets. A *persistent volume* (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod.
-Azure Disks or Files are used to provide the PersistentVolume. As noted in the previous section on Volumes, the choice of Disks or Files is often determined by the need for concurrent access to the data or the performance tier.
+You can use Azure Disks or Files to provide the PersistentVolume. As noted in the [Volumes](#volumes) section, the choice of Disks or Files is often determined by the need for concurrent access to the data or the performance tier.
![Persistent volumes in an Azure Kubernetes Services (AKS) cluster](media/concepts-storage/persistent-volumes.png)
-A PersistentVolume can be *statically* created by a cluster administrator, or *dynamically* created by the Kubernetes API server. If a pod is scheduled and requests storage that is not currently available, Kubernetes can create the underlying Azure Disk or Files storage and attach it to the pod. Dynamic provisioning uses a *StorageClass* to identify what type of Azure storage needs to be created.
+A PersistentVolume can be *statically* created by a cluster administrator, or *dynamically* created by the Kubernetes API server. If a pod is scheduled and requests currently unavailable storage, Kubernetes can create the underlying Azure Disk or Files storage and attach it to the pod. Dynamic provisioning uses a *StorageClass* to identify what type of Azure storage needs to be created.
## Storage classes
-To define different tiers of storage, such as Premium and Standard, you can create a *StorageClass*. The StorageClass also defines the *reclaimPolicy*. This reclaimPolicy controls the behavior of the underlying Azure storage resource when the pod is deleted and the persistent volume may no longer be required. The underlying storage resource can be deleted, or retained for use with a future pod.
+To define different tiers of storage, such as Premium and Standard, you can create a *StorageClass*.
+
+The StorageClass also defines the *reclaimPolicy*. When you delete the pod and the persistent volume is no longer required, the reclaimPolicy controls the behavior of the underlying Azure storage resource. The underlying storage resource can either be deleted or kept for use with a future pod.
In AKS, four initial `StorageClasses` are created for clusters using the in-tree storage plugins:

-- `default` - Uses Azure StandardSSD storage to create a Managed Disk. The reclaim policy ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted.
-- `managed-premium` - Uses Azure Premium storage to create a Managed Disk. The reclaim policy again ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted.
-- `azurefile` - Uses Azure Standard storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.
-- `azurefile-premium` - Uses Azure Premium storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.
+| StorageClass | Description |
+|||
+| `default` | Uses Azure StandardSSD storage to create a Managed Disk. The reclaim policy ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. |
+| `managed-premium` | Uses Azure Premium storage to create a Managed Disk. The reclaim policy again ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. |
+| `azurefile` | Uses Azure Standard storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted. |
+| `azurefile-premium` | Uses Azure Premium storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.|
+
+For clusters using the new Container Storage Interface (CSI) external plugins (preview), the following extra `StorageClasses` are created:
+
+| StorageClass | Description |
+|||
+| `managed-csi` | Uses Azure StandardSSD locally redundant storage (LRS) to create a Managed Disk. The reclaim policy ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size. |
+| `managed-csi-premium` | Uses Azure Premium locally redundant storage (LRS) to create a Managed Disk. The reclaim policy again ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. Similarly, this storage class allows for persistent volumes to be expanded. |
+| `azurefile-csi` | Uses Azure Standard storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted. |
+| `azurefile-csi-premium` | Uses Azure Premium storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.|
-For clusters using the new Container Storage Interface (CSI) external plugins (preview) the following additional`StorageClasses` are created:
-- `managed-csi` - Uses Azure StandardSSD locally redundant storage (LRS) to create a Managed Disk. The reclaim policy ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable, you just need to edit the persistent volume claim with the new size.
-- `managed-csi-premium` - Uses Azure Premium locally redundant storage (LRS) to create a Managed Disk. The reclaim policy again ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. Similarly, this storage class allows for persistent volumes to be expanded.
-- `azurefile-csi` - Uses Azure Standard storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.
-- `azurefile-csi-premium` - Uses Azure Premium storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.
+Unless you specify a StorageClass for a persistent volume, the default StorageClass is used. When requesting persistent volumes, make sure they use the appropriate storage you need.
-If no StorageClass is specified for a persistent volume, the default StorageClass is used. Take care when requesting persistent volumes so that they use the appropriate storage you need. You can create a StorageClass for additional needs using `kubectl`. The following example uses Premium Managed Disks and specifies that the underlying Azure Disk should be *retained* when the pod is deleted:
+You can create a StorageClass for additional needs using `kubectl`. The following example uses Premium Managed Disks and specifies that the underlying Azure Disk should be *retained* when you delete the pod:
```yaml
kind: StorageClass
parameters:
## Persistent volume claims
-A PersistentVolumeClaim requests either Disk or File storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying storage resource in Azure if there is no existing resource to fulfill the claim based on the defined StorageClass. The pod definition includes the volume mount once the volume has been connected to the pod.
+A PersistentVolumeClaim requests either Disk or File storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying Azure storage resource if no existing resource can fulfill the claim based on the defined StorageClass.
+
+The pod definition includes the volume mount once the volume has been connected to the pod.
![Persistent volume claims in an Azure Kubernetes Services (AKS) cluster](media/concepts-storage/persistent-volume-claims.png)
-A PersistentVolume is *bound* to a PersistentVolumeClaim once an available storage resource has been assigned to the pod requesting it. There is a 1:1 mapping of persistent volumes to claims.
+Once an available storage resource has been assigned to the pod requesting it, the PersistentVolume is *bound* to a PersistentVolumeClaim. Persistent volumes are mapped 1:1 to claims.
The following example YAML manifest shows a persistent volume claim that uses the *managed-premium* StorageClass and requests a Disk *5Gi* in size:
spec:
storage: 5Gi
```
-When you create a pod definition, the persistent volume claim is specified to request the desired storage. You also then specify the *volumeMount* for your applications to read and write data. The following example YAML manifest shows how the previous persistent volume claim can be used to mount a volume at */mnt/azure*:
+When you create a pod definition, you also specify:
+* The persistent volume claim to request the desired storage.
+* The *volumeMount* for your applications to read and write data.
+
+The following example YAML manifest shows how the previous persistent volume claim can be used to mount a volume at */mnt/azure*:
```yaml
kind: Pod
To see how to create dynamic and static volumes that use Azure Disks or Azure Files, see the following how-to articles:
- [Create a dynamic volume using Azure Disks][aks-dynamic-disks]
- [Create a dynamic volume using Azure Files][aks-dynamic-files]
-For additional information on core Kubernetes and AKS concepts, see the following articles:
+For more information on core Kubernetes and AKS concepts, see the following articles:
- [Kubernetes / AKS clusters and workloads][aks-concepts-clusters-workloads] - [Kubernetes / AKS identity][aks-concepts-identity]
aks Developer Best Practices Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/developer-best-practices-resource-management.md
description: Learn the application developer best practices for resource managem
Previously updated : 11/13/2019
Last updated : 03/15/2021

# Best practices for application developers to manage resources in Azure Kubernetes Service (AKS)
-As you develop and run applications in Azure Kubernetes Service (AKS), there are a few key areas to consider. How you manage your application deployments can negatively impact the end-user experience of services that you provide. To help you succeed, keep in mind some best practices you can follow as you develop and run applications in AKS.
+As you develop and run applications in Azure Kubernetes Service (AKS), there are a few key areas to consider. How you manage your application deployments can negatively impact the end-user experience of services that you provide. To succeed, keep in mind some best practices you can follow as you develop and run applications in AKS.
-This best practices article focuses on how to run your cluster and workloads from an application developer perspective. For information about administrative best practices, see [Cluster operator best practices for isolation and resource management in Azure Kubernetes Service (AKS)][operator-best-practices-isolation]. In this article, you learn:
+This article focuses on running your cluster and workloads from an application developer perspective. For information about administrative best practices, see [Cluster operator best practices for isolation and resource management in Azure Kubernetes Service (AKS)][operator-best-practices-isolation]. In this article, you learn:
> [!div class="checklist"]
-> * What are pod resource requests and limits
-> * Ways to develop and deploy applications with Bridge to Kubernetes and Visual Studio Code
-> * How to use the `kube-advisor` tool to check for issues with deployments
+> * Pod resource requests and limits.
+> * Ways to develop and deploy applications with Bridge to Kubernetes and Visual Studio Code.
+> * How to use the `kube-advisor` tool to check for issues with deployments.
## Define pod resource requests and limits
-**Best practice guidance** - Set pod requests and limits on all pods in your YAML manifests. If the AKS cluster uses *resource quotas*, your deployment may be rejected if you don't define these values.
+> **Best practice guidance**
+>
+> Set pod requests and limits on all pods in your YAML manifests. If the AKS cluster uses *resource quotas* and you don't define these values, your deployment may be rejected.
-A primary way to manage the compute resources within an AKS cluster is to use pod requests and limits. These requests and limits let the Kubernetes scheduler know what compute resources a pod should be assigned.
+Use pod requests and limits to manage the compute resources within an AKS cluster. Pod requests and limits inform the Kubernetes scheduler which compute resources to assign to a pod.
-* **Pod CPU/Memory requests** define a set amount of CPU and memory that the pod needs on a regular basis.
- * When the Kubernetes scheduler tries to place a pod on a node, the pod requests are used to determine which node has sufficient resources available for scheduling.
- * Not setting a pod request will default it to the limit defined.
- * It is very important to monitor the performance of your application to adjust these requests. If insufficient pod resource requests are made, your application may receive degraded performance due to over scheduling a node. If requests are overestimated, your application may have increased difficulty getting scheduled.
-* **Pod CPU/Memory limits** are the maximum amount of CPU and memory that a pod can use. Memory limits help define which pods should be killed in the event of node instability due to insufficient resources. Without proper limits set pods will be killed until resource pressure is lifted. A pod may or may not be able to exceed the CPU limit for a period of time, but the pod will not be killed for exceeding the CPU limit.
- * Pod limits help define when a pod has lost control of resource consumption. When a limit is exceeded, the pod is prioritized for killing to maintain node health and minimize impact to pods sharing the node.
- * Not setting a pod limit defaults it to the highest available value on a given node.
- * Don't set a pod limit higher than your nodes can support. Each AKS node reserves a set amount of CPU and memory for the core Kubernetes components. Your application may try to consume too many resources on the node for other pods to successfully run.
- * Again, it is very important to monitor the performance of your application at different times during the day or week. Determine when the peak demand is, and align the pod limits to the resources required to meet the application's max needs.
+### Pod CPU/Memory requests
+*Pod requests* define a set amount of CPU and memory that the pod needs regularly.
In your pod specifications, it's **best practice and very important** to define these requests and limits based on the above information. If you don't include these values, the Kubernetes scheduler can't account for the resources your applications require when making scheduling decisions.
-If the scheduler places a pod on a node with insufficient resources, application performance will be degraded. It is highly recommended for cluster administrators to set *resource quotas* on a namespace that requires you to set resource requests and limits. For more information, see [resource quotas on AKS clusters][resource-quotas].
+Monitor the performance of your application to adjust pod requests.
+* If you underestimate pod requests, your application may receive degraded performance due to over-scheduling a node.
+* If requests are overestimated, your application may have increased difficulty getting scheduled.
+
+### Pod CPU/Memory limits
+*Pod limits* set the maximum amount of CPU and memory that a pod can use.
+
+* *Memory limits* define which pods should be killed when nodes are unstable due to insufficient resources. Without proper limits set, pods will be killed until resource pressure is lifted.
+* While a pod may exceed the *CPU limit* periodically, the pod will not be killed for exceeding the CPU limit.
+
+Pod limits define when a pod has lost control of resource consumption. When a pod exceeds its limit, it's marked for killing. This behavior maintains node health and minimizes impact to pods sharing the node. Not setting a pod limit defaults it to the highest available value on a given node.
+
+Avoid setting a pod limit higher than your nodes can support. Each AKS node reserves a set amount of CPU and memory for the core Kubernetes components. Your application may try to consume too many resources on the node for other pods to successfully run.
+
+Monitor the performance of your application at different times during the day or week. Determine peak demand times and align the pod limits to the resources required to meet maximum needs.
+
+> [!IMPORTANT]
+>
+> In your pod specifications, define these requests and limits based on the above information. Failing to include these values prevents the Kubernetes scheduler from accounting for the resources your applications require when it makes scheduling decisions.
+
+If the scheduler places a pod on a node with insufficient resources, application performance will be degraded. Cluster administrators are strongly encouraged to set *resource quotas* on namespaces, which then requires you to set resource requests and limits in those namespaces. For more information, see [resource quotas on AKS clusters][resource-quotas].
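For instance, a compute quota along these lines (a sketch only; the namespace name and values are hypothetical) makes the API server reject pods in that namespace that don't declare requests and limits:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-team-quota        # hypothetical name
  namespace: dev              # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 4Gi
    limits.cpu: "8"
    limits.memory: 8Gi
```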
When you define a CPU request or limit, the value is measured in CPU units.
* *1.0* CPU equates to one underlying virtual CPU core on the node.
-* The same measurement is used for GPUs.
+ * The same measurement is used for GPUs.
* You can define fractions measured in millicores. For example, *100m* is *0.1* of an underlying vCPU core.

In the following basic example for a single NGINX pod, the pod requests *100m* of CPU time, and *128Mi* of memory. The resource limits for the pod are set to *250m* CPU and *256Mi* memory:
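```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine  # illustrative image tag; requests and limits match the values described above
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
```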
For more information about resource measurements and assignments, see [Managing
## Develop and debug applications against an AKS cluster
-**Best practice guidance** - Development teams should deploy and debug against an AKS cluster using Bridge to Kubernetes.
+> **Best practice guidance**
+>
+> Development teams should deploy and debug against an AKS cluster using Bridge to Kubernetes.
-With Bridge to Kubernetes, you can develop, debug, and test applications directly against an AKS cluster. Developers within a team work together to build and test throughout the application lifecycle. You can continue to use existing tools such as Visual Studio or Visual Studio Code. An extension is installed for Bridge to Kubernetes that allows you to develop directly in an AKS cluster.
+With Bridge to Kubernetes, you can develop, debug, and test applications directly against an AKS cluster. Developers within a team collaborate to build and test throughout the application lifecycle. You can continue to use existing tools such as Visual Studio or Visual Studio Code with the Bridge to Kubernetes extension.
-This integrated development and test process with Bridge to Kubernetes reduces the need for local test environments, such as [minikube][minikube]. Instead, you develop and test against an AKS cluster. This cluster can be secured and isolated as noted in previous section on the use of namespaces to logically isolate a cluster.
+Using an integrated development and test process with Bridge to Kubernetes reduces the need for local test environments like [minikube][minikube]. Instead, you develop and test against an AKS cluster, including secured and isolated clusters.
-Bridge to Kubernetes is intended for use with applications that run on Linux pods and nodes.
+> [!NOTE]
+> Bridge to Kubernetes is intended for use with applications that run on Linux pods and nodes.
-## Use the Visual Studio Code extension for Kubernetes
+## Use the Visual Studio Code (VS Code) extension for Kubernetes
-**Best practice guidance** - Install and use the VS Code extension for Kubernetes when you write YAML manifests. You can also use the extension for integrated deployment solution, which may help application owners that infrequently interact with the AKS cluster.
+> **Best practice guidance**
+>
+> Install and use the VS Code extension for Kubernetes when you write YAML manifests. You can also use the extension as an integrated deployment solution, which may help application owners who infrequently interact with the AKS cluster.
-The [Visual Studio Code extension for Kubernetes][vscode-kubernetes] helps you develop and deploy applications to AKS. The extension provides intellisense for Kubernetes resources, and Helm charts and templates. You can also browse, deploy, and edit Kubernetes resources from within VS Code. The extension also provides an intellisense check for resource requests or limits being set in the pod specifications:
+The [Visual Studio Code extension for Kubernetes][vscode-kubernetes] helps you develop and deploy applications to AKS. The extension provides:
+* IntelliSense for Kubernetes resources, Helm charts, and templates.
+* Browse, deploy, and edit capabilities for Kubernetes resources from within VS Code.
+* An IntelliSense check for resource requests or limits being set in the pod specifications:
-![VS Code extension for Kubernetes warning about missing memory limits](media/developer-best-practices-resource-management/vs-code-kubernetes-extension.png)
+ ![VS Code extension for Kubernetes warning about missing memory limits](media/developer-best-practices-resource-management/vs-code-kubernetes-extension.png)
## Regularly check for application issues with kube-advisor
-**Best practice guidance** - Regularly run the latest version of `kube-advisor` open source tool to detect issues in your cluster. If you apply resource quotas on an existing AKS cluster, run `kube-advisor` first to find pods that don't have resource requests and limits defined.
+> **Best practice guidance**
+>
+> Regularly run the latest version of the `kube-advisor` open-source tool to detect issues in your cluster. Run `kube-advisor` before applying resource quotas on an existing AKS cluster to find pods that don't have resource requests and limits defined.
-The [kube-advisor][kube-advisor] tool is an associated AKS open source project that scans a Kubernetes cluster and reports on issues that it finds. One useful check is to identify pods that don't have resource requests and limits in place.
+The [kube-advisor][kube-advisor] tool is an associated AKS open-source project that scans a Kubernetes cluster and reports on identified issues. One useful check is to identify pods without resource requests and limits in place.
-The kube-advisor tool can report on resource request and limits missing in PodSpecs for Windows applications as well as Linux applications, but the kube-advisor tool itself must be scheduled on a Linux pod. You can schedule a pod to run on a node pool with a specific OS using a [node selector][k8s-node-selector] in the pod's configuration.
+While the `kube-advisor` tool can report on resource requests and limits missing in PodSpecs for Windows and Linux applications, `kube-advisor` itself must be scheduled on a Linux pod. Use a [node selector][k8s-node-selector] in the pod's configuration to schedule a pod to run on a node pool with a specific OS.
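For example, a pod spec along these lines (a minimal sketch; the image path follows the kube-advisor instructions, but treat it as an assumption) pins the tool to Linux nodes using the well-known OS label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubeadvisor            # illustrative name
spec:
  restartPolicy: Never
  nodeSelector:
    kubernetes.io/os: linux    # schedule only on Linux nodes
  containers:
  - name: kubeadvisor
    image: mcr.microsoft.com/aks/kubeadvisor   # assumed image path
```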
-In an AKS cluster that hosts many development teams and applications, it can be hard to track pods without these resource requests and limits set. As a best practice, regularly run `kube-advisor` on your AKS clusters.
+In an AKS cluster that hosts many development teams and applications, it can be hard to track pods that don't have resource requests and limits set. As a best practice, regularly run `kube-advisor` on your AKS clusters.
## Next steps
-This best practices article focused on how to run your cluster and workloads from a cluster operator perspective. For information about administrative best practices, see [Cluster operator best practices for isolation and resource management in Azure Kubernetes Service (AKS)][operator-best-practices-isolation].
+This article focused on how to run your cluster and workloads from an application developer perspective. For information about administrative best practices, see [Cluster operator best practices for isolation and resource management in Azure Kubernetes Service (AKS)][operator-best-practices-isolation].
To implement some of these best practices, see the following articles:
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS with the Azure CLI Quickstart.
[aks-master-logs]: ./view-control-plane-logs.md
[aks-supported versions]: supported-kubernetes-versions.md
[concepts-clusters-workloads]: concepts-clusters-workloads.md
-[kubernetes-rbac]: concepts-identity.md#kubernetes-role-based-access-control-kubernetes-rbac
+[kubernetes-rbac]: concepts-identity.md#kubernetes-rbac
[concepts-identity]: concepts-identity.md
[concepts-storage]: concepts-storage.md
[conf-com-node]: ../confidential-computing/confidential-nodes-aks-overview.md
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/managed-aad.md
AKS-managed Azure AD integration is designed to simplify the Azure AD integratio
Cluster administrators can configure Kubernetes role-based access control (Kubernetes RBAC) based on a user's identity or directory group membership. Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID Connect, see the [Open ID connect documentation][open-id-connect].
-Learn more about the Azure AD integration flow on the [Azure Active Directory integration concepts documentation](concepts-identity.md#azure-active-directory-integration).
+Learn more about the Azure AD integration flow on the [Azure Active Directory integration concepts documentation](concepts-identity.md#azure-ad-integration).
## Limitations
aks Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/node-auto-repair.md
Title: Automatically repairing Azure Kubernetes Service (AKS) nodes
description: Learn about node auto-repair functionality, and how AKS fixes broken worker nodes. Previously updated : 08/24/2020 Last updated : 03/11/2021
# Azure Kubernetes Service (AKS) node auto-repair
-AKS continuously checks the health state of worker nodes and performs automatic repair of the nodes if they become unhealthy. This document informs operators about how automatic node repair functionality behaves for both Windows and Linux nodes. In addition to AKS repairs, the Azure VM platform [performs maintenance on Virtual Machines][vm-updates] that experience issues as well. AKS and Azure VMs work together to minimize service disruptions for clusters.
+AKS continuously monitors the health state of worker nodes and performs automatic node repair if they become unhealthy. The Azure virtual machine (VM) platform [performs maintenance on VMs][vm-updates] experiencing issues.
-## How AKS checks for unhealthy nodes
+AKS and Azure VMs work together to minimize service disruptions for clusters.
+
+In this document, you'll learn how automatic node repair functionality behaves for both Windows and Linux nodes.
-AKS uses rules to determine if a node is unhealthy and needs repair. AKS uses the following rules to determine if automatic repair is needed.
+## How AKS checks for unhealthy nodes
-* The node reports status of **NotReady** on consecutive checks within a 10-minute timeframe
-* The node doesn't report a status within 10 minutes
+AKS uses the following rules to determine if a node is unhealthy and needs repair:
+* The node reports **NotReady** status on consecutive checks within a 10-minute timeframe.
+* The node doesn't report any status within 10 minutes.
You can manually check the health state of your nodes with kubectl.
```console
kubectl get nodes
```
> [!Note]
> AKS initiates repair operations with the user account **aks-remediator**.
-If a node is unhealthy based on the rules above and remains unhealthy for 10 consecutive minutes, the following actions are taken.
+If AKS identifies an unhealthy node that remains unhealthy for 10 minutes, AKS takes the following actions:
+
+1. Reboot the node.
+1. If the reboot is unsuccessful, reimage the node.
+1. If the reimage is unsuccessful, create and reimage a new node.
-1. Reboot the node
-1. If the reboot is unsuccessful, reimage the node
-1. If the reimage is unsuccessful, create and reimage a new node
+If auto-repair is unsuccessful, AKS engineers investigate alternative remediations.
-If none of the actions are successful, additional remediations are investigated by AKS engineers. If multiple nodes are unhealthy during a health check, each node is repaired individually before another repair begins.
+If AKS finds multiple unhealthy nodes during a health check, each node is repaired individually before another repair begins.
## Next steps
aks Operator Best Practices Advanced Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-advanced-scheduler.md
description: Learn the cluster operator best practices for using advanced scheduler features such as taints and tolerations, node selectors and affinity, or inter-pod affinity and anti-affinity in Azure Kubernetes Service (AKS) Previously updated : 11/26/2018 Last updated : 03/09/2021
# Best practices for advanced scheduler features in Azure Kubernetes Service (AKS)
-As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. The Kubernetes scheduler provides advanced features that let you control which pods can be scheduled on certain nodes, or how multi-pod applications can be appropriately distributed across the cluster.
+As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. Advanced features provided by the Kubernetes scheduler let you control:
+* Which pods can be scheduled on certain nodes.
+* How multi-pod applications can be appropriately distributed across the cluster.
This best practices article focuses on advanced Kubernetes scheduling features for cluster operators. In this article, you learn how to:
> [!div class="checklist"]
-> * Use taints and tolerations to limit what pods can be scheduled on nodes
-> * Give preference to pods to run on certain nodes with node selectors or node affinity
-> * Split apart or group together pods with inter-pod affinity or anti-affinity
+> * Use taints and tolerations to limit what pods can be scheduled on nodes.
+> * Give preference to pods to run on certain nodes with node selectors or node affinity.
+> * Split apart or group together pods with inter-pod affinity or anti-affinity.
## Provide dedicated nodes using taints and tolerations
-**Best practice guidance** - Limit access for resource-intensive applications, such as ingress controllers, to specific nodes. Keep node resources available for workloads that require them, and don't allow scheduling of other workloads on the nodes.
> **Best practice guidance**
+>
+> Limit access for resource-intensive applications, such as ingress controllers, to specific nodes. Keep node resources available for workloads that require them, and don't allow scheduling of other workloads on the nodes.
-When you create your AKS cluster, you can deploy nodes with GPU support or a large number of powerful CPUs. These nodes are often used for large data processing workloads such as machine learning (ML) or artificial intelligence (AI). As this type of hardware is typically an expensive node resource to deploy, limit the workloads that can be scheduled on these nodes. You may instead wish to dedicate some nodes in the cluster to run ingress services, and prevent other workloads.
+When you create your AKS cluster, you can deploy nodes with GPU support or a large number of powerful CPUs. You can use these nodes for large data processing workloads such as machine learning (ML) or artificial intelligence (AI).
+
+Since this node resource hardware is typically expensive to deploy, limit the workloads that can be scheduled on these nodes. For example, you might dedicate some nodes in the cluster to run ingress services and prevent other workloads from running on them.
This support for different nodes is provided by using multiple node pools. An AKS cluster provides one or more node pools.
-The Kubernetes scheduler can use taints and tolerations to restrict what workloads can run on nodes.
+The Kubernetes scheduler uses taints and tolerations to restrict what workloads can run on nodes.
-* A **taint** is applied to a node that indicates only specific pods can be scheduled on them.
-* A **toleration** is then applied to a pod that allows them to *tolerate* a node's taint.
+* Apply a **taint** to a node to indicate only specific pods can be scheduled on it.
+* Then apply a **toleration** to a pod, allowing it to *tolerate* the node's taint.
-When you deploy a pod to an AKS cluster, Kubernetes only schedules pods on nodes where a toleration is aligned with the taint. As an example, assume you have a node pool in your AKS cluster for nodes with GPU support. You define name, such as *gpu*, then a value for scheduling. If you set this value to *NoSchedule*, the Kubernetes scheduler can't schedule pods on the node if the pod doesn't define the appropriate toleration.
+When you deploy a pod to an AKS cluster, Kubernetes only schedules pods on nodes whose taint aligns with the toleration. For example, assume you have a node pool in your AKS cluster for nodes with GPU support. You define a name, such as *gpu*, then a value for scheduling. Setting this value to *NoSchedule* prevents the Kubernetes scheduler from placing pods that don't define the matching toleration on the node.
```console
kubectl taint node aks-nodepool1 sku=gpu:NoSchedule
```
-With a taint applied to nodes, you then define a toleration in the pod specification that allows scheduling on the nodes. The following example defines the `sku: gpu` and `effect: NoSchedule` to tolerate the taint applied to the node in the previous step:
+With a taint applied to nodes, you'll define a toleration in the pod specification that allows scheduling on the nodes. The following example defines the `sku: gpu` and `effect: NoSchedule` to tolerate the taint applied to the node in the previous step:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: app                     # illustrative name
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11  # illustrative image
  tolerations:
  - key: "sku"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
```
-When this pod is deployed, such as using `kubectl apply -f gpu-toleration.yaml`, Kubernetes can successfully schedule the pod on the nodes with the taint applied. This logical isolation lets you control access to resources within a cluster.
+When this pod is deployed using `kubectl apply -f gpu-toleration.yaml`, Kubernetes can successfully schedule the pod on the nodes with the taint applied. This logical isolation lets you control access to resources within a cluster.
When you apply taints, work with your application developers and owners to allow them to define the required tolerations in their deployments.
For more information about how to use multiple node pools in AKS, see [Create an
When you upgrade a node pool in AKS, taints and tolerations follow a set pattern as they're applied to new nodes:

-- **Default clusters that use virtual machine scale sets**
- - You can [taint a nodepool][taint-node-pool] from the AKS API, to have newly scaled out nodes receive API specified node taints.
- - Let's assume you have a two-node cluster - *node1* and *node2*. You upgrade the node pool.
- - Two additional nodes are created, *node3* and *node4*, and the taints are passed on respectively.
- - The original *node1* and *node2* are deleted.
+#### Default clusters that use VM scale sets
+You can [taint a node pool][taint-node-pool] from the AKS API to have newly scaled-out nodes receive API-specified node taints.
+
+Let's assume:
+1. You begin with a two-node cluster: *node1* and *node2*.
+1. You upgrade the node pool.
+1. Two additional nodes are created: *node3* and *node4*.
+1. The taints are passed on respectively.
+1. The original *node1* and *node2* are deleted.
+
+#### Clusters without VM scale set support
+
+Again, let's assume:
+1. You have a two-node cluster: *node1* and *node2*.
+1. You upgrade the node pool.
+1. An additional node is created: *node3*.
+1. The taints from *node1* are applied to *node3*.
+1. *node1* is deleted.
+1. A new *node1* is created to replace the original *node1*.
+1. The *node2* taints are applied to the new *node1*.
+1. *node2* is deleted.
-- **Clusters without virtual machine scale set support**
- - Again, let's assume you have a two-node cluster - *node1* and *node2*. When you upgrade, an additional node (*node3*) is created.
- - The taints from *node1* are applied to *node3*, then *node1* is then deleted.
- - Another new node is created (named *node1*, since the previous *node1* was deleted), and the *node2* taints are applied to the new *node1*. Then, *node2* is deleted.
- - In essence *node1* becomes *node3*, and *node2* becomes *node1*.
+In essence *node1* becomes *node3*, and *node2* becomes the new *node1*.
When you scale a node pool in AKS, taints and tolerations do not carry over by design.

## Control pod scheduling using node selectors and affinity
-**Best practice guidance** - Control the scheduling of pods on nodes using node selectors, node affinity, or inter-pod affinity. These settings allow the Kubernetes scheduler to logically isolate workloads, such as by hardware in the node.
+> **Best practice guidance**
+>
+> Control the scheduling of pods on nodes using node selectors, node affinity, or inter-pod affinity. These settings allow the Kubernetes scheduler to logically isolate workloads, such as by hardware in the node.
-Taints and tolerations are used to logically isolate resources with a hard cut-off - if the pod doesn't tolerate a node's taint, it isn't scheduled on the node. An alternate approach is to use node selectors. You label nodes, such as to indicate locally attached SSD storage or a large amount of memory, and then define in the pod specification a node selector. Kubernetes then schedules those pods on a matching node. Unlike tolerations, pods without a matching node selector can be scheduled on labeled nodes. This behavior allows unused resources on the nodes to consume, but gives priority to pods that define the matching node selector.
+Taints and tolerations logically isolate resources with a hard cut-off. If the pod doesn't tolerate a node's taint, it isn't scheduled on the node.
-Let's look at an example of nodes with a high amount of memory. These nodes can give preference to pods that request a high amount of memory. To make sure that the resources don't sit idle, they also allow other pods to run.
+Alternatively, you can use node selectors. For example, you label nodes to indicate locally attached SSD storage or a large amount of memory, and then define a node selector in the pod specification. Kubernetes schedules those pods on a matching node.
+
+Unlike tolerations, pods without a matching node selector can still be scheduled on labeled nodes. This behavior allows unused resources on the nodes to be consumed, but prioritizes pods that define the matching node selector.
+
+Let's look at an example of nodes with a high amount of memory. These nodes prioritize pods that request a high amount of memory. To ensure the resources don't sit idle, they also allow other pods to run.
```console
kubectl label node aks-nodepool1 hardware=highmem
```
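A pod spec can then target the labeled nodes with a matching `nodeSelector` (a minimal sketch; the pod name and image are illustrative):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: memory-hungry-app      # illustrative name
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11  # illustrative image
  nodeSelector:
    hardware: highmem          # matches the label applied above
```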
For more information about using node selectors, see [Assigning Pods to Nodes][k
### Node affinity
-A node selector is a basic way to assign pods to a given node. More flexibility is available using *node affinity*. With node affinity, you define what happens if the pod can't be matched with a node. You can *require* that Kubernetes scheduler matches a pod with a labeled host. Or, you can *prefer* a match but allow the pod to be scheduled on a different host if no match is available.
+A node selector is a basic solution for assigning pods to a given node. *Node affinity* provides more flexibility, allowing you to define what happens if the pod can't be matched with a node. You can:
+* *Require* that Kubernetes scheduler matches a pod with a labeled host. Or,
+* *Prefer* a match but allow the pod to be scheduled on a different host if no match is available.
-The following example sets the node affinity to *requiredDuringSchedulingIgnoredDuringExecution*. This affinity requires the Kubernetes schedule to use a node with a matching label. If no node is available, the pod has to wait for scheduling to continue. To allow the pod to be scheduled on a different node, you can instead set the value to *preferredDuringSchedulingIgnoreDuringExecution*:
+The following example sets the node affinity to *requiredDuringSchedulingIgnoredDuringExecution*. This affinity requires the Kubernetes scheduler to use a node with a matching label. If no node is available, the pod has to wait for scheduling to continue. To allow the pod to be scheduled on a different node, you can instead set the value to *preferredDuringSchedulingIgnoredDuringExecution*:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: app                     # illustrative name
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11  # illustrative image
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: hardware
            operator: In
            values:
            - highmem
```
-The *IgnoredDuringExecution* part of the setting indicates that if the node labels change, the pod shouldn't be evicted from the node. The Kubernetes scheduler only uses the updated node labels for new pods being scheduled, not pods already scheduled on the nodes.
+The *IgnoredDuringExecution* part of the setting indicates that the pod shouldn't be evicted from the node if the node labels change. The Kubernetes scheduler only uses the updated node labels for new pods being scheduled, not pods already scheduled on the nodes.
For more information, see [Affinity and anti-affinity][k8s-affinity].

### Inter-pod affinity and anti-affinity
-One final approach for the Kubernetes scheduler to logically isolate workloads is using inter-pod affinity or anti-affinity. The settings define that pods *shouldn't* be scheduled on a node that has an existing matching pod, or that they *should* be scheduled. By default, the Kubernetes scheduler tries to schedule multiple pods in a replica set across nodes. You can define more specific rules around this behavior.
+One final approach for the Kubernetes scheduler to logically isolate workloads is using inter-pod affinity or anti-affinity. These settings define that pods either *shouldn't* or *should* be scheduled on a node that has an existing matching pod. By default, the Kubernetes scheduler tries to schedule multiple pods in a replica set across nodes. You can define more specific rules around this behavior.
-A good example is a web application that also uses an Azure Cache for Redis. You can use pod anti-affinity rules to request that the Kubernetes scheduler distributes replicas across nodes. You can then use affinity rules to make sure that each web app component is scheduled on the same host as a corresponding cache. The distribution of pods across nodes looks like the following example:
+For example, you have a web application that also uses an Azure Cache for Redis.
+1. You use pod anti-affinity rules to request that the Kubernetes scheduler distributes replicas across nodes.
+1. You use affinity rules to ensure each web app component is scheduled on the same host as a corresponding cache.
+
+The distribution of pods across nodes looks like the following example:
| **Node 1** | **Node 2** | **Node 3** |
|------------|------------|------------|
| webapp-1   | webapp-2   | webapp-3   |
| cache-1    | cache-2    | cache-3    |
-This example is a more complex deployment than the use of node selectors or node affinity. The deployment gives you control over how Kubernetes schedules pods on nodes and can logically isolate resources. For a complete example of this web application with Azure Cache for Redis example, see [Co-locate pods on the same node][k8s-pod-affinity].
+Inter-pod affinity and anti-affinity provide a more complex deployment than node selectors or node affinity. With the deployment, you logically isolate resources and control how Kubernetes schedules pods on nodes.
+
+For a complete example of this web application with Azure Cache for Redis, see [Co-locate pods on the same node][k8s-pod-affinity].
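As a rough sketch of the anti-affinity half (assuming the cache pods carry an `app: cache` label; the pod name and image are illustrative), each cache pod can repel its peers so replicas spread across nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-1                # illustrative name
  labels:
    app: cache
spec:
  affinity:
    podAntiAffinity:           # avoid nodes already running an app=cache pod
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache
        topologyKey: kubernetes.io/hostname
  containers:
  - name: cache
    image: mcr.microsoft.com/oss/bitnami/redis:6.0.8   # illustrative image
```

The web app pods would use a matching `podAffinity` term instead, so each lands next to a cache replica.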
## Next steps
aks Operator Best Practices Cluster Isolation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-cluster-isolation.md
description: Learn the cluster operator best practices for isolation in Azure Kubernetes Service (AKS) Previously updated : 11/26/2018 Last updated : 03/09/2021
# Best practices for cluster isolation in Azure Kubernetes Service (AKS)
-As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. AKS provides flexibility in how you can run multi-tenant clusters and isolate resources. To maximize your investment in Kubernetes, these multi-tenancy and isolation features should be understood and implemented.
+As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. AKS provides flexibility in how you can run multi-tenant clusters and isolate resources. To maximize your investment in Kubernetes, first understand and implement AKS multi-tenancy and isolation features.
This best practices article focuses on isolation for cluster operators. In this article, you learn how to:
This best practices article focuses on isolation for cluster operators. In this
## Design clusters for multi-tenancy
-Kubernetes provides features that let you logically isolate teams and workloads in the same cluster. The goal should be to provide the least number of privileges, scoped to the resources each team needs. A [Namespace][k8s-namespaces] in Kubernetes creates a logical isolation boundary. Additional Kubernetes features and considerations for isolation and multi-tenancy include the following areas:
+Kubernetes lets you logically isolate teams and workloads in the same cluster. The goal is to provide the least number of privileges, scoped to the resources each team needs. A Kubernetes [Namespace][k8s-namespaces] creates a logical isolation boundary. Additional Kubernetes features and considerations for isolation and multi-tenancy include the following areas:
-* **Scheduling** includes the use of basic features such as resource quotas and pod disruption budgets. For more information about these features, see [Best practices for basic scheduler features in AKS][aks-best-practices-scheduler].
- * More advanced scheduler features include taints and tolerations, node selectors, and node and pod affinity or anti-affinity. For more information about these features, see [Best practices for advanced scheduler features in AKS][aks-best-practices-advanced-scheduler].
-* **Networking** includes the use of network policies to control the flow of traffic in and out of pods.
-* **Authentication and authorization** include the user of role-based access control (RBAC) and Azure Active Directory (AD) integration, pod identities, and secrets in Azure Key Vault. For more information about these features, see [Best practices for authentication and authorization in AKS][aks-best-practices-identity].
-* **Containers** includes the Azure Policy Add-on for AKS to enforce pod security, the use of pod security contexts, and scanning both images and the runtime for vulnerabilities. Also involves using App Armor or Seccomp (Secure Computing) to restrict container access to the underlying node.
+### Scheduling
+
+*Scheduling* uses basic features such as resource quotas and pod disruption budgets. For more information about these features, see [Best practices for basic scheduler features in AKS][aks-best-practices-scheduler].
+
+More advanced scheduler features include:
+* Taints and tolerations
+* Node selectors
+* Node and pod affinity or anti-affinity
+
+For more information about these features, see [Best practices for advanced scheduler features in AKS][aks-best-practices-advanced-scheduler].
+
+### Networking
+
+*Networking* uses network policies to control the flow of traffic in and out of pods.
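+For instance, a policy shaped like this (a minimal sketch; the policy name, namespace, and labels are hypothetical) admits ingress traffic to backend pods only from frontend pods:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: backend-allow-frontend   # hypothetical name
+  namespace: dev                 # hypothetical namespace
+spec:
+  podSelector:
+    matchLabels:
+      app: backend               # pods the policy protects
+  ingress:
+  - from:
+    - podSelector:
+        matchLabels:
+          app: frontend          # only frontend pods may connect
+```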
+
+### Authentication and authorization
+
+*Authentication and authorization* uses:
+* Role-based access control (RBAC)
+* Azure Active Directory (AD) integration
+* Pod identities
+* Secrets in Azure Key Vault
+
+For more information about these features, see [Best practices for authentication and authorization in AKS][aks-best-practices-identity].
+
+### Containers
+*Containers* include:
+* The Azure Policy Add-on for AKS to enforce pod security.
+* The use of pod security contexts.
+* Scanning both images and the runtime for vulnerabilities.
+* Using App Armor or Seccomp (Secure Computing) to restrict container access to the underlying node.
## Logically isolate clusters
-**Best practice guidance** - Use logical isolation to separate teams and projects. Try to minimize the number of physical AKS clusters you deploy to isolate teams or applications.
+> **Best practice guidance**
+>
+> Separate teams and projects using *logical isolation*. Minimize the number of physical AKS clusters you deploy to isolate teams or applications.
With logical isolation, a single AKS cluster can be used for multiple workloads, teams, or environments. Kubernetes [Namespaces][k8s-namespaces] form the logical isolation boundary for workloads and resources.

![Logical isolation of a Kubernetes cluster in AKS](media/operator-best-practices-cluster-isolation/logical-isolation.png)
-Logical separation of clusters usually provides a higher pod density than physically isolated clusters. There's less excess compute capacity that sits idle in the cluster. When combined with the Kubernetes cluster autoscaler, you can scale the number of nodes up or down to meet demands. This best practice approach to autoscaling lets you run only the number of nodes required and minimizes costs.
+Logical separation of clusters usually provides a higher pod density than physically isolated clusters, with less excess compute capacity sitting idle in the cluster. When combined with the Kubernetes cluster autoscaler, you can scale the number of nodes up or down to meet demands. This best practice approach to autoscaling minimizes costs by running only the number of nodes required.
+
+Currently, Kubernetes environments aren't completely safe for hostile multi-tenant usage. In a multi-tenant environment, multiple tenants are working on a common, shared infrastructure. If all tenants cannot be trusted, you will need extra planning to prevent tenants from impacting the security and service of others.
+
+Additional security features, like *Pod Security Policies* or Kubernetes RBAC for nodes, make exploits more difficult. For true security when running hostile multi-tenant workloads, you should only trust a hypervisor. The security domain for Kubernetes becomes the entire cluster, not an individual node.
-Kubernetes environments, in AKS or elsewhere, aren't completely safe for hostile multi-tenant usage. In a multi-tenant environment multiple tenants are working on a common, shared infrastructure. As a result if all tenants cannot be trusted, you need to do additional planning to avoid one tenant impacting the security and service of another. Additional security features such as *Pod Security Policy* and more fine-grained role-based access control (RBAC) for nodes make exploits more difficult. However, for true security when running hostile multi-tenant workloads, a hypervisor is the only level of security that you should trust. The security domain for Kubernetes becomes the entire cluster, not an individual node. For these types of hostile multi-tenant workloads, you should use physically isolated clusters.
+For these types of hostile multi-tenant workloads, you should use physically isolated clusters.
## Physically isolate clusters
-**Best practice guidance** - Minimize the use of physical isolation for each separate team or application deployment. Instead, use *logical* isolation, as discussed in the previous section.
+> **Best practice guidance**
+>
+> Minimize the use of physical isolation for each separate team or application deployment. Instead, use *logical* isolation, as discussed in the previous section.
-A common approach to cluster isolation is to use physically separate AKS clusters. In this isolation model, teams or workloads are assigned their own AKS cluster. This approach often looks like the easiest way to isolate workloads or teams, but adds additional management and financial overhead. You now have to maintain these multiple clusters, and have to individually provide access and assign permissions. You're also billed for all the individual nodes.
+Physically separating AKS clusters is a common approach to cluster isolation. In this isolation model, teams or workloads are assigned their own AKS cluster. While physical isolation might look like the easiest way to isolate workloads or teams, it adds management and financial overhead. Now, you must maintain these multiple clusters and individually provide access and assign permissions. You'll also be billed for each individual node.
![Physical isolation of individual Kubernetes clusters in AKS](media/operator-best-practices-cluster-isolation/physical-isolation.png)
-Physically separate clusters usually have a low pod density. As each team or workload has their own AKS cluster, the cluster is often over-provisioned with compute resources. Often, a small number of pods are scheduled on those nodes. Unused capacity on the nodes can't be used for applications or services in development by other teams. These excess resources contribute to the additional costs in physically separate clusters.
+Physically separate clusters usually have a low pod density. Since each team or workload has their own AKS cluster, the cluster is often over-provisioned with compute resources. Often, a small number of pods are scheduled on those nodes. Unclaimed node capacity can't be used for applications or services in development by other teams. These excess resources contribute to the additional costs in physically separate clusters.
## Next steps
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-cluster-security.md
description: Learn the cluster operator best practices for how to manage cluster security and upgrades in Azure Kubernetes Service (AKS) Previously updated : 11/12/2020 Last updated : 04/07/2021
# Best practices for cluster security and upgrades in Azure Kubernetes Service (AKS)
-As you manage clusters in Azure Kubernetes Service (AKS), the security of your workloads and data is a key consideration. Especially when you run multi-tenant clusters using logical isolation, you need to secure access to resources and workloads. To minimize the risk of attack, you also need to make sure you apply the latest Kubernetes and node OS security updates.
+As you manage clusters in Azure Kubernetes Service (AKS), workload and data security is a key consideration. When you run multi-tenant clusters using logical isolation, you especially need to secure resource and workload access. Minimize the risk of attack by applying the latest Kubernetes and node OS security updates.
This article focuses on how to secure your AKS cluster. You learn how to:
> [!div class="checklist"]
-> * Use Azure Active Directory and Kubernetes role-based access control (Kubernetes RBAC) to secure API server access
-> * Secure container access to node resources
-> * Upgrade an AKS cluster to the latest Kubernetes version
-> * Keep nodes up to date and automatically apply security patches
+> * Use Azure Active Directory and Kubernetes role-based access control (Kubernetes RBAC) to secure API server access.
+> * Secure container access to node resources.
+> * Upgrade an AKS cluster to the latest Kubernetes version.
+> * Keep nodes up to date and automatically apply security patches.
You can also read the best practices for [container image management][best-practices-container-image-management] and for [pod security][best-practices-pod-security].
You can also use [Azure Kubernetes Services integration with Security Center][se
## Secure access to the API server and cluster nodes
-**Best practice guidance** - Securing access to the Kubernetes API-Server is one of the most important things you can do to secure your cluster. Integrate Kubernetes role-based access control (Kubernetes RBAC) with Azure Active Directory to control access to the API server. These controls let you secure AKS the same way that you secure access to your Azure subscriptions.
+> **Best practice guidance**
+>
+> One of the most important ways to secure your cluster is to secure access to the Kubernetes API server. To control access to the API server, integrate Kubernetes RBAC with Azure Active Directory (Azure AD). With these controls, you secure AKS the same way that you secure access to your Azure subscriptions.
-The Kubernetes API server provides a single connection point for requests to perform actions within a cluster. To secure and audit access to the API server, limit access and provide the least privileged access permissions required. This approach isn't unique to Kubernetes, but is especially important when the AKS cluster is logically isolated for multi-tenant use.
+The Kubernetes API server provides a single connection point for requests to perform actions within a cluster. To secure and audit access to the API server, limit access and provide the lowest possible permission levels. While this approach isn't unique to Kubernetes, it's especially important when you've logically isolated your AKS cluster for multi-tenant use.
-Azure Active Directory (AD) provides an enterprise-ready identity management solution that integrates with AKS clusters. As Kubernetes doesn't provide an identity management solution, it can otherwise be hard to provide a granular way to restrict access to the API server. With Azure AD-integrated clusters in AKS, you use your existing user and group accounts to authenticate users to the API server.
+Azure AD provides an enterprise-ready identity management solution that integrates with AKS clusters. Since Kubernetes doesn't provide an identity management solution, you may be hard-pressed to granularly restrict access to the API server. With Azure AD-integrated clusters in AKS, you use your existing user and group accounts to authenticate users to the API server.
![Azure Active Directory integration for AKS clusters](media/operator-best-practices-cluster-security/aad-integration.png)
-Use Kubernetes RBAC and Azure AD-integration to secure the API server and provide the least number of permissions required to a scoped set of resources, such as a single namespace. Different users or groups in Azure AD can be granted different Kubernetes roles. These granular permissions let you restrict access to the API server, and provide a clear audit trail of actions performed.
+Using Kubernetes RBAC and Azure AD-integration, you can secure the API server and provide the minimum permissions required to a scoped resource set, like a single namespace. You can grant different Azure AD users or groups different Kubernetes roles. With granular permissions, you can restrict access to the API server and provide a clear audit trail of actions performed.
-The recommended best practice is to use groups to provide access to files and folders versus individual identities, use Azure AD *group* membership to bind users to Kubernetes roles rather than individual *users*. As a user's group membership changes, their access permissions on the AKS cluster would change accordingly. If you bind the user directly to a role, their job function may change. The Azure AD group memberships would update, but permissions on the AKS cluster would not reflect that. In this scenario, the user ends up being granted more permissions than a user requires.
+The recommended best practice is to use *groups* to provide access to files and folders instead of individual identities. For example, use an Azure AD *group* membership to bind users to Kubernetes roles rather than individual *users*. As a user's group membership changes, their access permissions on the AKS cluster change accordingly.
+
+Meanwhile, let's say you bind the individual user directly to a role and their job function changes. While the Azure AD group memberships update, their permissions on the AKS cluster would not. In this scenario, the user ends up with more permissions than they require.
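A sketch of the group-based pattern (the namespace, role name, and group object ID below are placeholders) binds an Azure AD group, referenced by its object ID, to a namespaced role:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-access          # placeholder name
  namespace: dev                 # placeholder namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev-user-full-access     # placeholder Role defined separately
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: "00000000-0000-0000-0000-000000000000"  # Azure AD group object ID (placeholder)
```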
For more information about Azure AD integration, Kubernetes RBAC, and Azure RBAC, see [Best practices for authentication and authorization in AKS][aks-best-practices-identity].

## Secure container access to resources
-**Best practice guidance** - Limit access to actions that containers can perform. Provide the least number of permissions, and avoid the use of root / privileged escalation.
+> **Best practice guidance**
+>
+> Limit access to actions that containers can perform. Provide the least number of permissions, and avoid the use of root access or privileged escalation.
+
+In the same way that you should grant users or groups the minimum privileges required, you should also limit containers to only necessary actions and processes. To minimize the risk of attack, avoid configuring applications and containers that require escalated privileges or root access.
+
+For example, set `allowPrivilegeEscalation: false` in the pod manifest. These built-in Kubernetes *pod security contexts* let you define additional permissions, such as the user or group to run as, or the Linux capabilities to expose. For more best practices, see [Secure pod access to resources][pod-security-contexts].
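A minimal sketch of that setting in context (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo   # illustrative name
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11  # illustrative image
    securityContext:
      runAsUser: 1000                  # run as a non-root user
      allowPrivilegeEscalation: false  # block privilege escalation
```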
-In the same way that you should grant users or groups the least number of privileges required, containers should also be limited to only the actions and processes that they need. To minimize the risk of attack, don't configure applications and containers that require escalated privileges or root access. For example, set `allowPrivilegeEscalation: false` in the pod manifest. These *pod security contexts* are built in to Kubernetes and let you define additional permissions such as the user or group to run as, or what Linux capabilities to expose. For more best practices, see [Secure pod access to resources][pod-security-contexts].
+For even more granular control of container actions, you can also use built-in Linux security features such as *AppArmor* and *seccomp*.
+1. Define Linux security features at the node level.
+1. Implement features through a pod manifest.
-For more granular control of container actions, you can also use built-in Linux security features such as *AppArmor* and *seccomp*. These features are defined at the node level, and then implemented through a pod manifest. Built-in Linux security features are only available on Linux nodes and pods.
+Built-in Linux security features are only available on Linux nodes and pods.
> [!NOTE]
-> Kubernetes environments, in AKS or elsewhere, aren't completely safe for hostile multi-tenant usage. Additional security features such as *AppArmor*, *seccomp*, *Pod Security Policies*, or more fine-grained Kubernetes role-based access control (Kubernetes RBAC) for nodes make exploits more difficult. However, for true security when running hostile multi-tenant workloads, a hypervisor is the only level of security that you should trust. The security domain for Kubernetes becomes the entire cluster, not an individual node. For these types of hostile multi-tenant workloads, you should use physically isolated clusters.
+> Currently, Kubernetes environments aren't completely safe for hostile multi-tenant usage. Additional security features, like *AppArmor*, *seccomp*, *Pod Security Policies*, or Kubernetes RBAC for nodes, make exploits more difficult.
+>
+> For true security when running hostile multi-tenant workloads, only trust a hypervisor. The security domain for Kubernetes becomes the entire cluster, not an individual node.
+>
+> For these types of hostile multi-tenant workloads, you should use physically isolated clusters.
### App Armor
-To limit the actions that containers can perform, you can use the [AppArmor][k8s-apparmor] Linux kernel security module. AppArmor is available as part of the underlying AKS node OS, and is enabled by default. You create AppArmor profiles that restrict actions such as read, write, or execute, or system functions such as mounting filesystems. Default AppArmor profiles restrict access to various `/proc` and `/sys` locations, and provide a means to logically isolate containers from the underlying node. AppArmor works for any application that runs on Linux, not just Kubernetes pods.
+To limit container actions, you can use the [AppArmor][k8s-apparmor] Linux kernel security module. AppArmor is available as part of the underlying AKS node OS, and is enabled by default. You create AppArmor profiles that restrict read, write, or execute actions, or system functions like mounting filesystems. Default AppArmor profiles restrict access to various `/proc` and `/sys` locations, and provide a means to logically isolate containers from the underlying node. AppArmor works for any application that runs on Linux, not just Kubernetes pods.
![AppArmor profiles in use in an AKS cluster to limit container actions](media/operator-best-practices-container-security/apparmor.png)
-To see AppArmor in action, the following example creates a profile that prevents writing to files. [SSH][aks-ssh] to an AKS node, then create a file named *deny-write.profile* and paste the following content:
+To see AppArmor in action, the following example creates a profile that prevents writing to files.
+1. [SSH][aks-ssh] to an AKS node.
+1. Create a file named *deny-write.profile*.
+1. Paste the following content:
-```
-#include <tunables/global>
-profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
- #include <abstractions/base>
+ ```
+ #include <tunables/global>
+ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
+ #include <abstractions/base>
- file,
- # Deny all file writes.
- deny /** w,
-}
-```
-
-AppArmor profiles are added using the `apparmor_parser` command. Add the profile to AppArmor and specify the name of the profile created in the previous step:
-
-```console
-sudo apparmor_parser deny-write.profile
-```
-
-There's no output returned if the profile is correctly parsed and applied to AppArmor. You're returned to the command prompt.
-
-From your local machine, now create a pod manifest named *aks-apparmor.yaml* and paste the following content. This manifest defines an annotation for `container.apparmor.security.beta.kubernetes` add references the *deny-write* profile created in the previous steps:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: hello-apparmor
- annotations:
- container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
-spec:
- containers:
- - name: hello
- image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
- command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
-```
-
-Deploy the sample pod using the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f aks-apparmor.yaml
-```
-
-With the pod deployed, use verify the *hello-apparmor* pod shows as *blocked*:
-
-```
-$ kubectl get pods
-
-NAME READY STATUS RESTARTS AGE
-aks-ssh 1/1 Running 0 4m2s
-hello-apparmor 0/1 Blocked 0 50s
-```
+ file,
+ # Deny all file writes.
+ deny /** w,
+ }
+ ```
+
+AppArmor profiles are added using the `apparmor_parser` command.
+1. Add the profile to AppArmor.
+1. Specify the name of the profile created in the previous step:
+
+ ```console
+ sudo apparmor_parser deny-write.profile
+ ```
+
+ If the profile is correctly parsed and applied to AppArmor, you won't see any output and you'll be returned to the command prompt.
+
+1. From your local machine, create a pod manifest named *aks-apparmor.yaml*. This manifest:
+ * Defines an annotation for `container.apparmor.security.beta.kubernetes.io`.
+ * References the *deny-write* profile created in the previous steps.
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: hello-apparmor
+ annotations:
+ container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
+ spec:
+ containers:
+ - name: hello
+ image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+ command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
+ ```
+
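+1. Deploy the sample pod using the [kubectl apply][kubectl-apply] command:
+
+    ```console
+    kubectl apply -f aks-apparmor.yaml
+    ```
+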
+1. With the pod deployed, verify the *hello-apparmor* pod shows as *blocked*:
+
+ ```
+ $ kubectl get pods
+
+ NAME READY STATUS RESTARTS AGE
+ aks-ssh 1/1 Running 0 4m2s
+ hello-apparmor 0/1 Blocked 0 50s
+ ```
For more information about AppArmor, see [AppArmor profiles in Kubernetes][k8s-apparmor].

### Secure computing
-While AppArmor works for any Linux application, [seccomp (*sec*ure *comp*uting)][seccomp] works at the process level. Seccomp is also a Linux kernel security module, and is natively supported by the Docker runtime used by AKS nodes. With seccomp, the process calls that containers can perform are limited. You create filters that define what actions to allow or deny, and then use annotations within a pod YAML manifest to associate with the seccomp filter. This aligns to the best practice of only granting the container the minimal permissions that are needed to run, and no more.
+While AppArmor works for any Linux application, [seccomp (*sec*ure *comp*uting)][seccomp] works at the process level. Seccomp is also a Linux kernel security module, and is natively supported by the Docker runtime used by AKS nodes. With seccomp, you can limit container process calls. Align with the best practice of granting the container only the minimal permissions it needs to run by:
+* Using filters to define which actions to allow or deny.
+* Annotating a pod YAML manifest to associate it with the seccomp filter.
-To see seccomp in action, create a filter that prevents changing permissions on a file. [SSH][aks-ssh] to an AKS node, then create a seccomp filter named */var/lib/kubelet/seccomp/prevent-chmod* and paste the following content:
+To see seccomp in action, create a filter that prevents changing permissions on a file.
+1. [SSH][aks-ssh] to an AKS node.
+1. Create a seccomp filter named */var/lib/kubelet/seccomp/prevent-chmod* and paste the following content:
-```json
-{
- "defaultAction": "SCMP_ACT_ALLOW",
- "syscalls": [
- {
- "name": "chmod",
- "action": "SCMP_ACT_ERRNO"
- },
+ ```json
{
- "name": "fchmodat",
- "action": "SCMP_ACT_ERRNO"
- },
- {
- "name": "chmodat",
- "action": "SCMP_ACT_ERRNO"
+ "defaultAction": "SCMP_ACT_ALLOW",
+ "syscalls": [
+ {
+ "name": "chmod",
+ "action": "SCMP_ACT_ERRNO"
+ },
+ {
+ "name": "fchmodat",
+ "action": "SCMP_ACT_ERRNO"
+ },
+ {
+ "name": "chmodat",
+ "action": "SCMP_ACT_ERRNO"
+ }
+ ]
}
- ]
-}
-```
+ ```
-In version 1.19 and later, you need to configure the following:
+ In version 1.19 and later, you need to configure the following:
-```json
-{
- "defaultAction": "SCMP_ACT_ALLOW",
- "syscalls": [
+ ```json
{
- "names": ["chmod","fchmodat","chmodat"],
- "action": "SCMP_ACT_ERRNO"
+ "defaultAction": "SCMP_ACT_ALLOW",
+ "syscalls": [
+ {
+ "names": ["chmod","fchmodat","chmodat"],
+ "action": "SCMP_ACT_ERRNO"
+ }
+ ]
}
- ]
-}
-```
-
-From your local machine, now create a pod manifest named *aks-seccomp.yaml* and paste the following content. This manifest defines an annotation for `seccomp.security.alpha.kubernetes.io` and references the *prevent-chmod* filter created in the previous step:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: chmod-prevented
- annotations:
- seccomp.security.alpha.kubernetes.io/pod: localhost/prevent-chmod
-spec:
- containers:
- - name: chmod
- image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
- command:
- - "chmod"
- args:
- - "777"
- - /etc/hostname
- restartPolicy: Never
-```
-
-In version 1.19 and later, you need to configure the following:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: chmod-prevented
-spec:
- securityContext:
- seccompProfile:
- type: Localhost
- localhostProfile: prevent-chmod
- containers:
- - name: chmod
- image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
- command:
- - "chmod"
- args:
- - "777"
- - /etc/hostname
- restartPolicy: Never
-```
-
-Deploy the sample pod using the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f ./aks-seccomp.yaml
-```
-
-View the status of the pods using the [kubectl get pods][kubectl-get] command. The pod reports an error. The `chmod` command is prevented from running by the seccomp filter, as shown in the following example output:
-
-```
-$ kubectl get pods
-
-NAME READY STATUS RESTARTS AGE
-chmod-prevented 0/1 Error 0 7s
-```
+ ```
+
+1. From your local machine, create a pod manifest named *aks-seccomp.yaml* and paste the following content. This manifest:
+ * Defines an annotation for `seccomp.security.alpha.kubernetes.io`.
+ * References the *prevent-chmod* filter created in the previous step.
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: chmod-prevented
+ annotations:
+ seccomp.security.alpha.kubernetes.io/pod: localhost/prevent-chmod
+ spec:
+ containers:
+ - name: chmod
+ image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+ command:
+ - "chmod"
+ args:
+ - "777"
+ - /etc/hostname
+ restartPolicy: Never
+ ```
+
+ In version 1.19 and later, you need to configure the following:
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: chmod-prevented
+ spec:
+ securityContext:
+ seccompProfile:
+ type: Localhost
+ localhostProfile: prevent-chmod
+ containers:
+ - name: chmod
+ image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+ command:
+ - "chmod"
+ args:
+ - "777"
+ - /etc/hostname
+ restartPolicy: Never
+ ```
+
+1. Deploy the sample pod using the [kubectl apply][kubectl-apply] command:
+
+ ```console
+ kubectl apply -f ./aks-seccomp.yaml
+ ```
+
+1. View pod status using the [kubectl get pods][kubectl-get] command. The pod reports an error because the seccomp filter prevents the `chmod` command from running, as shown in the following example output:
+
+ ```
+ $ kubectl get pods
+
+ NAME READY STATUS RESTARTS AGE
+ chmod-prevented 0/1 Error 0 7s
+ ```
For more information about available filters, see [Seccomp security profiles for Docker][seccomp]. ## Regularly update to the latest version of Kubernetes
-**Best practice guidance** - To stay current on new features and bug fixes, regularly upgrade the Kubernetes version in your AKS cluster.
+> **Best practice guidance**
+>
+> To stay current on new features and bug fixes, regularly upgrade the Kubernetes version in your AKS cluster.
-Kubernetes releases new features at a quicker pace than more traditional infrastructure platforms. Kubernetes updates include new features, and bug or security fixes. New features typically move through an *alpha* and then *beta* status before they become *stable* and are generally available and recommended for production use. This release cycle should allow you to update Kubernetes without regularly encountering breaking changes or adjusting your deployments and templates.
+Kubernetes releases new features at a quicker pace than more traditional infrastructure platforms. Kubernetes updates include:
+* New features
+* Bug or security fixes
-AKS supports three minor versions of Kubernetes. This means that when a new minor patch version is introduced, the oldest minor version and patch releases supported are retired. Minor updates to Kubernetes happen on a periodic basis. Make sure that you have a governance process to check and upgrade as needed so you don't fall out of support. For more information, see [Supported Kubernetes versions AKS][aks-supported-versions].
+New features typically move through *alpha* and *beta* status before they become *stable*. Once stable, features are generally available and recommended for production use. This release cycle allows you to update Kubernetes without regularly encountering breaking changes or adjusting your deployments and templates.
+
+AKS supports three minor versions of Kubernetes. Once a new minor version is introduced, the oldest supported minor version and its patch releases are retired. Minor Kubernetes updates happen on a periodic basis. To stay within support, ensure you have a governance process to check for necessary upgrades. For more information, see [Supported Kubernetes versions AKS][aks-supported-versions].
To check the versions that are available for your cluster, use the [az aks get-upgrades][az-aks-get-upgrades] command as shown in the following example:
To check the versions that are available for your cluster, use the [az aks get-u
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster ```
-You can then upgrade your AKS cluster using the [az aks upgrade][az-aks-upgrade] command. The upgrade process safely cordons and drains one node at a time, schedules pods on remaining nodes, and then deploys a new node running the latest OS and Kubernetes versions.
+You can then upgrade your AKS cluster using the [az aks upgrade][az-aks-upgrade] command. The upgrade process safely:
+* Cordons and drains one node at a time.
+* Schedules pods on remaining nodes.
+* Deploys a new node running the latest OS and Kubernetes versions.
-It is highly recommended to test new minor versions in a dev test environment so you can validate your workload continues healthy operation with the new Kubernetes version. Kubernetes may deprecate APIs, such as in version 1.16, which could be relied on by your workloads. When bringing new versions into production, consider using [multiple node pools on separate versions](use-multiple-node-pools.md) and upgrade individual pools one at a time to progressively roll the update across a cluster. If running multiple clusters, upgrade one cluster at a time to progressively monitor for impact or changes.
-
-```azurecli-interactive
-az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version KUBERNETES_VERSION
-```
+>[!IMPORTANT]
+> Test new minor versions in a dev test environment and validate that your workload remains healthy with the new Kubernetes version.
+>
+> Kubernetes may deprecate APIs (like in version 1.16) that your workloads rely on. When bringing new versions into production, consider using [multiple node pools on separate versions](use-multiple-node-pools.md) and upgrade individual pools one at a time to progressively roll the update across a cluster. If running multiple clusters, upgrade one cluster at a time to progressively monitor for impact or changes.
+>
+>```azurecli-interactive
+>az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version KUBERNETES_VERSION
+>```
For more information about upgrades in AKS, see [Supported Kubernetes versions in AKS][aks-supported-versions] and [Upgrade an AKS cluster][aks-upgrade]. ## Process Linux node updates and reboots using kured
-**Best practice guidance** - AKS automatically downloads and installs security fixes on each Linux nodes, but does not automatically reboot if necessary. Use `kured` to watch for pending reboots, then safely cordon and drain the node to allow the node to reboot, apply the updates and be as secure as possible with respect to the OS. For Windows Server nodes, regularly perform an AKS upgrade operation to safely cordon and drain pods and deploy updated nodes.
+> **Best practice guidance**
+>
+> While AKS automatically downloads and installs security fixes on each Linux node, it does not automatically reboot when those fixes require it. Instead, use `kured` to watch for pending reboots, then safely cordon and drain the node so it can reboot, apply the updates, and stay as secure as possible with respect to the OS.
+
+For Windows Server nodes, regularly perform an AKS upgrade operation to safely cordon and drain pods and deploy updated nodes.
-Each evening, Linux nodes in AKS get security patches available through their distro update channel. This behavior is configured automatically as the nodes are deployed in an AKS cluster. To minimize disruption and potential impact to running workloads, nodes are not automatically rebooted if a security patch or kernel update requires it.
+Each evening, Linux nodes in AKS get security patches through their distro update channel. This behavior is automatically configured as the nodes are deployed in an AKS cluster. To minimize disruption and potential impact to running workloads, nodes are not automatically rebooted if a security patch or kernel update requires it.
-The open-source [kured (KUbernetes REboot Daemon)][kured] project by Weaveworks watches for pending node reboots. When a Linux node applies updates that require a reboot, the node is safely cordoned and drained to move and schedule the pods on other nodes in the cluster. Once the node is rebooted, it is added back into the cluster and Kubernetes resumes scheduling pods on it. To minimize disruption, only one node at a time is permitted to be rebooted by `kured`.
+The open-source [kured (KUbernetes REboot Daemon)][kured] project by Weaveworks watches for pending node reboots. When a Linux node applies updates that require a reboot, the node is safely cordoned and drained to move and schedule the pods on other nodes in the cluster. Once the node is rebooted, it is added back into the cluster and Kubernetes resumes pod scheduling. To minimize disruption, only one node at a time is permitted to be rebooted by `kured`.
![The AKS node reboot process using kured](media/operator-best-practices-cluster-security/node-reboot-process.png)
-If you want finer grain control over when reboots happen, `kured` can integrate with Prometheus to prevent reboots if there are other maintenance events or cluster issues in progress. This integration minimizes additional complications by rebooting nodes while you are actively troubleshooting other issues.
+If you want even closer control over reboots, `kured` can integrate with Prometheus to prevent reboots if there are other maintenance events or cluster issues in progress. This integration avoids the complication of rebooting nodes while you are actively troubleshooting other issues.
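+
+As an illustration, `kured` is commonly installed with its Helm chart; the following is a minimal sketch, and the repository URL, chart name, and namespace are assumptions to verify against the kured documentation:
+
+```console
+# Add the kured chart repository, then install the reboot daemon as a DaemonSet (assumed chart location).
+helm repo add kured https://weaveworks.github.io/kured
+helm install kured kured/kured --namespace kured --create-namespace
+```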
For more information about how to handle node reboots, see [Apply security and kernel updates to nodes in AKS][aks-kured].
aks Operator Best Practices Container Image Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-container-image-management.md
Title: Operator best practices - Container image management in Azure Kubernetes
description: Learn the cluster operator best practices for how to manage and secure container images in Azure Kubernetes Service (AKS) Previously updated : 12/06/2018 Last updated : 03/11/2021 # Best practices for container image management and security in Azure Kubernetes Service (AKS)
-As you develop and run applications in Azure Kubernetes Service (AKS), the security of your containers and container images is a key consideration. Containers that include out of date base images or unpatched application runtimes introduce a security risk and possible attack vector. To minimize these risks, you should integrate tools that scan for and remediate issues in your containers at build time as well as runtime. The earlier in the process the vulnerability or out of date base image is caught, the more secure the cluster. In this article, *containers* means both the container images stored in a container registry, and the running containers.
+Container and container image security is a major priority while you develop and run applications in Azure Kubernetes Service (AKS). Containers with outdated base images or unpatched application runtimes introduce a security risk and possible attack vector.
+
+Minimize risks by integrating and running scan and remediation tools in your containers at build and runtime. The earlier you catch the vulnerability or outdated base image, the more secure your cluster.
+
+In this article, *"containers"* means both:
+* The container images stored in a container registry.
+* The running containers.
This article focuses on how to secure your containers in AKS. You learn how to: > [!div class="checklist"]
-> * Scan for and remediate image vulnerabilities
-> * Automatically trigger and redeploy container images when a base image is updated
+> * Scan for and remediate image vulnerabilities.
+> * Automatically trigger and redeploy container images when a base image is updated.
You can also read the best practices for [cluster security][best-practices-cluster-security] and for [pod security][best-practices-pod-security].
-You can also use [Container security in Security Center][security-center-containers] to help scan your containers for vulnerabilities. There is also [Azure Container Registry integration][security-center-acr] with Security Center to help protect your images and registry from vulnerabilities.
+You can also use [Container security in Security Center][security-center-containers] to help scan your containers for vulnerabilities. [Azure Container Registry integration][security-center-acr] with Security Center helps protect your images and registry from vulnerabilities.
## Secure the images and run time
-**Best practice guidance** - Scan your container images for vulnerabilities, and only deploy images that have passed validation. Regularly update the base images and application runtime, then redeploy workloads in the AKS cluster.
+> **Best practice guidance**
+>
+> Scan your container images for vulnerabilities. Only deploy validated images. Regularly update the base images and application runtime. Redeploy workloads in the AKS cluster.
-One concern with the adoption of container-based workloads is verifying the security of images and runtime used to build your own applications. How do you make sure that you don't introduce security vulnerabilities into your deployments? Your deployment workflow should include a process to scan container images using tools such as [Twistlock][twistlock] or [Aqua][aqua], and then only allow verified images to be deployed.
+When adopting container-based workloads, you'll want to verify the security of images and runtime used to build your own applications. How do you avoid introducing security vulnerabilities into your deployments?
+* Include in your deployment workflow a process to scan container images using tools such as [Twistlock][twistlock] or [Aqua][aqua].
+* Only allow verified images to be deployed.
![Scan and remediate container images, validate, and deploy](media/operator-best-practices-container-security/scan-container-images-simplified.png)
-In a real-world example, you can use a continuous integration and continuous deployment (CI/CD) pipeline to automate the image scans, verification, and deployments. Azure Container Registry includes these vulnerabilities scanning capabilities.
+For example, you can use a continuous integration and continuous deployment (CI/CD) pipeline to automate the image scans, verification, and deployments. Azure Container Registry includes these vulnerability scanning capabilities.
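+
+As a sketch of such a pipeline, the following hypothetical Azure Pipelines snippet builds and pushes an image for a scan gate to validate; the service connection and repository names are placeholders, and the scan step itself depends on the tool you choose:
+
+```yaml
+# Hypothetical pipeline step: build the image and push it to a registry for scanning.
+trigger:
+  - main
+steps:
+  - task: Docker@2
+    displayName: Build and push the container image
+    inputs:
+      command: buildAndPush
+      containerRegistry: myAcrServiceConnection  # placeholder service connection
+      repository: myapp
+      tags: $(Build.BuildId)
+```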
## Automatically build new images on base image update
-**Best practice guidance** - As you use base images for application images, use automation to build new images when the base image is updated. As those base images typically include security fixes, update any downstream application container images.
+> **Best practice guidance**
+>
+> As you use base images for application images, use automation to build new images when the base image is updated. Since updated base images typically include security fixes, update any downstream application container images.
-Each time a base image is updated, any downstream container images should also be updated. This build process should be integrated into validation and deployment pipelines such as [Azure Pipelines][azure-pipelines] or Jenkins. These pipelines makes sure that your applications continue to run on the updated based images. Once your application container images are validated, the AKS deployments can then be updated to run the latest, secure images.
+Each time a base image is updated, you should also update any downstream container images. Integrate this build process into validation and deployment pipelines such as [Azure Pipelines][azure-pipelines] or Jenkins. These pipelines make sure that your applications continue to run on the updated base images. Once your application container images are validated, the AKS deployments can then be updated to run the latest, secure images.
-Azure Container Registry Tasks can also automatically update container images when the base image is updated. This feature allows you to build a small number of base images, and regularly keep them updated with bug and security fixes.
+Azure Container Registry Tasks can also automatically update container images when the base image is updated. With this feature, you build a few base images and keep them updated with bug and security fixes.
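+
+For illustration, an ACR task can be created from the Azure CLI; this is a sketch with placeholder registry, repository, and token values:
+
+```azurecli-interactive
+# Create an ACR task that rebuilds the image on code changes and, by default, on base image updates.
+az acr task create \
+    --registry myContainerRegistry \
+    --name buildapp \
+    --image myapp:{{.Run.ID}} \
+    --context https://github.com/<your-org>/<your-repo>.git \
+    --file Dockerfile \
+    --git-access-token <access-token>
+```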
For more information about base image updates, see [Automate image builds on base image update with Azure Container Registry Tasks][acr-base-image-update].
aks Operator Best Practices Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-identity.md
description: Learn the cluster operator best practices for how to manage authentication and authorization for clusters in Azure Kubernetes Service (AKS) Previously updated : 07/07/2020 Last updated : 03/09/2021
# Best practices for authentication and authorization in Azure Kubernetes Service (AKS)
-As you deploy and maintain clusters in Azure Kubernetes Service (AKS), you need to implement ways to manage access to resources and services. Without these controls, accounts may have access to resources and services they don't need. It can also be hard to track which set of credentials were used to make changes.
+As you deploy and maintain clusters in Azure Kubernetes Service (AKS), you implement ways to manage access to resources and services. Without these controls:
+* Accounts could have access to unnecessary resources and services.
+* Tracking which set of credentials were used to make changes could be difficult.
This best practices article focuses on how a cluster operator can manage access and identity for AKS clusters. In this article, you learn how to: > [!div class="checklist"] >
-> * Authenticate AKS cluster users with Azure Active Directory
-> * Control access to resources with Kubernetes role-based access control (Kubernetes RBAC)
-> * Use Azure RBAC to granularly control access to the AKS resource and the Kubernetes API at scale, as well as to the kubeconfig.
-> * Use a managed identity to authenticate pods themselves with other services
+> * Authenticate AKS cluster users with Azure Active Directory.
+> * Control access to resources with Kubernetes role-based access control (Kubernetes RBAC).
+> * Use Azure RBAC to granularly control access to the AKS resource, the Kubernetes API at scale, and the `kubeconfig`.
+> * Use a managed identity to authenticate pods themselves with other services.
-## Use Azure Active Directory
+## Use Azure Active Directory (Azure AD)
-**Best practice guidance** - Deploy AKS clusters with Azure AD integration. Using Azure AD centralizes the identity management component. Any change in user account or group status is automatically updated in access to the AKS cluster. Use Roles or ClusterRoles and Bindings, as discussed in the next section, to scope users or groups to least amount of permissions needed.
+> **Best practice guidance**
+>
+> Deploy AKS clusters with Azure AD integration. Using Azure AD centralizes the identity management component. Any change in user account or group status is automatically updated in access to the AKS cluster. Scope users or groups to the minimum amount of permissions needed using [Roles, ClusterRoles, or Bindings](#use-kubernetes-role-based-access-control-kubernetes-rbac).
-The developers and application owners of your Kubernetes cluster need access to different resources. Kubernetes doesn't provide an identity management solution to control which users can interact with what resources. Instead, you typically integrate your cluster with an existing identity solution. Azure Active Directory (AD) provides an enterprise-ready identity management solution, and can integrate with AKS clusters.
+Your Kubernetes cluster developers and application owners need access to different resources. Kubernetes lacks an identity management solution for you to control the resources with which users can interact. Instead, you typically integrate your cluster with an existing identity solution. Enter Azure AD: an enterprise-ready identity management solution that integrates with AKS clusters.
-With Azure AD-integrated clusters in AKS, you create *Roles* or *ClusterRoles* that define access permissions to resources. You then *bind* the roles to users or groups from Azure AD. These Kubernetes role-based access control (Kubernetes RBAC) are discussed in the next section. The integration of Azure AD and how you control access to resources can be seen in the following diagram:
+With Azure AD-integrated clusters in AKS, you create *Roles* or *ClusterRoles* defining access permissions to resources. You then *bind* the roles to users or groups from Azure AD. Learn more about Kubernetes RBAC in [the next section](#use-kubernetes-role-based-access-control-kubernetes-rbac). The following diagram shows the Azure AD integration and how you control access to resources:
![Cluster-level authentication for Azure Active Directory integration with AKS](media/operator-best-practices-identity/cluster-level-authentication-flow.png) 1. Developer authenticates with Azure AD. 1. The Azure AD token issuance endpoint issues the access token.
-1. The developer does an action using the Azure AD token, such as `kubectl create pod`
-1. Kubernetes validates the token with Azure Active Directory and fetches the developer's group memberships.
-1. Kubernetes role-based access control (Kubernetes RBAC) and cluster policies are applied.
-1. Developer's request is successful or not based on previous validation of Azure AD group membership and Kubernetes RBAC and policies.
+1. The developer performs an action using the Azure AD token, such as `kubectl create pod`.
+1. Kubernetes validates the token with Azure AD and fetches the developer's group memberships.
+1. Kubernetes RBAC and cluster policies are applied.
+1. The developer's request succeeds or fails based on the previous validation of Azure AD group membership and Kubernetes RBAC and policies.
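+
+For illustration, you can create a cluster with AKS-managed Azure AD integration from the Azure CLI; this is a sketch, with the admin group object ID as a placeholder:
+
+```azurecli-interactive
+# Create an AKS cluster with Azure AD integration enabled (placeholder group object ID).
+az aks create \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --enable-aad \
+    --aad-admin-group-object-ids <aad-group-object-id>
+```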
To create an AKS cluster that uses Azure AD, see [Integrate Azure Active Directory with AKS][aks-aad]. ## Use Kubernetes role-based access control (Kubernetes RBAC)
-**Best practice guidance** - Use Kubernetes RBAC to define the permissions that users or groups have to resources in the cluster. Create roles and bindings that assign the least amount of permissions required. Integrate with Azure AD so any change in user status or group membership is automatically updated and access to cluster resources is current.
+> **Best practice guidance**
+>
+> Define user or group permissions to cluster resources with Kubernetes RBAC. Create roles and bindings that assign the least amount of permissions required. Integrate with Azure AD to automatically update any user status or group membership change and keep access to cluster resources current.
-In Kubernetes, you may provide granular control of access to resources in the cluster. Permissions are defined at the cluster level, or to specific namespaces. You can define what resources can be managed, and with what permissions. These roles are then applied to users or groups with a binding. For more information about *Roles*, *ClusterRoles*, and *Bindings*, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-concepts-identity].
+In Kubernetes, you provide granular access control to cluster resources. You define permissions at the cluster level, or to specific namespaces. You determine what resources can be managed and with what permissions. You then apply these roles to users or groups with a binding. For more information about *Roles*, *ClusterRoles*, and *Bindings*, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-concepts-identity].
-As an example, you can create a Role that grants full access to resources in the namespace named *finance-app*, as shown in the following example YAML manifest:
+As an example, you create a role with full access to resources in the namespace named *finance-app*, as shown in the following example YAML manifest:
```yaml kind: Role
rules:
verbs: ["*"] ```
-A RoleBinding is then created that binds the Azure AD user *developer1\@contoso.com* to the RoleBinding, as shown in the following YAML manifest:
+You then create a RoleBinding and bind the Azure AD user *developer1\@contoso.com* to the RoleBinding, as shown in the following YAML manifest:
```yaml kind: RoleBinding
roleRef:
apiGroup: rbac.authorization.k8s.io ```
-When *developer1\@contoso.com* is authenticated against the AKS cluster, they have full permissions to resources in the *finance-app* namespace. In this way, you logically separate and control access to resources. Kubernetes RBAC should be used in conjunction with Azure AD-integration, as discussed in the previous section.
+When *developer1\@contoso.com* is authenticated against the AKS cluster, they have full permissions to resources in the *finance-app* namespace. In this way, you logically separate and control access to resources. Use Kubernetes RBAC in conjunction with Azure AD integration.
To see how to use Azure AD groups to control access to Kubernetes resources using Kubernetes RBAC, see [Control access to cluster resources using role-based access control and Azure Active Directory identities in AKS][azure-ad-rbac]. ## Use Azure RBAC
-**Best practice guidance** - Use Azure RBAC to define the minimum required permissions that users or groups have to AKS resources in one or more subscriptions.
+> **Best practice guidance**
+>
+> Use Azure RBAC to define the minimum required user and group permissions to AKS resources in one or more subscriptions.
There are two levels of access needed to fully operate an AKS cluster:
-1. Access the AKS resource on your Azure subscription. This access level allows you to control things scaling or upgrading your cluster using the AKS APIs as well as pull your kubeconfig.
-To see how to control access to the AKS resource and the kubeconfig, see [Limit access to cluster configuration file](control-kubeconfig-access.md).
+1. Access the AKS resource on your Azure subscription.
-2. Access to the Kubernetes API. This access level is controlled either by [Kubernetes RBAC](#use-kubernetes-role-based-access-control-kubernetes-rbac) (traditionally) or by integrating Azure RBAC with AKS for kubernetes authorization.
-To see how to granularly give permissions to the Kubernetes API using Azure RBAC see [Use Azure RBAC for Kubernetes authorization](manage-azure-rbac.md).
+ This access level allows you to:
+ * Control scaling or upgrading your cluster using the AKS APIs.
+ * Pull your `kubeconfig`.
+
+ To see how to control access to the AKS resource and the `kubeconfig`, see [Limit access to cluster configuration file](control-kubeconfig-access.md).
+
+2. Access to the Kubernetes API.
+
+ This access level is controlled either by:
+ * [Kubernetes RBAC](#use-kubernetes-role-based-access-control-kubernetes-rbac) (traditionally), or
+ * Azure RBAC integrated with AKS for Kubernetes authorization.
+
+ To see how to granularly give permissions to the Kubernetes API using Azure RBAC, see [Use Azure RBAC for Kubernetes authorization](manage-azure-rbac.md).
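+
+For illustration, you can grant a user or group one of the built-in AKS roles with a role assignment; this is a sketch with placeholder object and subscription IDs:
+
+```azurecli-interactive
+# Assign a built-in Azure RBAC role for Kubernetes authorization, scoped to the cluster.
+az role assignment create \
+    --assignee <user-or-group-object-id> \
+    --role "Azure Kubernetes Service RBAC Reader" \
+    --scope /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster
+```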
## Use Pod-managed Identities
-**Best practice guidance** - Don't use fixed credentials within pods or container images, as they are at risk of exposure or abuse. Instead, use pod identities to automatically request access using a central Azure AD identity solution. Pod identities are intended for use with Linux pods and container images only.
+> **Best practice guidance**
+>
+> Don't use fixed credentials within pods or container images, as they are at risk of exposure or abuse. Instead, use *pod identities* to automatically request access using a central Azure AD identity solution.
> [!NOTE]
-> Pod-managed identities support for Windows containers is coming soon.
+> Pod identities are intended for use with Linux pods and container images only. Pod-managed identities support for Windows containers is coming soon.
+
+To access other Azure services, like Cosmos DB, Key Vault, or Blob Storage, the pod needs access credentials. You could define access credentials with the container image or inject them as a Kubernetes secret. Either way, you would need to manually create and assign them. Usually, these credentials are reused across pods and aren't regularly rotated.
-When pods need access to other Azure services, such as Cosmos DB, Key Vault, or Blob Storage, the pod needs access credentials. These access credentials could be defined with the container image or injected as a Kubernetes secret, but need to be manually created and assigned. Often, the credentials are reused across pods, and aren't regularly rotated.
+With pod-managed identities for Azure resources, you automatically request access to services through Azure AD. Pod-managed identities are currently in preview for AKS. To get started, see the [Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) documentation.
-Pod-managed identities for Azure resources lets you automatically request access to services through Azure AD. Pod-managed identities is now currently in preview for Azure Kubernetes Service. Please refer to the [Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)]( https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) documentation to get started. With Pod-managed identities, you do not manually define credentials for pods, instead they request an access token in real time, and can use it to access only their assigned services. In AKS, there are two components that handle the operations to allow pods to use managed identities:
+Instead of manually defining credentials for pods, pod-managed identities request an access token in real time, using it to access only their assigned services. In AKS, there are two components that handle the operations to allow pods to use managed identities:
* **The Node Management Identity (NMI) server** is a pod that runs as a DaemonSet on each node in the AKS cluster. The NMI server listens for pod requests to Azure services. * **The Azure Resource Provider** queries the Kubernetes API server and checks for an Azure identity mapping that corresponds to a pod.
-When pods request access to an Azure service, network rules redirect the traffic to the Node Management Identity (NMI) server. The NMI server identifies pods that request access to Azure services based on their remote address, and queries the Azure Resource Provider. The Azure Resoruce Provider checks for Azure identity mappings in the AKS cluster, and the NMI server then requests an access token from Azure Active Directory (AD) based on the pod's identity mapping. Azure AD provides access to the NMI server, which is returned to the pod. This access token can be used by the pod to then request access to services in Azure.
+When pods request access to an Azure service, network rules redirect the traffic to the NMI server.
+1. The NMI server:
+ * Identifies pods requesting access to Azure services based on their remote address.
+ * Queries the Azure Resource Provider.
+1. The Azure Resource Provider checks for Azure identity mappings in the AKS cluster.
+1. The NMI server requests an access token from Azure AD based on the pod's identity mapping.
+1. Azure AD provides the access token to the NMI server, which returns it to the pod.
+1. The pod then uses the access token to request access to services in Azure.
In the following example, a developer creates a pod that uses a managed identity to request access to Azure SQL Database: ![Pod identities allow a pod to automatically request access to other services](media/operator-best-practices-identity/pod-identities.png)
-1. Cluster operator first creates a service account that can be used to map identities when pods request access to services.
+1. Cluster operator creates a service account to map identities when pods request access to services.
1. The NMI server is deployed to relay any pod requests, along with the Azure Resource Provider, for access tokens to Azure AD. 1. A developer deploys a pod with a managed identity that requests an access token through the NMI server. 1. The token is returned to the pod and used to access Azure SQL Database
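+
+As a minimal sketch of such an identity mapping, the following uses the open-source aad-pod-identity CRDs that back this preview; the names, resource ID, and selector are placeholders:
+
+```yaml
+apiVersion: "aadpodidentity.k8s.io/v1"
+kind: AzureIdentity
+metadata:
+  name: my-azure-identity
+spec:
+  type: 0  # 0 = user-assigned managed identity
+  resourceID: /subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
+  clientID: <identity-client-id>
+---
+apiVersion: "aadpodidentity.k8s.io/v1"
+kind: AzureIdentityBinding
+metadata:
+  name: my-azure-identity-binding
+spec:
+  azureIdentity: my-azure-identity
+  selector: my-app  # pods labeled with aadpodidbinding: my-app use this identity
+```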
aks Operator Best Practices Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-multi-region.md
description: Learn a cluster operator's best practices to achieve maximum uptime
Previously updated : 11/28/2018 Last updated : 03/11/2021 #Customer intent: As an AKS cluster operator, I want to plan for business continuity or disaster recovery to help protect my cluster from region problems.
This article focuses on how to plan for business continuity and disaster recover
## Plan for multiregion deployment
-**Best practice**: When you deploy multiple AKS clusters, choose regions where AKS is available, and use paired regions.
+> **Best practice**
+>
+> When you deploy multiple AKS clusters, choose regions where AKS is available. Use paired regions.
-An AKS cluster is deployed into a single region. To protect your system from region failure, deploy your application into multiple AKS clusters across different regions. When you plan where to deploy your AKS cluster, consider:
+An AKS cluster is deployed into a single region. To protect your system from region failure, deploy your application into multiple AKS clusters across different regions. When planning where to deploy your AKS cluster, consider:
-* [**AKS region availability**](./quotas-skus-regions.md#region-availability): Choose regions close to your users. AKS continually expands into new regions.
-* [**Azure paired regions**](../best-practices-availability-paired-regions.md):
-For your geographic area, choose two regions that are paired with each other. Paired regions coordinate platform updates and prioritize recovery efforts where needed.
-* **Service availability**: Decide whether your paired regions should be hot/hot, hot/warm, or hot/cold. Do you want to run both regions at the same time, with one region *ready* to start serving traffic? Or do you want one region to have time to get ready to serve traffic?
+* [**AKS region availability**](./quotas-skus-regions.md#region-availability)
+ * Choose regions close to your users.
+ * AKS continually expands into new regions.
+* [**Azure paired regions**](../best-practices-availability-paired-regions.md)
+ * For your geographic area, choose two regions paired together.
+ * Paired regions coordinate platform updates and prioritize recovery efforts where needed.
+* **Service availability**
+ * Decide whether your paired regions should be hot/hot, hot/warm, or hot/cold.
+ * Do you want to run both regions at the same time, with one region *ready* to start serving traffic? Or,
+ * Do you want to give one region time to get ready to serve traffic?
-AKS region availability and paired regions are a joint consideration. Deploy your AKS clusters into paired regions that are designed to manage region disaster recovery together. For example, AKS is available in East US and West US. These regions are paired. Choose these two regions when you're creating an AKS BC/DR strategy.
+AKS region availability and paired regions are a joint consideration. Deploy your AKS clusters into paired regions designed to manage region disaster recovery together. For example, AKS is available in East US and West US. These regions are paired. Choose these two regions when you're creating an AKS BC/DR strategy.
-When you deploy your application, add another step to your CI/CD pipeline to deploy to these multiple AKS clusters. If you don't update your deployment pipelines, applications might be deployed into only one of your regions and AKS clusters. Customer traffic that's directed to a secondary region won't receive the latest code updates.
+When you deploy your application, add another step to your CI/CD pipeline to deploy to these multiple AKS clusters. If you don't update your deployment pipelines, applications might be deployed into only one of your regions and AKS clusters, and customer traffic directed to a secondary region won't receive the latest code updates.
## Use Azure Traffic Manager to route traffic
-**Best practice**: Azure Traffic Manager can direct customers to their closest AKS cluster and application instance. For the best performance and redundancy, direct all application traffic through Traffic Manager before it goes to your AKS cluster.
+> **Best practice**
+>
+> Azure Traffic Manager can direct customers to their closest AKS cluster and application instance. For the best performance and redundancy, direct all application traffic through Traffic Manager before it goes to your AKS cluster.
-If you have multiple AKS clusters in different regions, use Traffic Manager to control how traffic flows to the applications that run in each cluster. [Azure Traffic Manager](../traffic-manager/index.yml) is a DNS-based traffic load balancer that can distribute network traffic across regions. Use Traffic Manager to route users based on cluster response time or based on geography.
+If you have multiple AKS clusters in different regions, use Traffic Manager to control traffic flow to the applications running in each cluster. [Azure Traffic Manager](../traffic-manager/index.yml) is a DNS-based traffic load balancer that can distribute network traffic across regions. Use Traffic Manager to route users based on cluster response time or based on geography.
![AKS with Traffic Manager](media/operator-best-practices-bc-dr/aks-azure-traffic-manager.png)
-Customers who have a single AKS cluster typically connect to the service IP or DNS name of a given application. In a multicluster deployment, customers should connect to a Traffic Manager DNS name that points to the services on each AKS cluster. Define these services by using Traffic Manager endpoints. Each endpoint is the *service load balancer IP*. Use this configuration to direct network traffic from the Traffic Manager endpoint in one region to the endpoint in a different region.
+If you have a single AKS cluster, you typically connect to the service IP or DNS name of a given application. In a multi-cluster deployment, you should connect to a Traffic Manager DNS name that points to the services on each AKS cluster. Define these services by using Traffic Manager endpoints. Each endpoint is the *service load balancer IP*. Use this configuration to direct network traffic from the Traffic Manager endpoint in one region to the endpoint in a different region.
![Geographic routing through Traffic Manager](media/operator-best-practices-bc-dr/traffic-manager-geographic-routing.png)
-Traffic Manager performs DNS lookups and returns a user's most appropriate endpoint. Nested profiles can prioritize a primary location. For example, users should generally connect to their closest geographic region. If that region has a problem, Traffic Manager instead directs the users to a secondary region. This approach ensures that customers can connect to an application instance even if their closest geographic region is unavailable.
+Traffic Manager performs DNS lookups and returns your most appropriate endpoint. Nested profiles can prioritize a primary location. For example, you generally connect to your closest geographic region. If that region has a problem, Traffic Manager directs you to a secondary region. This approach ensures that you can connect to an application instance even if your closest geographic region is unavailable.
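+
+For illustration, a performance-routed profile with one endpoint per region might look like the following sketch; the profile name, DNS prefix, and target IP are placeholders:
+
+```azurecli-interactive
+# Create a Traffic Manager profile that routes users to the lowest-latency endpoint.
+az network traffic-manager profile create \
+    --resource-group myResourceGroup \
+    --name myAKSProfile \
+    --routing-method Performance \
+    --unique-dns-name myaksapp
+
+# Add the service load balancer IP of one regional AKS cluster as an endpoint.
+az network traffic-manager endpoint create \
+    --resource-group myResourceGroup \
+    --profile-name myAKSProfile \
+    --name eastus-endpoint \
+    --type externalEndpoints \
+    --endpoint-location eastus \
+    --target <service-load-balancer-ip>
+```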
For information on how to set up endpoints and routing, see [Configure the geographic traffic routing method by using Traffic Manager](../traffic-manager/traffic-manager-configure-geographic-routing-method.md). ### Application routing with Azure Front Door Service
-Using split TCP-based anycast protocol, [Azure Front Door Service](../frontdoor/front-door-overview.md) ensures that your end users promptly connect to the nearest Front Door POP (Point of Presence). Additional features of Azure Front Door Service include TLS termination, custom domain, web application firewall, URL Rewrite, and session affinity. Review the needs of your application traffic to understand which solution is the most suitable.
+Using split TCP-based anycast protocol, [Azure Front Door Service](../frontdoor/front-door-overview.md) promptly connects your end users to the nearest Front Door POP (Point of Presence). More features of Azure Front Door Service:
+* TLS termination
+* Custom domain
+* Web application firewall
+* URL Rewrite
+* Session affinity
+
+Review the needs of your application traffic to understand which solution is the most suitable.
### Interconnect regions with global virtual network peering
-If the clusters need to talk to each other, connecting both virtual networks to each other can be achieved through [virtual network peering](../virtual-network/virtual-network-peering-overview.md). This technology interconnects virtual networks to each other providing high bandwidth across Microsoft's backbone network, even across different geographic regions.
+Connect both virtual networks to each other through [virtual network peering](../virtual-network/virtual-network-peering-overview.md) to enable communication between clusters. Virtual network peering interconnects virtual networks, providing high bandwidth across Microsoft's backbone network - even across different geographic regions.
-A prerequisite to peer the virtual networks where AKS clusters are running is to use the standard Load Balancer in your AKS cluster, so that Kubernetes services are reachable across the virtual network peering.
+Before peering virtual networks with running AKS clusters, use the standard Load Balancer in your AKS cluster. This prerequisite makes Kubernetes services reachable across the virtual network peering.
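+
+For illustration, peering two cluster virtual networks from the Azure CLI might look like the following sketch; names and the remote network ID are placeholders, and a second peering is needed in the reverse direction:
+
+```azurecli-interactive
+# Peer the East US cluster network to the West US cluster network (placeholder IDs).
+az network vnet peering create \
+    --resource-group myResourceGroup-eastus \
+    --vnet-name myAKSVnet-eastus \
+    --name peer-east-to-west \
+    --remote-vnet /subscriptions/<subscription-id>/resourceGroups/myResourceGroup-westus/providers/Microsoft.Network/virtualNetworks/myAKSVnet-westus \
+    --allow-vnet-access
+```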
## Enable geo-replication for container images
-**Best practice**: Store your container images in Azure Container Registry and geo-replicate the registry to each AKS region.
+> **Best practice**
+>
+> Store your container images in Azure Container Registry and geo-replicate the registry to each AKS region.
To deploy and run your applications in AKS, you need a way to store and pull the container images. Container Registry integrates with AKS, so it can securely store your container images or Helm charts. Container Registry supports multimaster geo-replication to automatically replicate your images to Azure regions around the world.
-To improve performance and availability, use Container Registry geo-replication to create a registry in each region where you have an AKS cluster. Each AKS cluster then pulls container images from the local container registry in the same region:
+To improve performance and availability:
+1. Use Container Registry geo-replication to create a registry in each region where you have an AKS cluster.
+1. Each AKS cluster then pulls container images from the local container registry in the same region:
![Container Registry geo-replication for container images](media/operator-best-practices-bc-dr/acr-geo-replication.png) When you use Container Registry geo-replication to pull images from the same region, the results are:
-* **Faster**: You pull images from high-speed, low-latency network connections within the same Azure region.
+* **Faster**: Pull images from high-speed, low-latency network connections within the same Azure region.
* **More reliable**: If a region is unavailable, your AKS cluster pulls the images from an available container registry.
-* **Cheaper**: There's no network egress charge between datacenters.
+* **Cheaper**: No network egress charge between datacenters.
-Geo-replication is a feature of *Premium* SKU container registries. For information on how to configure geo-replication, see [Container Registry geo-replication](../container-registry/container-registry-geo-replication.md).
+Geo-replication is a *Premium* SKU container registry feature. For information on how to configure geo-replication, see [Container Registry geo-replication](../container-registry/container-registry-geo-replication.md).
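+
+For illustration, adding a replica to a *Premium* registry takes a single command; the registry name and location are placeholders:
+
+```azurecli-interactive
+# Replicate the registry to a second region so local clusters pull images locally.
+az acr replication create --registry myContainerRegistry --location westus
+```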
## Remove service state from inside containers
-**Best practice**: Where possible, don't store service state inside the container. Instead, use an Azure platform as a service (PaaS) that supports multiregion replication.
+> **Best practice**
+>
+> Avoid storing service state inside the container. Instead, use an Azure platform as a service (PaaS) that supports multi-region replication.
-*Service state* refers to the in-memory or on-disk data that a service requires to function. State includes the data structures and member variables that the service reads and writes. Depending on how the service is architected, the state might also include files or other resources that are stored on the disk. For example, the state might include the files a database uses to store data and transaction logs.
+*Service state* refers to the in-memory or on-disk data required by a service to function. State includes the data structures and member variables that the service reads and writes. Depending on how the service is architected, the state might also include files or other resources stored on the disk. For example, the state might include the files a database uses to store data and transaction logs.
-State can be either externalized or colocated with the code that manipulates the state. Typically, you externalize state by using a database or other data store that runs on different machines over the network or that runs out of process on the same machine.
+State can be either externalized or co-located with the code that manipulates the state. Typically, you externalize state by using a database or other data store that runs on different machines over the network or that runs out of process on the same machine.
-Containers and microservices are most resilient when the processes that run inside them don't retain state. Because applications almost always contain some state, use a PaaS solution such as Azure Cosmos DB, Azure Database for PostgreSQL, Azure Database for MySQL or Azure SQL Database.
+Containers and microservices are most resilient when the processes that run inside them don't retain state. Since applications almost always contain some state, use a PaaS solution, such as:
+* Azure Cosmos DB
+* Azure Database for PostgreSQL
+* Azure Database for MySQL
+* Azure SQL Database
To build portable applications, see the following guidelines:
To build portable applications, see the following guidelines:
## Create a storage migration plan
-**Best practice**: If you use Azure Storage, prepare and test how to migrate your storage from the primary region to the backup region.
+> **Best practice**
+>
+> If you use Azure Storage, prepare and test how to migrate your storage from the primary region to the backup region.
-Your applications might use Azure Storage for their data. Because your applications are spread across multiple AKS clusters in different regions, you need to keep the storage synchronized. Here are two common ways to replicate storage:
+Your applications might use Azure Storage for their data. Since your applications are spread across multiple AKS clusters in different regions, you need to keep the storage synchronized. Here are two common ways to replicate storage:
* Infrastructure-based asynchronous replication * Application-based asynchronous replication
Your applications might use Azure Storage for their data. Because your applicati
Your applications might require persistent storage even after a pod is deleted. In Kubernetes, you can use persistent volumes to persist data storage. Persistent volumes are mounted to a node VM and then exposed to the pods. Persistent volumes follow pods even if the pods are moved to a different node inside the same cluster.
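+
+As a brief illustration, the following sketch claims a persistent volume through the built-in *managed-premium* storage class; the claim name and size are placeholders:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: app-data
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: managed-premium
+  resources:
+    requests:
+      storage: 5Gi
+```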
-The replication strategy you use depends on your storage solution. Common storage solutions such as [Gluster](https://docs.gluster.org/en/latest/Administrator-Guide/Geo-Replication/), [Ceph](https://docs.ceph.com/docs/master/cephfs/disaster-recovery/), [Rook](https://rook.io/docs/rook/v1.2/ceph-disaster-recovery.html), and [Portworx](https://docs.portworx.com/scheduler/kubernetes/going-production-with-k8s.html#disaster-recovery-with-cloudsnaps) provide their own guidance about disaster recovery and replication.
+The replication strategy you use depends on your storage solution. The following common storage solutions provide their own guidance about disaster recovery and replication:
+* [Gluster](https://docs.gluster.org/en/latest/Administrator-Guide/Geo-Replication/)
+* [Ceph](https://docs.ceph.com/docs/master/cephfs/disaster-recovery/)
+* [Rook](https://rook.io/docs/rook/v1.2/ceph-disaster-recovery.html)
+* [Portworx](https://docs.portworx.com/scheduler/kubernetes/going-production-with-k8s.html#disaster-recovery-with-cloudsnaps)
-The typical strategy is to provide a common storage point where applications can write their data. This data is then replicated across regions and then accessed locally.
+Typically, you provide a common storage point where applications write their data. This data is then replicated across regions and accessed locally.
![Infrastructure-based asynchronous replication](media/operator-best-practices-bc-dr/aks-infra-based-async-repl.png)
-If you use Azure Managed Disks, there are a few options you can use to handle replication and disaster recovery. [Velero on Azure][velero] and [Kasten][kasten] are back up solutions native to Kubernetes but are not supported.
+If you use Azure Managed Disks, you can use [Velero on Azure][velero] or [Kasten][kasten] to handle replication and disaster recovery. These backup solutions are native to Kubernetes, but are not supported.
### Application-based asynchronous replication
-Kubernetes currently provides no native implementation for application-based asynchronous replication. Because containers and Kubernetes are loosely coupled, any traditional application or language approach should work. Typically, the applications themselves replicate the storage requests, which are then written to each cluster's underlying data storage.
+Kubernetes currently provides no native implementation for application-based asynchronous replication. Since containers and Kubernetes are loosely coupled, any traditional application or language approach should work. Typically, the applications themselves replicate the storage requests, which are then written to each cluster's underlying data storage.
![Application-based asynchronous replication](media/operator-best-practices-bc-dr/aks-app-based-async-repl.png)
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-network.md
description: Learn the cluster operator best practices for virtual network resources and connectivity in Azure Kubernetes Service (AKS) Previously updated : 12/10/2018 Last updated : 03/10/2021 # Best practices for network connectivity and security in Azure Kubernetes Service (AKS)
-As you create and manage clusters in Azure Kubernetes Service (AKS), you provide network connectivity for your nodes and applications. These network resources include IP address ranges, load balancers, and ingress controllers. To maintain a high quality of service for your applications, you need to plan for and then configure these resources.
+As you create and manage clusters in Azure Kubernetes Service (AKS), you provide network connectivity for your nodes and applications. These network resources include IP address ranges, load balancers, and ingress controllers. To maintain a high quality of service for your applications, you need to plan for and configure these resources.
This best practices article focuses on network connectivity and security for cluster operators. In this article, you learn how to: > [!div class="checklist"]
-> * Compare the kubenet and Azure Container Networking Interface (CNI) network modes in AKS
-> * Plan for required IP addressing and connectivity
-> * Distribute traffic using load balancers, ingress controllers, or a web application firewall (WAF)
-> * Securely connect to cluster nodes
+> * Compare the kubenet and Azure Container Networking Interface (CNI) network modes in AKS.
+> * Plan for required IP addressing and connectivity.
+> * Distribute traffic using load balancers, ingress controllers, or a web application firewall (WAF).
+> * Securely connect to cluster nodes.
## Choose the appropriate network model
-**Best practice guidance** - For integration with existing virtual networks or on-premises networks, use Azure CNI networking in AKS. This network model also allows greater separation of resources and controls in an enterprise environment.
+> **Best practice guidance**
+>
+> Use Azure CNI networking in AKS for integration with existing virtual networks or on-premises networks. This network model allows greater separation of resources and controls in an enterprise environment.
Virtual networks provide the basic connectivity for AKS nodes and customers to access your applications. There are two different ways to deploy AKS clusters into virtual networks:
-* **Kubenet networking** - Azure manages the virtual network resources as the cluster is deployed and uses the [kubenet][kubenet] Kubernetes plugin.
-* **Azure CNI networking** - Deploys into a virtual network, and uses the [Azure Container Networking Interface (CNI)][cni-networking] Kubernetes plugin. Pods receive individual IPs that can route to other network services or on-premises resources.
+* **Azure CNI networking**
+
+ Deploys into a virtual network and uses the [Azure CNI][cni-networking] Kubernetes plugin. Pods receive individual IPs that can route to other network services or on-premises resources.
+* **Kubenet networking**
+
+ Azure manages the virtual network resources as the cluster is deployed and uses the [kubenet][kubenet] Kubernetes plugin.
+ For production deployments, both kubenet and Azure CNI are valid options. ### CNI Networking
-The Container Networking Interface (CNI) is a vendor-neutral protocol that lets the container runtime make requests to a network provider. The Azure CNI assigns IP addresses to pods and nodes, and provides IP address management (IPAM) features as you connect to existing Azure virtual networks. Each node and pod resource receives an IP address in the Azure virtual network, and no extra routing is needed to communicate with other resources or services.
+The Container Networking Interface (CNI) is a vendor-neutral protocol that lets the container runtime make requests to a network provider. Azure CNI assigns IP addresses to pods and nodes, and provides IP address management (IPAM) features as you connect to existing Azure virtual networks. Each node and pod resource receives an IP address in the Azure virtual network, so no extra routing is needed to communicate with other resources or services.
![Diagram showing two nodes with bridges connecting each to a single Azure VNet](media/operator-best-practices-network/advanced-networking-diagram.png)
-A notable benefit of Azure CNI networking for production is the network model allows for separation of control and management of resources. From a security perspective, you often want different teams to manage and secure those resources. Azure CNI networking lets you connect to existing Azure resources, on-premises resources, or other services directly via IP addresses assigned to each pod.
+A notable benefit of Azure CNI networking in production is the separation of control and management of resources. From a security perspective, you often want different teams to manage and secure those resources. With Azure CNI networking, you connect to existing Azure resources, on-premises resources, or other services directly via IP addresses assigned to each pod.
+
+When you use Azure CNI networking, the virtual network resource is in a separate resource group from the AKS cluster. Delegate permissions for the AKS cluster identity to access and manage these resources. The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network.
-When you use Azure CNI networking, the virtual network resource is in a separate resource group to the AKS cluster. Delegate permissions for the AKS cluster identity to access and manage these resources. The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
+If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
* `Microsoft.Network/virtualNetworks/subnets/join/action` * `Microsoft.Network/virtualNetworks/subnets/read`
-By default, AKS uses a managed identity for its cluster identity, but you have the option to use a service principal instead. For more information about AKS service principal delegation, see [Delegate access to other Azure resources][sp-delegation]. For more information about managed identities, see [Use managed identities](use-managed-identity.md).
+By default, AKS uses a managed identity for its cluster identity. However, you can use a service principal instead. For more information about:
+* AKS service principal delegation, see [Delegate access to other Azure resources][sp-delegation].
+* Managed identities, see [Use managed identities](use-managed-identity.md).
-As each node and pod receive its own IP address, plan out the address ranges for the AKS subnets. The subnet must be large enough to provide IP addresses for every node, pods, and network resources that you deploy. Each AKS cluster must be placed in its own subnet. To allow connectivity to on-premises or peered networks in Azure, don't use IP address ranges that overlap with existing network resources. There are default limits to the number of pods that each node runs with both kubenet and Azure CNI networking. To handle scale out events or cluster upgrades, you also need extra IP addresses available for use in the assigned subnet. This extra address space is especially important if you use Windows Server containers, as those node pools require an upgrade to apply the latest security patches. For more information on Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade].
+As each node and pod receives its own IP address, plan out the address ranges for the AKS subnets. Keep in mind:
+* The subnet must be large enough to provide IP addresses for every node, pod, and network resource that you deploy.
+ * With both kubenet and Azure CNI networking, each node has a default limit on the number of pods it can run.
+* Each AKS cluster must be placed in its own subnet.
+* Avoid using IP address ranges that overlap with existing network resources.
+ * Non-overlapping ranges are necessary for connectivity to on-premises or peered networks in Azure.
+* To handle scale out events or cluster upgrades, you need extra IP addresses available in the assigned subnet.
+ * This extra address space is especially important if you use Windows Server containers, as those node pools require an upgrade to apply the latest security patches. For more information on Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade].
To calculate the IP address required, see [Configure Azure CNI networking in AKS][advanced-networking].
-When you create a cluster with Azure CNI networking, you specify other address ranges for use by the cluster, such as the Docker bridge address, DNS service IP, and service address range. In general, these address ranges shouldn't overlap each other and shouldn't overlap with any networks associated with the cluster, including any virtual networks, subnets, on-premises and peered networks. For the specific details around limits and sizing for these address ranges, see [Configure Azure CNI networking in AKS][advanced-networking].
+When creating a cluster with Azure CNI networking, you specify other address ranges for the cluster, such as the Docker bridge address, DNS service IP, and service address range. In general, make sure these address ranges:
+* Don't overlap each other.
+* Don't overlap with any networks associated with the cluster, including any virtual networks, subnets, on-premises and peered networks.
+
+For the specific details around limits and sizing for these address ranges, see [Configure Azure CNI networking in AKS][advanced-networking].
### Kubenet networking
-Although kubenet doesn't require you to set up the virtual networks before the cluster is deployed, there are disadvantages:
+Although kubenet doesn't require you to set up the virtual networks before the cluster is deployed, there are disadvantages:
-* Nodes and pods are placed on different IP subnets. User Defined Routing (UDR) and IP forwarding is used to route traffic between pods and nodes. This extra routing may reduce network performance.
+Since nodes and pods are placed on different IP subnets, User Defined Routing (UDR) and IP forwarding route traffic between pods and nodes. This extra routing may reduce network performance.
* Connections to existing on-premises networks or peering to other Azure virtual networks can be complex.
-Kubenet is suitable for small development or test workloads, as you don't have to create the virtual network and subnets separately from the AKS cluster. Simple websites with low traffic, or to lift and shift workloads into containers, can also benefit from the simplicity of AKS clusters deployed with kubenet networking. For most production deployments, you should plan for and use Azure CNI networking.
+Since you don't create the virtual network and subnets separately from the AKS cluster, kubenet is well suited for:
+* Small development or test workloads.
+* Simple websites with low traffic.
+* Lifting and shifting workloads into containers.
+
+For most production deployments, you should plan for and use Azure CNI networking.
-You can also [configure your own IP address ranges and virtual networks using kubenet][aks-configure-kubenet-networking]. Similar to Azure CNI networking, these address ranges shouldn't overlap each other and shouldn't overlap with any networks associated with the cluster, including any virtual networks, subnets, on-premises and peered networks. For the specific details around limits and sizing for these address ranges, see [Use kubenet networking with your own IP address ranges in AKS][aks-configure-kubenet-networking].
+You can also [configure your own IP address ranges and virtual networks using kubenet][aks-configure-kubenet-networking]. As with Azure CNI networking, these address ranges shouldn't overlap each other or any networks associated with the cluster (virtual networks, subnets, on-premises and peered networks).
+
+For the specific details around limits and sizing for these address ranges, see [Use kubenet networking with your own IP address ranges in AKS][aks-configure-kubenet-networking].
## Distribute ingress traffic
-**Best practice guidance** - To distribute HTTP or HTTPS traffic to your applications, use ingress resources and controllers. Ingress controllers provide extra features over a regular Azure load balancer, and can be managed as native Kubernetes resources.
+> **Best practice guidance**
+>
+> To distribute HTTP or HTTPS traffic to your applications, use ingress resources and controllers. Compared to an Azure load balancer, ingress controllers provide extra features and can be managed as native Kubernetes resources.
+
+While an Azure load balancer can distribute customer traffic to applications in your AKS cluster, its understanding of that traffic is limited. A load balancer resource works at layer 4, and distributes traffic based on protocol or ports.
-An Azure load balancer can distribute customer traffic to applications in your AKS cluster, but it's limited in what it understands about that traffic. A load balancer resource works at layer 4, and distributes traffic based on protocol or ports. Most web applications that use HTTP or HTTPS should use Kubernetes ingress resources and controllers, which work at layer 7. Ingress can distribute traffic based on the URL of the application and handle TLS/SSL termination. This ability also reduces the number of IP addresses you expose and map. With a load balancer, each application typically needs a public IP address assigned and mapped to the service in the AKS cluster. With an ingress resource, a single IP address can distribute traffic to multiple applications.
+Most web applications using HTTP or HTTPS should use Kubernetes ingress resources and controllers, which work at layer 7. Ingress can distribute traffic based on the URL of the application and handle TLS/SSL termination. Ingress also reduces the number of IP addresses you expose and map.
+
+With a load balancer, each application typically needs a public IP address assigned and mapped to the service in the AKS cluster. With an ingress resource, a single IP address can distribute traffic to multiple applications.
![Diagram showing Ingress traffic flow in an AKS cluster](media/operator-best-practices-network/aks-ingress.png) There are two components for ingress:
- * An ingress *resource*, and
+ * An ingress *resource*
* An ingress *controller*
-The ingress resource is a YAML manifest of `kind: Ingress` that defines the host, certificates, and rules to route traffic to services that run in your AKS cluster. The following example YAML manifest would distribute traffic for *myapp.com* to one of two services, *blogservice* or *storeservice*. The customer is directed to one service or the other based on the URL they access.
+### Ingress resource
+
+The *ingress resource* is a YAML manifest of `kind: Ingress`. It defines the host, certificates, and rules to route traffic to services running in your AKS cluster.
+
+The following example YAML manifest would distribute traffic for *myapp.com* to one of two services, *blogservice* or *storeservice*. The customer is directed to one service or the other based on the URL they access.
```yaml kind: Ingress
spec:
servicePort: 80 ```
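For readability, here's a fuller sketch of the manifest the fragment above abbreviates. The host and service names (*myapp.com*, *blogservice*, *storeservice*) come from the description; the resource name, paths, and annotation are illustrative assumptions.

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress                     # illustrative name
  annotations:
    kubernetes.io/ingress.class: nginx    # illustrative; selects which controller handles this resource
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /blog                       # assumed path; routes to the blog service
        backend:
          serviceName: blogservice
          servicePort: 80
      - path: /store                      # assumed path; routes to the store service
        backend:
          serviceName: storeservice
          servicePort: 80
```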
-An ingress controller is a daemon that runs on an AKS node and watches for incoming requests. Traffic is then distributed based on the rules defined in the ingress resource. The most common ingress controller is based on [NGINX]. AKS doesn't restrict you to a specific controller, so you can use other controllers such as [Contour][contour], [HAProxy][haproxy], or [Traefik][traefik].
+### Ingress controller
+
+An *ingress controller* is a daemon that runs on an AKS node and watches for incoming requests. Traffic is then distributed based on the rules defined in the ingress resource. While the most common ingress controller is based on [NGINX], AKS doesn't restrict you to a specific controller. You can also use controllers such as [Contour][contour], [HAProxy][haproxy], or [Traefik][traefik].
-Ingress controllers must be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. Use a node selector in your YAML manifest or Helm chart deployment to indicate that the resource should run on a Linux-based node. For more information, see [Use node selectors to control where pods are scheduled in AKS][concepts-node-selectors].
+Ingress controllers must be scheduled on a Linux node. Indicate that the resource should run on a Linux-based node using a node selector in your YAML manifest or Helm chart deployment. For more information, see [Use node selectors to control where pods are scheduled in AKS][concepts-node-selectors].
+
+> [!NOTE]
+> Windows Server nodes shouldn't run the ingress controller.
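As a sketch of that scheduling constraint, a deployment's pod template can pin the controller to Linux nodes with a `nodeSelector`. The deployment name and image below are placeholders, not a specific controller's actual manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      nodeSelector:
        kubernetes.io/os: linux       # keep the controller off Windows Server nodes
      containers:
      - name: controller
        image: registry.example.com/ingress-controller:1.0   # placeholder image
```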
There are many scenarios for ingress, including the following how-to guides:
## Secure traffic with a web application firewall (WAF)
-**Best practice guidance** - To scan incoming traffic for potential attacks, use a web application firewall (WAF) such as [Barracuda WAF for Azure][barracuda-waf] or Azure Application Gateway. These more advanced network resources can also route traffic beyond just HTTP and HTTPS connections or basic TLS termination.
+> **Best practice guidance**
+>
+> To scan incoming traffic for potential attacks, use a web application firewall (WAF) such as [Barracuda WAF for Azure][barracuda-waf] or Azure Application Gateway. These more advanced network resources can also route traffic beyond just HTTP and HTTPS connections or basic TLS termination.
-An ingress controller that distributes traffic to services and applications is typically a Kubernetes resource in your AKS cluster. The controller runs as a daemon on an AKS node, and consumes some of the node's resources such as CPU, memory, and network bandwidth. In larger environments, you often want to offload some of this traffic routing or TLS termination to a network resource outside of the AKS cluster. You also want to scan incoming traffic for potential attacks.
+Typically, an ingress controller is a Kubernetes resource in your AKS cluster that distributes traffic to services and applications. The controller runs as a daemon on an AKS node, and consumes some of the node's resources, like CPU, memory, and network bandwidth. In larger environments, you often want to:
+* Offload some of this traffic routing or TLS termination to a network resource outside of the AKS cluster.
+* Scan incoming traffic for potential attacks.
![A web application firewall (WAF) such as Azure App Gateway can protect and distribute traffic for your AKS cluster](media/operator-best-practices-network/web-application-firewall-app-gateway.png)
-A web application firewall (WAF) provides an extra layer of security by filtering the incoming traffic. The Open Web Application Security Project (OWASP) provides a set of rules to watch for attacks like cross site scripting or cookie poisoning. [Azure Application Gateway][app-gateway] (currently in preview in AKS) is a WAF that can integrate with AKS clusters to provide these security features, before the traffic reaches your AKS cluster and applications. Other third-party solutions also perform these functions, so you can continue to use existing investments or expertise in a given product.
+A web application firewall (WAF) provides an extra layer of security by filtering the incoming traffic. The Open Web Application Security Project (OWASP) provides a set of rules that watch for attacks like cross-site scripting or cookie poisoning. [Azure Application Gateway][app-gateway] (currently in preview in AKS) is a WAF that integrates with AKS clusters, providing these security features before the traffic reaches your AKS cluster and applications.
+
+Since other third-party solutions also perform these functions, you can continue to use existing investments or expertise in your preferred product.
-Load balancer or ingress resources continue to run in your AKS cluster to further refine the traffic distribution. App Gateway can be centrally managed as an ingress controller with a resource definition. To get started, [create an Application Gateway Ingress controller][app-gateway-ingress].
+Load balancer or ingress resources continue to run in your AKS cluster to further refine the traffic distribution. App Gateway can be centrally managed as an ingress controller with a resource definition. To get started, [create an Application Gateway Ingress controller][app-gateway-ingress].
## Control traffic flow with network policies
-**Best practice guidance** - Use network policies to allow or deny traffic to pods. By default, all traffic is allowed between pods within a cluster. For improved security, define rules that limit pod communication.
+> **Best practice guidance**
+>
+> Use network policies to allow or deny traffic to pods. By default, all traffic is allowed between pods within a cluster. For improved security, define rules that limit pod communication.
+
+Network policy is a Kubernetes feature available in AKS that lets you control the traffic flow between pods. You allow or deny traffic to the pod based on settings such as assigned labels, namespace, or traffic port. Network policies are a cloud-native way to control the flow of traffic for pods. As pods are dynamically created in an AKS cluster, required network policies can be automatically applied.
+
+To use network policy, enable the feature when you create a new AKS cluster. You can't enable network policy on an existing AKS cluster. Plan ahead to enable network policy on the necessary clusters.
-Network policy is a Kubernetes feature that lets you control the traffic flow between pods. You can choose to allow or deny traffic based on settings such as assigned labels, namespace, or traffic port. The use of network policies gives a cloud-native way to control the flow of traffic. As pods are dynamically created in an AKS cluster, the required network policies can be automatically applied. Don't use Azure network security groups to control pod-to-pod traffic, use network policies.
+>[!NOTE]
+>Network policy should only be used for Linux-based nodes and pods in AKS.
-To use network policy, the feature must be enabled when you create an AKS cluster. You can't enable network policy on an existing AKS cluster. Plan ahead to make sure that you enable network policy on clusters and can use them as needed. Network policy should only be used for Linux-based nodes and pods in AKS.
+You create a network policy as a Kubernetes resource using a YAML manifest. Policies are applied to defined pods, with ingress or egress rules defining traffic flow.
-A network policy is created as a Kubernetes resource using a YAML manifest. The policies are applied to defined pods, then ingress or egress rules define how the traffic can flow. The following example applies a network policy to pods with the *app: backend* label applied to them. The ingress rule then only allows traffic from pods with the *app: frontend* label:
+The following example applies a network policy to pods with the *app: backend* label applied to them. The ingress rule only allows traffic from pods with the *app: frontend* label:
```yaml kind: NetworkPolicy
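A minimal sketch of that policy, assuming an illustrative object name (the labels are the ones named in the text):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy            # illustrative name
spec:
  podSelector:
    matchLabels:
      app: backend                # the policy applies to pods with this label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend           # only these pods may send traffic to the backend pods
```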
To get started with policies, see [Secure traffic between pods using network pol
## Securely connect to nodes through a bastion host
-**Best practice guidance** - Don't expose remote connectivity to your AKS nodes. Create a bastion host, or jump box, in a management virtual network. Use the bastion host to securely route traffic into your AKS cluster to remote management tasks.
+> **Best practice guidance**
+>
> Don't expose remote connectivity to your AKS nodes. Create a bastion host, or jump box, in a management virtual network. Use the bastion host to securely route traffic into your AKS cluster for remote management tasks.
-Most operations in AKS can be completed using the Azure management tools or through the Kubernetes API server. AKS nodes aren't connected to the public internet, and are only available on a private network. To connect to nodes and perform maintenance or troubleshoot issues, route your connections through a bastion host, or jump box. This host should be in a separate management virtual network that is securely peered to the AKS cluster virtual network.
+You can complete most operations in AKS using the Azure management tools or through the Kubernetes API server. AKS nodes are only available on a private network and aren't connected to the public internet. To connect to nodes and perform maintenance or troubleshoot issues, route your connections through a bastion host, or jump box. This host should live in a separate management virtual network that's securely peered to the AKS cluster virtual network.
![Connect to AKS nodes using a bastion host, or jump box](media/operator-best-practices-network/connect-using-bastion-host-simplified.png)
aks Operator Best Practices Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-scheduler.md
Title: Operator best practices - Basic scheduler features in Azure Kubernetes Se
description: Learn the cluster operator best practices for using basic scheduler features such as resource quotas and pod disruption budgets in Azure Kubernetes Service (AKS) Previously updated : 11/26/2018 Last updated : 03/09/2021 # Best practices for basic scheduler features in Azure Kubernetes Service (AKS)
-As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. The Kubernetes scheduler provides features that let you control the distribution of compute resources, or limit the impact of maintenance events.
+As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. The Kubernetes scheduler lets you control the distribution of compute resources and limit the impact of maintenance events.
This best practices article focuses on basic Kubernetes scheduling features for cluster operators. In this article, you learn how to:
This best practices article focuses on basic Kubernetes scheduling features for
## Enforce resource quotas
-**Best practice guidance** - Plan and apply resource quotas at the namespace level. If pods don't define resource requests and limits, reject the deployment. Monitor resource usage and adjust quotas as needed.
+> **Best practice guidance**
+>
+> Plan and apply resource quotas at the namespace level. If pods don't define resource requests and limits, reject the deployment. Monitor resource usage and adjust quotas as needed.
-Resource requests and limits are placed in the pod specification. These limits are used by the Kubernetes scheduler at deployment time to find an available node in the cluster. These limits and requests work at the individual pod level. For more information about how to define these values, see [Define pod resource requests and limits][resource-limits]
+Resource requests and limits are placed in the pod specification. The Kubernetes scheduler uses these values at deployment time to find an available node in the cluster. Requests and limits work at the individual pod level. For more information about how to define these values, see [Define pod resource requests and limits][resource-limits].
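As a hedged illustration of where these values live, the following pod spec sets requests and limits on a single container. The pod name and image are placeholders, and the values are arbitrary examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp                               # illustrative name
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.0   # placeholder image
    resources:
      requests:                 # the scheduler uses requests to place the pod
        cpu: 250m
        memory: 256Mi
      limits:                   # the kubelet enforces limits at runtime
        cpu: 500m
        memory: 512Mi
```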
To provide a way to reserve and limit resources across a development team or project, you should use *resource quotas*. These quotas are defined on a namespace, and can be used to set quotas on the following basis: * **Compute resources**, such as CPU and memory, or GPUs.
-* **Storage resources**, includes the total number of volumes or amount of disk space for a given storage class.
+* **Storage resources**, including the total number of volumes or amount of disk space for a given storage class.
* **Object count**, such as maximum number of secrets, services, or jobs can be created.
-Kubernetes doesn't overcommit resources. Once the cumulative total of resource requests or limits passes the assigned quota, no further deployments are successful.
+Kubernetes doesn't overcommit resources. Once the cumulative total of resource requests or limits passes the assigned quota, no further deployments will succeed.
When you define resource quotas, all pods created in the namespace must provide limits or requests in their pod specifications. If they don't provide these values, you can reject the deployment. Instead, you can [configure default requests and limits for a namespace][configure-default-quotas].
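A minimal sketch of such a quota, assuming a hypothetical *dev-team* namespace and illustrative values:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-team-quota          # illustrative name
  namespace: dev-team           # hypothetical namespace
spec:
  hard:
    requests.cpu: "10"          # total CPU requests allowed across the namespace
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"                  # object count quota
```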
For more information about available resource objects, scopes, and priorities, s
## Plan for availability using pod disruption budgets
-**Best practice guidance** - To maintain the availability of applications, define Pod Disruption Budgets (PDBs) to make sure that a minimum number of pods are available in the cluster.
+> **Best practice guidance**
+>
+> To maintain the availability of applications, define Pod Disruption Budgets (PDBs) to make sure that a minimum number of pods are available in the cluster.
There are two disruptive events that cause pods to be removed:
-* *Involuntary disruptions* are events beyond the typical control of the cluster operator or application owner.
- * These involuntary disruptions include a hardware failure on the physical machine, a kernel panic, or the deletion of a node VM
-* *Voluntary disruptions* are events requested by the cluster operator or application owner.
- * These voluntary disruptions include cluster upgrades, an updated deployment template, or accidentally deleting a pod.
+### Involuntary disruptions
-The involuntary disruptions can be mitigated by using multiple replicas of your pods in a deployment. Running multiple nodes in the AKS cluster also helps with these involuntary disruptions. For voluntary disruptions, Kubernetes provides *pod disruption budgets* that let the cluster operator define a minimum available or maximum unavailable resource count. These pod disruption budgets let you plan for how deployments or replica sets respond when a voluntary disruption event occurs.
+*Involuntary disruptions* are events beyond the typical control of the cluster operator or application owner. Examples include:
+* Hardware failure on the physical machine
+* Kernel panic
+* Deletion of a node VM
-If a cluster is to be upgraded or a deployment template updated, the Kubernetes scheduler makes sure additional pods are scheduled on other nodes before the voluntary disruption events can continue. The scheduler waits before a node is rebooted until the defined number of pods are successfully scheduled on other nodes in the cluster.
+Involuntary disruptions can be mitigated by:
+* Using multiple replicas of your pods in a deployment.
+* Running multiple nodes in the AKS cluster.
+
+### Voluntary disruptions
+
+*Voluntary disruptions* are events requested by the cluster operator or application owner. Examples include:
+* Cluster upgrades
+* Updated deployment template
+* Accidentally deleting a pod
+
+Kubernetes provides *pod disruption budgets* for voluntary disruptions, letting you plan for how deployments or replica sets respond when a voluntary disruption event occurs. Using pod disruption budgets, cluster operators can define a minimum available or maximum unavailable resource count.
+
+If you upgrade a cluster or update a deployment template, the Kubernetes scheduler will schedule extra pods on other nodes before allowing voluntary disruption events to continue. The scheduler waits to reboot a node until the defined number of pods are successfully scheduled on other nodes in the cluster.
Let's look at an example of a replica set with five pods that run NGINX. The pods in the replica set are assigned the label `app: nginx-frontend`. During a voluntary disruption event, such as a cluster upgrade, you want to make sure at least three pods continue to run. The following YAML manifest for a *PodDisruptionBudget* object defines these requirements:
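Reconstructed as a minimal sketch from that description (the object name is an assumption; the label and minimum count come from the text):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb               # illustrative name
spec:
  minAvailable: 3               # keep at least three pods available during voluntary disruptions
  selector:
    matchLabels:
      app: nginx-frontend
```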
For more information about using pod disruption budgets, see [Specify a disrupti
## Regularly check for cluster issues with kube-advisor
-**Best practice guidance** - Regularly run the latest version of `kube-advisor` open source tool to detect issues in your cluster. If you apply resource quotas on an existing AKS cluster, run `kube-advisor` first to find pods that don't have resource requests and limits defined.
+> **Best practice guidance**
+>
+> Regularly run the latest version of `kube-advisor` open source tool to detect issues in your cluster. If you apply resource quotas on an existing AKS cluster, run `kube-advisor` first to find pods that don't have resource requests and limits defined.
-The [kube-advisor][kube-advisor] tool is an associated AKS open source project that scans a Kubernetes cluster and reports on issues that it finds. One useful check is to identify pods that don't have resource requests and limits in place.
+The [kube-advisor][kube-advisor] tool is an associated AKS open source project that scans a Kubernetes cluster and reports on issues it finds. One useful check is identifying pods without resource requests and limits in place.
-The kube-advisor tool can report on resource request and limits missing in PodSpecs for Windows applications as well as Linux applications, but the kube-advisor tool itself must be scheduled on a Linux pod. You can schedule a pod to run on a node pool with a specific OS using a [node selector][k8s-node-selector] in the pod's configuration.
+While the `kube-advisor` tool can report on resource request and limits missing in PodSpecs for Windows and Linux applications, the tool itself must be scheduled on a Linux pod. Schedule a pod to run on a node pool with a specific OS using a [node selector][k8s-node-selector] in the pod's configuration.
-In an AKS cluster that hosts multiple development teams and applications, it can be hard to track pods without these resource requests and limits set. As a best practice, regularly run `kube-advisor` on your AKS clusters, especially if you don't assign resource quotas to namespaces.
+Tracking pods without set resource requests and limits in an AKS cluster hosting multiple development teams and applications can be difficult. As a best practice, regularly run `kube-advisor` on your AKS clusters, especially if you don't assign resource quotas to namespaces.
## Next steps
aks Operator Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-storage.md
description: Learn the cluster operator best practices for storage, data encryption, and backups in Azure Kubernetes Service (AKS) Previously updated : 5/6/2019 Last updated : 03/10/2021 # Best practices for storage and backups in Azure Kubernetes Service (AKS)
-As you create and manage clusters in Azure Kubernetes Service (AKS), your applications often need storage. It's important to understand the performance needs and access methods for pods so that you can provide the appropriate storage to applications. The AKS node size may impact these storage choices. You should also plan for ways to back up and test the restore process for attached storage.
+As you create and manage clusters in Azure Kubernetes Service (AKS), your applications often need storage. Make sure you understand pod performance needs and access methods so that you can select the best storage for your application. The AKS node size may impact your storage choices. Plan for ways to back up and test the restore process for attached storage.
This best practices article focuses on storage considerations for cluster operators. In this article, you learn: > [!div class="checklist"]
-> * What types of storage are available
-> * How to correctly size AKS nodes for storage performance
-> * Differences between dynamic and static provisioning of volumes
-> * Ways to back up and secure your data volumes
+> * What types of storage are available.
+> * How to correctly size AKS nodes for storage performance.
+> * Differences between dynamic and static provisioning of volumes.
+> * Ways to back up and secure your data volumes.
## Choose the appropriate storage type
-**Best practice guidance** - Understand the needs of your application to pick the right storage. Use high performance, SSD-backed storage for production workloads. Plan for network-based storage when there is a need for multiple concurrent connections.
+> **Best practice guidance**
+>
+> Understand the needs of your application to pick the right storage. Use high performance, SSD-backed storage for production workloads. Plan for network-based storage when you need multiple concurrent connections.
-Applications often require different types and speeds of storage. Do your applications need storage that connects to individual pods, or shared across multiple pods? Is the storage for read-only access to data, or to write large amounts of structured data? These storage needs determine the most appropriate type of storage to use.
+Applications often require different types and speeds of storage. Determine the most appropriate storage type by asking the following questions.
+* Do your applications need storage that connects to individual pods?
+* Do your applications need storage shared across multiple pods?
+* Is the storage for read-only access to data?
+* Will the storage be used to write large amounts of structured data?
The following table outlines the available storage types and their capabilities:
The following table outlines the available storage types and their capabilities:
| Structured app data | Azure Disks | Yes | No | No | Yes | | Unstructured data, file system operations | [BlobFuse][blobfuse] | Yes | Yes | Yes | No |
-The two primary types of storage provided for volumes in AKS are backed by Azure Disks or Azure Files. To improve security, both types of storage use Azure Storage Service Encryption (SSE) by default that encrypts data at rest. Disks cannot currently be encrypted using Azure Disk Encryption at the AKS node level.
+AKS provides two primary types of storage for volumes, backed by Azure Disks or Azure Files. Both use Azure Storage Service Encryption (SSE) by default, which encrypts data at rest. Currently, disks can't be encrypted using Azure Disk Encryption at the AKS node level.
Both Azure Files and Azure Disks are available in Standard and Premium performance tiers:
-- *Premium* disks are backed by high-performance solid-state disks (SSDs). Premium disks are recommended for all production workloads.
-- *Standard* disks are backed by regular spinning disks (HDDs), and are good for archival or infrequently accessed data.
+- *Premium* disks
+ - Backed by high-performance solid-state disks (SSDs).
+ - Recommended for all production workloads.
+- *Standard* disks
+ - Backed by regular spinning disks (HDDs).
+ - Good for archival or infrequently accessed data.
Understand the application performance needs and access patterns to choose the appropriate storage tier. For more information about Managed Disks sizes and performance tiers, see [Azure Managed Disks overview][managed-disks] ### Create and use storage classes to define application needs
-The type of storage you use is defined using Kubernetes *storage classes*. The storage class is then referenced in the pod or deployment specification. These definitions work together to create the appropriate storage and connect it to pods. For more information, see [Storage classes in AKS][aks-concepts-storage-classes].
+Define the type of storage you want using Kubernetes *storage classes*. The storage class is then referenced in the pod or deployment specification. These definitions work together to create the appropriate storage and connect it to pods.
+
+For more information, see [Storage classes in AKS][aks-concepts-storage-classes].
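A minimal sketch of a storage class for premium, SSD-backed disks, assuming the in-tree `kubernetes.io/azure-disk` provisioner used at the time of writing; the class name and reclaim policy choice are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-retain        # illustrative name
provisioner: kubernetes.io/azure-disk # in-tree Azure Disks provisioner
reclaimPolicy: Retain                 # keep the underlying disk after the claim is deleted
parameters:
  storageaccounttype: Premium_LRS     # premium, SSD-backed storage
  kind: Managed
```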
## Size the nodes for storage needs
-**Best practice guidance** - Each node size supports a maximum number of disks. Different node sizes also provide different amounts of local storage and network bandwidth. Plan for your application demands to deploy the appropriate size of nodes.
+> **Best practice guidance**
+>
+> Each node size supports a maximum number of disks. Different node sizes also provide different amounts of local storage and network bandwidth. Plan appropriately for your application demands to deploy the right size of nodes.
+
+AKS nodes run as various Azure VM types and sizes. Each VM size provides:
+* A different amount of core resources such as CPU and memory.
+* A maximum number of disks that can be attached.
+
+Storage performance also varies between VM sizes for the maximum local and attached disk IOPS (input/output operations per second).
-AKS nodes run as Azure VMs. Different types and sizes of VM are available. Each VM size provides a different amount of core resources such as CPU and memory. These VM sizes have a maximum number of disks that can be attached. Storage performance also varies between VM sizes for the maximum local and attached disk IOPS (input/output operations per second).
+If your applications require Azure Disks as their storage solution, plan for an appropriate node VM size. Storage capabilities, along with the amount of CPU and memory, play a major role when deciding on a VM size.
-If your applications require Azure Disks as their storage solution, plan for and choose an appropriate node VM size. The amount of CPU and memory isn't the only factor when you choose a VM size. The storage capabilities are also important. For example, both the *Standard_B2ms* and *Standard_DS2_v2* VM sizes include a similar amount of CPU and memory resources. Their potential storage performance is different, as shown in the following table:
+For example, while both the *Standard_B2ms* and *Standard_DS2_v2* VM sizes include a similar amount of CPU and memory resources, their potential storage performance is different:
| Node type and size | vCPU | Memory (GiB) | Max data disks | Max uncached disk IOPS | Max uncached throughput (MBps) | |--||--|-||--| | Standard_B2ms | 2 | 8 | 4 | 1,920 | 22.5 | | Standard_DS2_v2 | 2 | 7 | 8 | 6,400 | 96 |
-Here, the *Standard_DS2_v2* allows double the number of attached disks, and provides three to four times the amount of IOPS and disk throughput. If you only looked at the core compute resources and compared costs, you may choose the *Standard_B2ms* VM size and have poor storage performance and limitations. Work with your application development team to understand their storage capacity and performance needs. Choose the appropriate VM size for the AKS nodes to meet or exceed their performance needs. Regularly baseline applications to adjust VM size as needed.
+In this example, the *Standard_DS2_v2* offers twice as many attached disks, and three to four times the IOPS and disk throughput. If you only looked at core compute resources and compared costs, you might choose the *Standard_B2ms* VM size and end up with poor storage performance and limitations.
+
+Work with your application development team to understand their storage capacity and performance needs. Choose the appropriate VM size for the AKS nodes to meet or exceed their performance needs. Regularly baseline applications to adjust VM size as needed.
For more information about available VM sizes, see [Sizes for Linux virtual machines in Azure][vm-sizes]. ## Dynamically provision volumes
-**Best practice guidance** - To reduce management overhead and let you scale, don't statically create and assign persistent volumes. Use dynamic provisioning. In your storage classes, define the appropriate reclaim policy to minimize unneeded storage costs once pods are deleted.
+> **Best practice guidance**
+>
+> To reduce management overhead and enable scaling, avoid statically creating and assigning persistent volumes. Use dynamic provisioning. In your storage classes, define the appropriate reclaim policy to minimize unneeded storage costs once pods are deleted.
-When you need to attach storage to pods, you use persistent volumes. These persistent volumes can be created manually or dynamically. Manual creation of persistent volumes adds management overhead, and limits your ability to scale. Use dynamic persistent volume provisioning to simplify storage management and allow your applications to grow and scale as needed.
+To attach storage to pods, use persistent volumes. Persistent volumes can be created manually or dynamically. Creating persistent volumes manually adds management overhead and limits your ability to scale. Instead, provision persistent volumes dynamically to simplify storage management and allow your applications to grow and scale as needed.
![Persistent volume claims in an Azure Kubernetes Services (AKS) cluster](media/concepts-storage/persistent-volume-claims.png)
-A persistent volume claim (PVC) lets you dynamically create storage as needed. The underlying Azure disks are created as pods request them. In the pod definition, you request a volume to be created and attached to a designated mount path.
+A persistent volume claim (PVC) lets you dynamically create storage as needed. Underlying Azure disks are created as pods request them. In the pod definition, request a volume to be created and attached to a designated mount path.
For the concepts on how to dynamically create and use volumes, see [Persistent Volumes Claims][aks-concepts-storage-pvcs]. To see these volumes in action, see how to dynamically create and use a persistent volume with [Azure Disks][dynamic-disks] or [Azure Files][dynamic-files].
-As part of your storage class definitions, set the appropriate *reclaimPolicy*. This reclaimPolicy controls the behavior of the underlying Azure storage resource when the pod is deleted and the persistent volume may no longer be required. The underlying storage resource can be deleted, or retained for use with a future pod. The reclaimPolicy can set to *retain* or *delete*. Understand your application needs, and implement regular checks for storage that is retained to minimize the amount of un-used storage that is used and billed.
+As part of your storage class definitions, set the appropriate *reclaimPolicy*. This reclaimPolicy controls the behavior of the underlying Azure storage resource when the pod is deleted. The underlying storage resource can either be deleted or retained for future pod use. Set the reclaimPolicy to *retain* or *delete*.
+
+Understand your application needs, and implement regular checks for retained storage to minimize the amount of unused and billed storage.
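A minimal sketch of such a claim; the claim name and requested size are illustrative, and *managed-premium* is assumed to be one of the storage classes AKS creates by default:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk            # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium   # assumed built-in AKS class for premium disks
  resources:
    requests:
      storage: 5Gi                    # illustrative size; the disk is provisioned on demand
```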
For more information about storage class options, see [storage reclaim policies][reclaim-policy]. ## Secure and back up your data
-**Best practice guidance** - Back up your data using an appropriate tool for your storage type, such as Velero or Azure Backup. Verify the integrity, and security, of those backups.
+> **Best practice guidance**
+>
+> Back up your data using an appropriate tool for your storage type, such as Velero or Azure Backup. Verify the integrity and security of those backups.
-When your applications store and consume data persisted on disks or in files, you need to take regular backups or snapshots of that data. Azure Disks can use built-in snapshot technologies. You may need to look for your applications to flush writes to disk before you perform the snapshot operation. [Velero][velero] can back up persistent volumes along with additional cluster resources and configurations. If you can't [remove state from your applications][remove-state], back up the data from persistent volumes and regularly test the restore operations to verify data integrity and the processes required.
+When your applications store and consume data persisted on disks or in files, you need to take regular backups or snapshots of that data. Azure Disks can use built-in snapshot technologies. Your applications may need to flush writes to disk before you perform the snapshot operation. [Velero][velero] can back up persistent volumes along with additional cluster resources and configurations. If you can't [remove state from your applications][remove-state], back up the data from persistent volumes and regularly test the restore operations to verify data integrity and the processes required.
Understand the limitations of the different approaches to data backups and if you need to quiesce your data prior to snapshot. Data backups don't necessarily let you restore your application environment of cluster deployment. For more information about those scenarios, see [Best practices for business continuity and disaster recovery in AKS][best-practices-multi-region].
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
# Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) [Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md)
-provides Microsoft created and managed initiative definitions, known as _built-ins_, for the
-**compliance domains** and **security controls** related to different compliance standards. This
-page lists the **compliance domains** and **security controls** for Azure Kubernetes Service (AKS).
-You can assign the built-ins for a **security control** individually to help make your Azure
-resources compliant with the specific standard.
+provides initiative definitions (*built-ins*) created and managed by Microsoft for the compliance domains and security controls related to different compliance standards. This page lists the Azure Kubernetes Service (AKS) compliance domains and security controls.
+
+You can assign the built-ins for a **security control** individually to help make your Azure resources compliant with the specific standard.
[!INCLUDE [azure-policy-compliancecontrols-introwarning](../../includes/policy/standards/intro-warning.md)]
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-overview.md
As mentioned previously, Azure classifies services into three categories: founda
> | Azure Data Lake Storage Gen2 | Azure Active Directory Domain Services | > | Azure ExpressRoute | Azure Bastion | > | Azure Public IP | Azure Cache for Redis |
-> | Azure SQL Database | Azure Cognitive Search |
-> | Azure SQL Managed Instance | Azure Cognitive Services |
-> | Disk Storage | Azure Cognitive
-> | Event Hubs | Azure Cognitive
-> | Key Vault | Azure Cognitive
-> | Load balancer | Azure Cognitive
-> | Service Bus | Azure Cognitive
-> | Service Fabric | Azure Cognitive
-> | Storage: Hot/Cool Blob Storage Tiers | Azure Cognitive
-> | Storage: Managed Disks | Azure Cognitive
-> | Virtual Machine Scale Sets | Azure Data Explorer |
-> | Virtual Machines | Azure Data Share |
-> | Virtual Machines: Azure Dedicated Host | Azure Database for MySQL |
-> | Virtual Machines: Av2-Series | Azure Database for PostgreSQL |
-> | Virtual Machines: Bs-Series | Azure DDoS Protection |
-> | Virtual Machines: DSv2-Series | Azure Firewall |
-> | Virtual Machines: DSv3-Series | Azure Firewall Manager |
-> | Virtual Machines: Dv2-Series | Azure Functions |
-> | Virtual Machines: Dv3-Series | Azure IoT Hub |
-> | Virtual Machines: ESv3-Series | Azure Kubernetes Service (AKS) |
-> | Virtual Machines: Ev3-Series | Azure Machine Learning |
-> | Virtual Network | Azure Monitor: Application Insights |
-> | VPN Gateway | Azure Monitor: Log Analytics |
-> | | Azure Private Link |
-> | | Azure Red Hat OpenShift |
-> | | Azure Site Recovery |
-> | | Azure Stream Analytics |
-> | | Azure Synapse Analytics |
-> | | Batch |
-> | | Cloud
-> | | Container Instances |
-> | | Container Registry |
+> | Azure SQL Database | Azure Cognitive Services |
+> | Azure SQL Managed Instance | Azure Cognitive
+> | Disk Storage | Azure Cognitive
+> | Event Hubs | Azure Cognitive
+> | Key Vault | Azure Cognitive
+> | Load balancer | Azure Data Explorer |
+> | Service Bus | Azure Database for MySQL |
+> | Service Fabric | Azure Database for PostgreSQL |
+> | Storage: Hot/Cool Blob Storage Tiers | Azure DDoS Protection |
+> | Storage: Managed Disks | Azure Firewall |
+> | Virtual Machine Scale Sets | Azure Firewall Manager |
+> | Virtual Machines | Azure Functions |
+> | Virtual Machines: Azure Dedicated Host | Azure IoT Hub |
+> | Virtual Machines: Av2-Series | Azure Kubernetes Service (AKS) |
+> | Virtual Machines: Bs-Series | Azure Monitor: Application Insights |
+> | Virtual Machines: DSv2-Series | Azure Monitor: Log Analytics |
+> | Virtual Machines: DSv3-Series | Azure Private Link |
+> | Virtual Machines: Dv2-Series | Azure Site Recovery |
+> | Virtual Machines: Dv3-Series | Azure Synapse Analytics |
+> | Virtual Machines: ESv3-Series | Batch |
+> | Virtual Machines: Ev3-Series | Cloud
+> | Virtual Network | Container Instances |
+> | VPN Gateway | Container Registry |
> | | Data Factory | > | | Event Grid | > | | HDInsight | > | | Logic Apps | > | | Media Services | > | | Network Watcher |
-> | | Notification Hubs |
> | | Premium Blob Storage | > | | Premium Files Storage | > | | Virtual Machines: Ddsv4-Series |
As mentioned previously, Azure classifies services into three categories: founda
> | Azure Cognitive > | Azure Cognitive > | Azure Cognitive
+> | Azure Data Share |
> | Azure Database for MariaDB | > | Azure Database Migration Service | > | Azure Dedicated HSM |
As mentioned previously, Azure classifies services into three categories: founda
> | Azure Health Bot | > | Azure HPC Cache | > | Azure Lab Services |
+> | Azure Machine Learning Studio (classic) |
> | Azure NetApp Files |
+> | Azure Red Hat OpenShift |
> | Azure SignalR Service | > | Azure Spring Cloud Service | > | Azure Time Series Insights | > | Azure VMware Solution | > | Azure VMware Solution by CloudSimple | > | Data Lake Analytics |
-> | Azure Machine Learning Studio (classic) |
> | Spatial Anchors | > | Storage: Archive Storage | > | Ultra Disk Storage |
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
Previously updated : 2/25/2020 Last updated : 04/08/2021 # Use managed identities to access App Configuration
To set up a managed identity in the portal, you first create an application and
>config.AddAzureAppConfiguration(options => > options.Connect(new Uri(settings["AppConfig:Endpoint"]), new ManagedIdentityCredential(<your_clientId>))); >```
- >As explained in the [Managed Identities for Azure resources FAQs](../active-directory/managed-identities-azure-resources/known-issues.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library enforces you to specify the desired identity to avoid posible runtime issues in the future (for instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled). So, you will need to specify the clientId even if only one user-assigned managed identity is defined, and there is no system-assigned managed identity.
+ >As explained in the [Managed Identities for Azure resources FAQs](../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library requires you to specify the desired identity to avoid possible runtime issues in the future (for instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled). So, you will need to specify the clientId even if only one user-assigned managed identity is defined, and there is no system-assigned managed identity.
1. To use both App Configuration values and Key Vault references, update *Program.cs* as shown below. This code calls `SetCredential` as part of `ConfigureKeyVault` to tell the config provider what credential to use when authenticating to Key Vault.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
For information on workarounds to know issues running .NET isolated process func
[HttpResponseData]: /dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata?view=azure-dotnet&preserve-view=true [HttpRequest]: /dotnet/api/microsoft.aspnetcore.http.httprequest?view=aspnetcore-5.0&preserve-view=true [ObjectResult]: /dotnet/api/microsoft.aspnetcore.mvc.objectresult?view=aspnetcore-5.0&preserve-view=true
-[JsonSerializerOptions]: /api/system.text.json.jsonserializeroptions?view=net-5.0&preserve-view=true
+[JsonSerializerOptions]: /dotnet/api/system.text.json.jsonserializeroptions?view=net-5.0&preserve-view=true
azure-government Documentation Accelerate Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/documentation-accelerate-compliance.md
Microsoft is able to scale through its partners. Scale is what will allow us to
## Publishing to Azure Marketplace
-1. Join the Partner Network - It's a requirement for publishing but easy to sign up. Instructions are located here: [Ensure you have a MPN ID and Partner Center Account](../../marketplace/partner-center-portal/create-account.md#create-an-account-using-the-partner-center-enrollment-page).
-2. Enable your partner center account as Publisher / Developer for Marketplace, follow the instructions [here](../../marketplace/partner-center-portal/create-account.md).
+1. Join the Partner Network - It's a requirement for publishing but easy to sign up. Instructions are located here: [Ensure you have a MPN ID and Partner Center Account](/azure/marketplace/create-account.md#create-a-partner-center-account-and-enroll-in-the-commercial-marketplace).
+2. Enable your partner center account as Publisher / Developer for Marketplace, follow the instructions [here](../../marketplace/create-account.md).
3. With an enabled Partner Center Account, publish listing as a SaaS App as instructed [here](../../marketplace/create-new-saas-offer.md). For a list of existing Azure Marketplace offerings in this space, visit [this page](https://aka.ms/azclmarketplace).
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups.md
Title: Create and manage action groups in the Azure portal
description: Learn how to create and manage action groups in the Azure portal. Previously updated : 02/25/2021 Last updated : 04/07/2021 # Create and manage action groups in the Azure portal
You may have a limited number of ITSM actions in an Action Group.
You may have a limited number of Logic App actions in an Action Group. ### Secure Webhook
+The Action Groups Secure Webhook action enables you to take advantage of Azure Active Directory to secure the connection between your action group and your protected web API (webhook endpoint). The overall workflow for using this functionality is described below. For an overview of Azure AD Applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md).
> [!NOTE] > Using the webhook action requires that the target webhook endpoint either doesn't require details of the alert to function successfully or it's capable of parsing the alert context information that's provided as part of the POST operation. If the webhook endpoint can't handle the alert context information on its own, you can use a solution like a [Logic App action](./action-groups-logic-app.md) for a custom manipulation of the alert context information to match the webhook's expected data format.
-> User should be the **owner** of webhook service principal in order to make sure security is not violated. As any azure customer can access all object IDs through portal, without checking the owner, anyone can add the secure webhook to their own action group for azure monitor alert notification which violate security.
-
-The Action Groups Webhook action enables you to take advantage of Azure Active Directory to secure the connection between your action group and your protected web API (webhook endpoint). The overall workflow for taking advantage of this functionality is described below. For an overview of Azure AD Applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md).
1. Create an Azure AD Application for your protected web API. See [Protected web API: App registration](../../active-directory/develop/scenario-protected-web-api-app-registration.md). - Configure your protected API to be [called by a daemon app](../../active-directory/develop/scenario-protected-web-api-app-registration.md#if-your-web-api-is-called-by-a-daemon-app).
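As a minimal sketch of that first step (assuming the AzureAD PowerShell module; the display name and identifier URI below are placeholders), registering the application and creating its service principal might look like this:

```powershell
Connect-AzureAD

# Register an Azure AD application for the protected web API (placeholder values)
$app = New-AzureADApplication -DisplayName "secure-webhook-api" `
    -IdentifierUris "https://contoso.onmicrosoft.com/secure-webhook-api"

# Create the service principal the action group authenticates against
$sp = New-AzureADServicePrincipal -AppId $app.AppId
```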
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Previously updated : 03/11/2021 Last updated : 04/08/2021 # Supported resources for metric alerts in Azure Monitor
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Resource type |Dimensions Supported |Multi-resource alerts| Metrics Available| |||--|-|
-|Microsoft.Aadiam/azureADMetrics | Yes | No | |
+|Microsoft.Aadiam/azureADMetrics | Yes | No | [Azure AD](../essentials/metrics-supported.md#microsoftaadiamazureadmetrics) |
|Microsoft.ApiManagement/service | Yes | No | [API Management](../essentials/metrics-supported.md#microsoftapimanagementservice) | |Microsoft.AppConfiguration/configurationStores |Yes | No | [App Configuration](../essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) |
-|Microsoft.AppPlatform/Spring | Yes | No | [Azure Spring Cloud](../essentials/metrics-supported.md#microsoftappplatformspring) |
+|Microsoft.AppPlatform/spring | Yes | No | [Azure Spring Cloud](../essentials/metrics-supported.md#microsoftappplatformspring) |
|Microsoft.Automation/automationAccounts | Yes| No | [Automation Accounts](../essentials/metrics-supported.md#microsoftautomationautomationaccounts) | |Microsoft.AVS/privateClouds | No | No | [Azure VMware Solution](../essentials/metrics-supported.md#microsoftavsprivateclouds) | |Microsoft.Batch/batchAccounts | Yes | No | [Batch Accounts](../essentials/metrics-supported.md#microsoftbatchbatchaccounts) | |Microsoft.BotService/botServices | Yes | No | [Bot Services](../essentials/metrics-supported.md#microsoftbotservicebotservices) |
-|Microsoft.Cache/Redis | Yes | Yes | [Azure Cache for Redis](../essentials/metrics-supported.md#microsoftcacheredis) |
+|Microsoft.Cache/redis | Yes | Yes | [Azure Cache for Redis](../essentials/metrics-supported.md#microsoftcacheredis) |
+|microsoft.Cdn/profiles | Yes | No | [CDN Profiles](../essentials/metrics-supported.md#microsoftcdnprofiles) |
|Microsoft.ClassicCompute/domainNames/slots/roles | No | No | [Classic Cloud Services](../essentials/metrics-supported.md#microsoftclassiccomputedomainnamesslotsroles) | |Microsoft.ClassicCompute/virtualMachines | No | No | [Classic Virtual Machines](../essentials/metrics-supported.md#microsoftclassiccomputevirtualmachines) | |Microsoft.ClassicStorage/storageAccounts | Yes | No | [Storage Accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccounts) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Compute/cloudServices | Yes | No | [Cloud Services](../essentials/metrics-supported.md#microsoftcomputecloudservices) | |Microsoft.Compute/cloudServices/roles | Yes | No | [Cloud Service Roles](../essentials/metrics-supported.md#microsoftcomputecloudservicesroles) | |Microsoft.Compute/virtualMachines | Yes | Yes<sup>1</sup> | [Virtual Machines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines) |
-|Microsoft.Compute/virtualMachineScaleSets | Yes | No |[Virtual machine scale sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) |
-|Microsoft.ContainerInstance/containerGroups | Yes| No | [Container groups](../essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) |
+|Microsoft.Compute/virtualMachineScaleSets | Yes | No |[Virtual Machine Scale Sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) |
+|Microsoft.ContainerInstance/containerGroups | Yes| No | [Container Groups](../essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) |
|Microsoft.ContainerRegistry/registries | No | No | [Container Registries](../essentials/metrics-supported.md#microsoftcontainerregistryregistries) | |Microsoft.ContainerService/managedClusters | Yes | No | [Managed Clusters](../essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) | |Microsoft.DataBoxEdge/dataBoxEdgeDevices | Yes | Yes | [Data Box](../essentials/metrics-supported.md#microsoftdataboxedgedataboxedgedevices) |
azure-monitor Tutorial App Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/tutorial-app-dashboards.md
A single dashboard can contain resources from multiple applications, resource gr
5. Locate the **Markdown** tile and drag it on to your dashboard. This tile allows you to add text formatted in markdown, which is ideal for adding descriptive text to your dashboard. To learn more, see [Use a markdown tile on Azure dashboards to show custom content](../../azure-portal/azure-portal-markdown-tile.md). 6. Add text to the tile's properties and resize it on the dashboard canvas.
- [![Edit markdown tile](media/tutorial-app-dashboards/markdown.png)](media/tutorial-app-dashboards/dashboard-edit-mode.png#lightbox)
+ [![Edit markdown tile](media/tutorial-app-dashboards/markdown.png)](media/tutorial-app-dashboards/markdown.png#lightbox)
7. Select **Done customizing** at the top of the screen to exit tile customization mode.
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-schema.md
Title: Azure Resource Logs supported services and schemas description: Understand the supported services and event schema for Azure resource logs. Previously updated : 09/01/2020 Last updated : 04/07/2021 # Common and service-specific schema for Azure Resource Logs > [!NOTE] > Resource logs were previously known as diagnostic logs. The name was changed in October 2019 as the types of logs gathered by Azure Monitor shifted to include more than just the Azure resource.
-> Also, the list of resource log categories you could collect used to be listed in this article. They were moved to [Resource log categories](resource-logs-categories.md).
+> Also, the list of resource log categories you can collect was previously in this article. It has moved to [Resource log categories](resource-logs-categories.md).
[Azure Monitor resource logs](../essentials/platform-logs-overview.md) are logs emitted by Azure services that describe the operation of those services or resources. All resource logs available through Azure Monitor share a common top-level schema, with flexibility for each service to emit unique properties for their own events.
The schema for resource logs varies depending on the resource and log category.
| Azure Database for MySQL | [Azure Database for MySQL diagnostic logs](../../mysql/concepts-server-logs.md#diagnostic-logs) | | Azure Database for PostgreSQL | [Azure Database for PostgreSQL logs](../../postgresql/concepts-server-logs.md#resource-logs) | | Azure Databricks | [Diagnostic logging in Azure Databricks](/azure/databricks/administration-guide/account-settings/azure-diagnostic-logs) |
+| DDoS Protection | [Logging for Azure DDoS Protection Standard](../../ddos-protection/diagnostic-logging.md#log-schemas) |
| Azure Digital Twins | [Set up Azure Digital Twins Diagnostics](../../digital-twins/troubleshoot-diagnostics.md#log-schemas) | Event Hubs |[Azure Event Hubs logs](../../event-hubs/event-hubs-diagnostic-logs.md) | | Express Route | Schema not available. |
The schema for resource logs varies depending on the resource and log category.
| Load Balancer |[Log analytics for Azure Load Balancer](../../load-balancer/load-balancer-monitor-log.md) | | Logic Apps |[Logic Apps B2B custom tracking schema](../../logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md) | | Network Security Groups |[Log analytics for network security groups (NSGs)](../../virtual-network/virtual-network-nsg-manage-log.md) |
-| DDoS Protection | [Logging for Azure DDoS Protection Standard](../../ddos-protection/diagnostic-logging.md#log-schemas) |
| Power BI Dedicated | [Logging for Power BI Embedded in Azure](/power-bi/developer/azure-pbie-diag-logs) | | Recovery Services | [Data Model for Azure Backup](../../backup/backup-azure-reports-data-model.md)| | Search |[Enabling and using Search Traffic Analytics](../../search/search-traffic-analytics.md) | | Service Bus |[Azure Service Bus logs](../../service-bus-messaging/service-bus-diagnostic-logs.md) | | SQL Database | [Azure SQL Database logging](../../azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure.md) | | Stream Analytics |[Job logs](../../stream-analytics/stream-analytics-job-diagnostic-logs.md) |
+| Storage | [Blobs](/azure/storage/blobs/monitor-blob-storage-reference#resource-logs-preview), [Files](/azure/storage/files/storage-files-monitoring-reference#resource-logs-preview), [Queues](/azure/storage/queues/monitor-queue-storage-reference#resource-logs-preview), [Tables](/azure/storage/tables/monitor-table-storage-reference#resource-logs-preview) |
| Traffic Manager | [Traffic Manager Log schema](../../traffic-manager/traffic-manager-diagnostic-logs.md) | | Virtual Networks | Schema not available. | | Virtual Network Gateways | Schema not available. |
The schema for resource logs varies depending on the resource and log category.
* [Learn more about resource logs](../essentials/platform-logs-overview.md) * [Stream resource logs to **Event Hubs**](./resource-logs.md#send-to-azure-event-hubs) * [Change resource log diagnostic settings using the Azure Monitor REST API](/rest/api/monitor/diagnosticsettings)
-* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
+* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
azure-monitor Powershell Workspace Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/powershell-workspace-configuration.md
The following sample script creates a workspace with no data source configuratio
```powershell $ResourceGroup = "my-resource-group"
-$WorkspaceName = "log-analytics-" + (Get-Random -Maximum 99999) # workspace names need to be unique across all Azure subscriptions - Get-Random helps with this for the example code
+$WorkspaceName = "log-analytics-" + (Get-Random -Maximum 99999) # workspace names need to be unique within the resource group - Get-Random helps with this for the example code
$Location = "westeurope" # Create the resource group if needed
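# A hedged sketch of the remaining steps (assumes the Az modules and an authenticated session):
if (-not (Get-AzResourceGroup -Name $ResourceGroup -ErrorAction SilentlyContinue)) {
    New-AzResourceGroup -Name $ResourceGroup -Location $Location
}
# PerGB2018 is the pay-as-you-go pricing tier
New-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroup -Name $WorkspaceName -Location $Location -Sku PerGB2018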
This script performs the following functions:
```powershell $ResourceGroup = "my-resource-group"
-$WorkspaceName = "log-analytics-" + (Get-Random -Maximum 99999) # workspace names need to be unique across all Azure subscriptions - Get-Random helps with this for the example code
+$WorkspaceName = "log-analytics-" + (Get-Random -Maximum 99999) # workspace names need to be unique within the resource group - Get-Random helps with this for the example code
$Location = "westeurope" # Create the resource group if needed
In the above example regexDelimiter was defined as "\\n" for newline. The log de
## Troubleshooting When you create a workspace that was deleted in the last 14 days and is in [soft-delete state](../logs/delete-workspace.md#soft-delete-behavior), the operation could have a different outcome depending on your workspace configuration: 1. If you provide the same workspace name, resource group, subscription, and region as the deleted workspace, your workspace will be recovered, including its data, configuration, and connected agents.
-2. If you use the same workspace name, but different resource group, subscription or region, you will get an error *The workspace name 'workspace-name' is not unique*, or *conflict*. To override the soft-delete and permanently delete your workspace and create a new workspace with the same name, follow these steps to recover the workspace first and perform permanent delete:
+2. Workspace names must be unique per resource group. If you use a workspace name that already exists, including one in soft-delete state in your resource group, you will get the error *The workspace name 'workspace-name' is not unique*, or *conflict*. To override the soft-delete, permanently delete your workspace, and create a new workspace with the same name, follow these steps to recover the workspace first and then perform a permanent delete (a scripted sketch follows these steps):
* [Recover](../logs/delete-workspace.md#recover-workspace) your workspace * [Permanently delete](../logs/delete-workspace.md#permanent-workspace-delete) your workspace * Create a new workspace using the same workspace name
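A hedged sketch of that recover-then-purge sequence with the Az.OperationalInsights cmdlets (the resource names are placeholders):

```powershell
# Recreating the workspace with its original name, resource group, and region
# recovers it from soft-delete together with its data and connected agents
New-AzOperationalInsightsWorkspace -ResourceGroupName "my-resource-group" -Name "workspace-name" -Location "westeurope"

# -ForceDelete purges the workspace immediately instead of soft-deleting it
Remove-AzOperationalInsightsWorkspace -ResourceGroupName "my-resource-group" -Name "workspace-name" -ForceDelete
```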
azure-monitor Quick Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/quick-create-workspace-cli.md
The following parameters set a default value:
2. Edit the template to meet your requirements. Review [Microsoft.OperationalInsights/workspaces template](/azure/templates/microsoft.operationalinsights/2015-11-01-preview/workspaces) reference to learn what properties and values are supported. 3. Save this file as **deploylaworkspacetemplate.json** to a local folder.
-4. You are ready to deploy this template. Use the following commands from the folder containing the template. When you're prompted for a workspace name, provide a name that is globally unique across all Azure subscriptions.
+4. You are ready to deploy this template. Use the following commands from the folder containing the template. When you're prompted for a workspace name, provide a name that is unique in your resource group.
```azurecli az deployment group create --resource-group <my-resource-group> --name <my-deployment-name> --template-file deploylaworkspacetemplate.json
The deployment can take a few minutes to complete. When it finishes, you see a m
## Troubleshooting When you create a workspace that was deleted in the last 14 days and is in [soft-delete state](../logs/delete-workspace.md#soft-delete-behavior), the operation could have a different outcome depending on your workspace configuration: 1. If you provide the same workspace name, resource group, subscription, and region as the deleted workspace, your workspace will be recovered, including its data, configuration, and connected agents.
-2. If you use the same workspace name, but different resource group, subscription or region, you will get an error *The workspace name 'workspace-name' is not unique*, or *conflict*. To override the soft-delete and permanently delete your workspace and create a new workspace with the same name, follow these steps to recover the workspace first and perform permanent delete:
+2. Workspace names must be unique per resource group. If you use a workspace name that already exists, including one in soft-delete state in your resource group, you will get the error *The workspace name 'workspace-name' is not unique*, or *conflict*. To override the soft-delete, permanently delete your workspace, and create a new workspace with the same name, follow these steps to recover the workspace first and then perform a permanent delete:
* [Recover](../logs/delete-workspace.md#recover-workspace) your workspace * [Permanently delete](../logs/delete-workspace.md#permanent-workspace-delete) your workspace * Create a new workspace using the same workspace name
azure-monitor Quick Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/quick-create-workspace.md
Click **Add**, and then provide values for the following options:
* Select a **Subscription** to link to by selecting from the drop-down list if the default selected is not appropriate. * For **Resource Group**, choose to use an existing resource group already setup or create a new one.
- * Provide a name for the new **Log Analytics workspace**, such as *DefaultLAWorkspace*. This name must be globally unique across all Azure Monitor subscriptions.
+ * Provide a name for the new **Log Analytics workspace**, such as *DefaultLAWorkspace*. This name must be unique per resource group.
* Select an available **Region**. For more information, see which [regions Log Analytics is available in](https://azure.microsoft.com/regions/services/) and search for Azure Monitor from the **Search for a product** field.
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group or subscription. Previously updated : 01/11/2021 Last updated : 04/08/2021 # Move operation support for resources
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 04/06/2021 Last updated : 04/08/2021 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | galleries / images / versions | image | 32-bit integer | Numbers and periods. | > | images | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End with alphanumeric or underscore. | > | snapshots | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End with alphanumeric or underscore. |
-> | virtualMachines | resource group | 1-15 (Windows)<br>1-64 (Linux)<br><br>See note below. | Can't use space or these characters:<br> `\/"'[]:|<>+=;,?*@&_`<br><br>Windows VMs can't include period or end with hyphen.<br><br>Linux VMs can't end with period or hyphen. |
-> | virtualMachineScaleSets | resource group | 1-15 (Windows)<br>1-64 (Linux)<br><br>See note below. | Can't use space or these characters:<br> `\/"'[]:|<>+=;,?*@&`<br><br>Can't start with underscore. Can't end with period or hyphen. |
+> | virtualMachines | resource group | 1-15 (Windows)<br>1-64 (Linux)<br><br>See note below. | Can't use space or these characters:<br> `~ ! @ # $ % ^ & * ( ) = + _ [ ] { } \ | ; : . ' " , < > / ?`<br><br>Windows VMs can't include period or end with hyphen.<br><br>Linux VMs can't end with period or hyphen. |
+> | virtualMachineScaleSets | resource group | 1-15 (Windows)<br>1-64 (Linux)<br><br>See note below. | Can't use space or these characters:<br> `~ ! @ # $ % ^ & * ( ) = + _ [ ] { } \ | ; : . ' " , < > / ?`<br><br>Can't start with underscore. Can't end with period or hyphen. |
> [!NOTE] > Azure virtual machines have two distinct names: resource name and host name. When you create a virtual machine in the portal, the same value is used for both names. The restrictions in the preceding table are for the host name. The actual resource name can have up to 64 characters.
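For script-based validation, a rough PowerShell sketch (illustrative only, not from the article) of the Windows host-name rules in the table above might look like this:

```powershell
# Rough check of a candidate Windows VM host name against the rules above
function Test-WindowsVmName {
    param([string]$Name)
    # disallowed characters for VM names, plus space; Windows also disallows the period
    $invalid = [regex]'[~!@#$%^&*()=+_\[\]{}\\|;:.''",<>/? ]'
    ($Name.Length -ge 1 -and $Name.Length -le 15) -and
        -not $invalid.IsMatch($Name) -and
        -not $Name.EndsWith('-')
}
Test-WindowsVmName 'web-vm-01'   # True
Test-WindowsVmName 'web.vm'      # False - period not allowed on Windows
```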
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
By default, you can deploy up to 800 instances of a resource type in each resour
For some resource types, you need to contact support to have the 800 instance limit removed. Those resource types are noted in this article.
+## Microsoft.AlertsManagement
+
+* smartDetectorAlertRules
+
## Microsoft.Automation * automationAccounts
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 10/21/2020 Last updated : 04/08/2021 # Tag support for Azure resources
Jump to a resource provider namespace:
> - [Microsoft.AgFoodPlatform](#microsoftagfoodplatform) > - [Microsoft.AlertsManagement](#microsoftalertsmanagement) > - [Microsoft.AnalysisServices](#microsoftanalysisservices)
+> - [Microsoft.AnyBuild](#microsoftanybuild)
> - [Microsoft.ApiManagement](#microsoftapimanagement)
+> - [Microsoft.AppAssessment](#microsoftappassessment)
> - [Microsoft.AppConfiguration](#microsoftappconfiguration) > - [Microsoft.AppPlatform](#microsoftappplatform) > - [Microsoft.Attestation](#microsoftattestation)
Jump to a resource provider namespace:
> - [Microsoft.AVS](#microsoftavs) > - [Microsoft.Azure.Geneva](#microsoftazuregeneva) > - [Microsoft.AzureActiveDirectory](#microsoftazureactivedirectory)
+> - [Microsoft.AzureArcData](#microsoftazurearcdata)
+> - [Microsoft.AzureCIS](#microsoftazurecis)
> - [Microsoft.AzureData](#microsoftazuredata)
+> - [Microsoft.AzureSphere](#microsoftazuresphere)
> - [Microsoft.AzureStack](#microsoftazurestack) > - [Microsoft.AzureStackHCI](#microsoftazurestackhci) > - [Microsoft.BareMetalInfrastructure](#microsoftbaremetalinfrastructure)
Jump to a resource provider namespace:
> - [Microsoft.BotService](#microsoftbotservice) > - [Microsoft.Cache](#microsoftcache) > - [Microsoft.Capacity](#microsoftcapacity)
+> - [Microsoft.Cascade](#microsoftcascade)
> - [Microsoft.Cdn](#microsoftcdn) > - [Microsoft.CertificateRegistration](#microsoftcertificateregistration) > - [Microsoft.ChangeAnalysis](#microsoftchangeanalysis)
Jump to a resource provider namespace:
> - [Microsoft.ClassicInfrastructureMigrate](#microsoftclassicinfrastructuremigrate) > - [Microsoft.ClassicNetwork](#microsoftclassicnetwork) > - [Microsoft.ClassicStorage](#microsoftclassicstorage)
+> - [Microsoft.ClusterStor](#microsoftclusterstor)
> - [Microsoft.Codespaces](#microsoftcodespaces) > - [Microsoft.CognitiveServices](#microsoftcognitiveservices) > - [Microsoft.Commerce](#microsoftcommerce) > - [Microsoft.Compute](#microsoftcompute) > - [Microsoft.ConnectedCache](#microsoftconnectedcache)
+> - [Microsoft.ConnectedVehicle](#microsoftconnectedvehicle)
+> - [Microsoft.ConnectedVMwarevSphere](#microsoftconnectedvmwarevsphere)
> - [Microsoft.Consumption](#microsoftconsumption) > - [Microsoft.ContainerInstance](#microsoftcontainerinstance) > - [Microsoft.ContainerRegistry](#microsoftcontainerregistry)
Jump to a resource provider namespace:
> - [Microsoft.DocumentDB](#microsoftdocumentdb) > - [Microsoft.DomainRegistration](#microsoftdomainregistration) > - [Microsoft.DynamicsLcs](#microsoftdynamicslcs)
+> - [Microsoft.EdgeOrder](#microsoftedgeorder)
> - [Microsoft.EnterpriseKnowledgeGraph](#microsoftenterpriseknowledgegraph) > - [Microsoft.EventGrid](#microsofteventgrid) > - [Microsoft.EventHub](#microsofteventhub)
Jump to a resource provider namespace:
> - [Microsoft.HanaOnAzure](#microsofthanaonazure) > - [Microsoft.HardwareSecurityModules](#microsofthardwaresecuritymodules) > - [Microsoft.HDInsight](#microsofthdinsight)
+> - [Microsoft.HealthBot](#microsofthealthbot)
> - [Microsoft.HealthcareApis](#microsofthealthcareapis) > - [Microsoft.HybridCompute](#microsofthybridcompute) > - [Microsoft.HybridData](#microsofthybriddata)
Jump to a resource provider namespace:
> - [Microsoft.Insights](#microsoftinsights) > - [Microsoft.Intune](#microsoftintune) > - [Microsoft.IoTCentral](#microsoftiotcentral)
+> - [Microsoft.IoTSecurity](#microsoftiotsecurity)
> - [Microsoft.IoTSpaces](#microsoftiotspaces) > - [Microsoft.KeyVault](#microsoftkeyvault) > - [Microsoft.Kubernetes](#microsoftkubernetes)
Jump to a resource provider namespace:
> - [Microsoft.Microservices4Spring](#microsoftmicroservices4spring) > - [Microsoft.Migrate](#microsoftmigrate) > - [Microsoft.MixedReality](#microsoftmixedreality)
+> - [Microsoft.MobileNetwork](#microsoftmobilenetwork)
> - [Microsoft.NetApp](#microsoftnetapp) > - [Microsoft.Network](#microsoftnetwork) > - [Microsoft.Notebooks](#microsoftnotebooks)
Jump to a resource provider namespace:
> - [Microsoft.Portal](#microsoftportal) > - [Microsoft.PowerBI](#microsoftpowerbi) > - [Microsoft.PowerBIDedicated](#microsoftpowerbidedicated)
+> - [Microsoft.PowerPlatform](#microsoftpowerplatform)
> - [Microsoft.ProjectBabylon](#microsoftprojectbabylon) > - [Microsoft.ProviderHub](#microsoftproviderhub)
+> - [Microsoft.Purview](#microsoftpurview)
> - [Microsoft.Quantum](#microsoftquantum) > - [Microsoft.RecoveryServices](#microsoftrecoveryservices) > - [Microsoft.RedHatOpenShift](#microsoftredhatopenshift) > - [Microsoft.Relay](#microsoftrelay)
+> - [Microsoft.ResourceConnector](#microsoftresourceconnector)
> - [Microsoft.ResourceGraph](#microsoftresourcegraph) > - [Microsoft.ResourceHealth](#microsoftresourcehealth) > - [Microsoft.Resources](#microsoftresources)
Jump to a resource provider namespace:
> - [Microsoft.ServiceBus](#microsoftservicebus) > - [Microsoft.ServiceFabric](#microsoftservicefabric) > - [Microsoft.ServiceFabricMesh](#microsoftservicefabricmesh)
+> - [Microsoft.ServiceLinker](#microsoftservicelinker)
> - [Microsoft.Services](#microsoftservices) > - [Microsoft.SignalRService](#microsoftsignalrservice) > - [Microsoft.Singularity](#microsoftsingularity)
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | farmBeats | Yes | Yes |
+> | farmBeats / eventGridFilters | No | No |
## Microsoft.AlertsManagement
Jump to a resource provider namespace:
> | alertsMetaData | No | No | > | alertsSummary | No | No | > | alertsSummaryList | No | No |
+> | migrateFromSmartDetection | No | No |
+> | resourceHealthAlertRules | Yes | Yes |
> | smartDetectorAlertRules | Yes | Yes | > | smartGroups | No | No |
Jump to a resource provider namespace:
> | - | -- | -- | > | servers | Yes | Yes |
+## Microsoft.AnyBuild
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | clusters | Yes | Yes |
+ ## Microsoft.ApiManagement > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | deletedServices | No | No |
+> | getDomainOwnershipIdentifier | No | No |
> | reportFeedback | No | No | > | service | Yes | Yes | > | validateServiceName | No | No |
Jump to a resource provider namespace:
> [!NOTE] > Azure API Management only supports creating a maximum of 15 tag name/value pairs for each service.
+## Microsoft.AppAssessment
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | migrateProjects | Yes | Yes |
+> | migrateProjects / assessments | No | No |
+> | migrateProjects / assessments / assessedApplications | No | No |
+> | migrateProjects / assessments / assessedApplications / machines | No | No |
+> | migrateProjects / assessments / assessedMachines | No | No |
+> | migrateProjects / assessments / assessedMachines / applications | No | No |
+> | migrateProjects / assessments / machinesToAssess | No | No |
+> | migrateProjects / sites | No | No |
+> | migrateProjects / sites / applianceConfigurations | No | No |
+> | migrateProjects / sites / machines | No | No |
+> | osVersions | No | No |
+ ## Microsoft.AppConfiguration > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | accessReviewScheduleSettings | No | No | > | classicAdministrators | No | No | > | dataAliases | No | No |
+> | dataPolicyManifests | No | No |
> | denyAssignments | No | No | > | elevateAccess | No | No | > | findOrphanRoleAssignments | No | No |
Jump to a resource provider namespace:
> | privateLinkAssociations | No | No | > | providerOperations | No | No | > | resourceManagementPrivateLinks | Yes | Yes |
+> | roleAssignmentApprovals | No | No |
> | roleAssignments | No | No |
+> | roleAssignmentScheduleInstances | No | No |
+> | roleAssignmentScheduleRequests | No | No |
+> | roleAssignmentSchedules | No | No |
> | roleAssignmentsUsageMetrics | No | No | > | roleDefinitions | No | No |
+> | roleEligibilityScheduleInstances | No | No |
+> | roleEligibilityScheduleRequests | No | No |
+> | roleEligibilitySchedules | No | No |
+> | roleManagementPolicies | No | No |
+> | roleManagementPolicyAssignments | No | No |
## Microsoft.Automanage
Jump to a resource provider namespace:
> | privateClouds | Yes | Yes | > | privateClouds / addons | No | No | > | privateClouds / authorizations | No | No |
+> | privateClouds / cloudLinks | No | No |
> | privateClouds / clusters | No | No |
+> | privateClouds / clusters / datastores | No | No |
> | privateClouds / globalReachConnections | No | No | > | privateClouds / hcxEnterpriseSites | No | No |
+> | privateClouds / scriptExecutions | No | No |
+> | privateClouds / scriptPackages | No | No |
+> | privateClouds / scriptPackages / scriptCmdlets | No | No |
> | privateClouds / workloadNetworks | No | No | > | privateClouds / workloadNetworks / dhcpConfigurations | No | No |
+> | privateClouds / workloadNetworks / dnsServices | No | No |
+> | privateClouds / workloadNetworks / dnsZones | No | No |
> | privateClouds / workloadNetworks / gateways | No | No | > | privateClouds / workloadNetworks / portMirroringProfiles | No | No | > | privateClouds / workloadNetworks / segments | No | No |
Jump to a resource provider namespace:
> | b2ctenants | No | No | > | guestUsages | Yes | Yes |
-## Microsoft.AzureData
+## Microsoft.AzureArcData
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | dataControllers | Yes | Yes |
+> | dataWarehouseInstances | Yes | Yes |
> | postgresInstances | Yes | Yes | > | sqlManagedInstances | Yes | Yes | > | sqlServerInstances | Yes | Yes |+
+## Microsoft.AzureCIS
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | autopilotEnvironments | Yes | Yes |
+
+## Microsoft.AzureData
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
> | sqlServerRegistrations | Yes | Yes | > | sqlServerRegistrations / sqlServers | No | No |
+## Microsoft.AzureSphere
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | catalogs | Yes | Yes |
+> | catalogs / products | Yes | Yes |
+ ## Microsoft.AzureStack > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | clusters | Yes | Yes |
+> | galleryImages | Yes | Yes |
+> | networkInterfaces | Yes | Yes |
+> | virtualHardDisks | Yes | Yes |
+> | virtualMachines | Yes | Yes |
+> | virtualNetworks | Yes | Yes |
## Microsoft.BareMetalInfrastructure
Jump to a resource provider namespace:
> | billingAccounts / billingProfiles / invoiceSections / products / updateAutoRenew | No | No | > | billingAccounts / billingProfiles / invoiceSections / transactions | No | No | > | billingAccounts / billingProfiles / invoiceSections / transfers | No | No |
+> | billingAccounts / billingProfiles / invoiceSections / validateDeleteInvoiceSectionEligibility | No | No |
> | billingAccounts / BillingProfiles / patchOperations | No | No | > | billingAccounts / billingProfiles / paymentMethods | No | No | > | billingAccounts / billingProfiles / policies | No | No |
Jump to a resource provider namespace:
> | billingAccounts / billingProfiles / products | No | No | > | billingAccounts / billingProfiles / reservations | No | No | > | billingAccounts / billingProfiles / transactions | No | No |
+> | billingAccounts / billingProfiles / validateDeleteBillingProfileEligibility | No | No |
> | billingAccounts / billingProfiles / validateDetachPaymentMethodEligibility | No | No | > | billingAccounts / billingRoleAssignments | No | No | > | billingAccounts / billingRoleDefinitions | No | No | > | billingAccounts / billingSubscriptions | No | No |
+> | billingAccounts / billingSubscriptions / elevateRole | No | No |
> | billingAccounts / billingSubscriptions / invoices | No | No | > | billingAccounts / createBillingRoleAssignment | No | No | > | billingAccounts / createInvoiceSectionOperations | No | No |
Jump to a resource provider namespace:
> | billingAccounts / departments / billingPermissions | No | No | > | billingAccounts / departments / billingRoleAssignments | No | No | > | billingAccounts / departments / billingRoleDefinitions | No | No |
+> | billingAccounts / departments / billingSubscriptions | No | No |
> | billingAccounts / enrollmentAccounts | No | No | > | billingAccounts / enrollmentAccounts / billingPermissions | No | No | > | billingAccounts / enrollmentAccounts / billingRoleAssignments | No | No | > | billingAccounts / enrollmentAccounts / billingRoleDefinitions | No | No |
+> | billingAccounts / enrollmentAccounts / billingSubscriptions | No | No |
> | billingAccounts / invoices | No | No | > | billingAccounts / invoices / transactions | No | No |
+> | billingAccounts / invoices / transactionSummary | No | No |
> | billingAccounts / invoiceSections | No | No | > | billingAccounts / invoiceSections / billingSubscriptionMoveOperations | No | No | > | billingAccounts / invoiceSections / billingSubscriptions | No | No |
Jump to a resource provider namespace:
> | billingAccounts / invoiceSections / transfers | No | No | > | billingAccounts / lineOfCredit | No | No | > | billingAccounts / patchOperations | No | No |
+> | billingAccounts / payableOverage | No | No |
> | billingAccounts / paymentMethods | No | No |
+> | billingAccounts / payNow | No | No |
> | billingAccounts / products | No | No | > | billingAccounts / reservations | No | No | > | billingAccounts / transactions | No | No |
Jump to a resource provider namespace:
> | departments | No | No | > | enrollmentAccounts | No | No | > | invoices | No | No |
+> | promotions | No | No |
> | transfers | No | No | > | transfers / acceptTransfer | No | No | > | transfers / declineTransfer | No | No |
Jump to a resource provider namespace:
> | botServices | Yes | Yes | > | botServices / channels | No | No | > | botServices / connections | No | No |
+> | hostSettings | No | No |
> | languages | No | No | > | templates | No | No |
Jump to a resource provider namespace:
> | Redis / privateEndpointConnections | No | No | > | Redis / privateLinkResources | No | No | > | redisEnterprise | Yes | Yes |
+> | redisEnterprise / databases | No | No |
> | RedisEnterprise / privateEndpointConnectionProxies | No | No | > | RedisEnterprise / privateEndpointConnectionProxies / validate | No | No | > | RedisEnterprise / privateEndpointConnections | No | No |
Jump to a resource provider namespace:
> | resources | No | No | > | validateReservationOrder | No | No |
+## Microsoft.Cascade
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | sites | Yes | Yes |
+ ## Microsoft.Cdn > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | CdnWebApplicationFirewallPolicies | Yes | Yes | > | edgenodes | No | No | > | profiles | Yes | Yes |
+> | profiles / afdendpoints | Yes | Yes |
+> | profiles / afdendpoints / routes | No | No |
+> | profiles / customdomains | No | No |
> | profiles / endpoints | Yes | Yes | > | profiles / endpoints / customdomains | No | No | > | profiles / endpoints / origingroups | No | No | > | profiles / endpoints / origins | No | No |
+> | profiles / origingroups | No | No |
+> | profiles / origingroups / origins | No | No |
+> | profiles / rulesets | No | No |
+> | profiles / rulesets / rules | No | No |
+> | profiles / secrets | No | No |
+> | profiles / securitypolicies | No | No |
> | validateProbe | No | No | ## Microsoft.CertificateRegistration
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | changes | No | No |
> | profile | No | No | > | resourceChanges | No | No |
Jump to a resource provider namespace:
> | storageAccounts / vmImages | No | No | > | vmImages | No | No |
+## Microsoft.ClusterStor
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | nodes | Yes | Yes |
+ ## Microsoft.Codespaces > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | - | -- | -- | > | CacheNodes | Yes | Yes |
+## Microsoft.ConnectedVehicle
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | platformAccounts | Yes | Yes |
+> | registeredSubscriptions | No | No |
+
+## Microsoft.ConnectedVMwarevSphere
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | ResourcePools | Yes | Yes |
+> | VCenters | Yes | Yes |
+> | VCenters / InventoryItems | No | No |
+> | VirtualMachines | Yes | Yes |
+> | VirtualMachines / Extensions | Yes | Yes |
+> | VirtualMachines / GuestAgents | No | No |
+> | VirtualMachines / HybridIdentityMetadata | No | No |
+> | VirtualMachineTemplates | Yes | Yes |
+> | VirtualNetworks | Yes | Yes |
+ ## Microsoft.Consumption > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | registries / builds / getLogLink | No | No | > | registries / buildTasks | Yes | Yes | > | registries / buildTasks / steps | No | No |
+> | registries / connectedRegistries | No | No |
+> | registries / connectedRegistries / deactivate | No | No |
> | registries / eventGridFilters | No | No | > | registries / exportPipelines | No | No | > | registries / generateCredentials | No | No |
Jump to a resource provider namespace:
> | - | -- | -- | > | containerServices | Yes | Yes | > | managedClusters | Yes | Yes |
+> | ManagedClusters / eventGridFilters | No | No |
> | openShiftManagedClusters | Yes | Yes | ## Microsoft.CostManagement
Jump to a resource provider namespace:
> | ExternalSubscriptions / Dimensions | No | No | > | ExternalSubscriptions / Forecast | No | No | > | ExternalSubscriptions / Query | No | No |
+> | fetchPrices | No | No |
> | Forecast | No | No |
+> | GenerateDetailedCostReport | No | No |
+> | GenerateReservationDetailsReport | No | No |
> | Insights | No | No | > | Query | No | No | > | register | No | No | > | Reportconfigs | No | No | > | Reports | No | No |
+> | ScheduledActions | No | No |
> | Settings | No | No | > | showbackRules | No | No | > | Views | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | DisableLockbox | No | No |
+> | EnableLockbox | No | No |
> | requests | No | No |
+> | TenantOptedIn | No | No |
## Microsoft.CustomProviders
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | DatabaseMigrations | No | No |
> | services | No | No | > | services / projects | No | No |
+> | SqlMigrationServices | Yes | Yes |
## Microsoft.DataProtection
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | BackupVaults | Yes | Yes |
-> | ResourceOperationGateKeepers | Yes | Yes |
+> | ResourceGuards | Yes | Yes |
## Microsoft.DataShare
Jump to a resource provider namespace:
> | servers / privateLinkResources | No | No | > | servers / queryTexts | No | No | > | servers / recoverableServers | No | No |
+> | servers / resetQueryPerformanceInsightData | No | No |
> | servers / start | No | No | > | servers / stop | No | No | > | servers / topQueryStatistics | No | No |
Jump to a resource provider namespace:
> | servers / privateLinkResources | No | No | > | servers / queryTexts | No | No | > | servers / recoverableServers | No | No |
+> | servers / resetQueryPerformanceInsightData | No | No |
> | servers / start | No | No | > | servers / stop | No | No | > | servers / topQueryStatistics | No | No |
Jump to a resource provider namespace:
> | - | -- | -- | > | flexibleServers | Yes | Yes | > | serverGroups | Yes | Yes |
+> | serverGroupsv2 | Yes | Yes |
> | servers | Yes | Yes | > | servers / advisors | No | No | > | servers / keys | No | No |
Jump to a resource provider namespace:
> | servers / privateLinkResources | No | No | > | servers / queryTexts | No | No | > | servers / recoverableServers | No | No |
+> | servers / resetQueryPerformanceInsightData | No | No |
> | servers / topQueryStatistics | No | No | > | servers / virtualNetworkRules | No | No | > | servers / waitStatistics | No | No |
Jump to a resource provider namespace:
> | hostpools / sessionhosts | No | No | > | hostpools / sessionhosts / usersessions | No | No | > | hostpools / usersessions | No | No |
+> | scalingPlans | Yes | Yes |
> | workspaces | Yes | Yes | ## Microsoft.Devices
Jump to a resource provider namespace:
> | - | -- | -- | > | accounts | Yes | Yes | > | accounts / instances | Yes | Yes |
+> | registeredSubscriptions | No | No |
## Microsoft.DevOps
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | cassandraClusters | Yes | Yes |
> | databaseAccountNames | No | No | > | databaseAccounts | Yes | Yes | > | restorableDatabaseAccounts | No | No |
Jump to a resource provider namespace:
> | lcsprojects / clouddeployments | No | No | > | lcsprojects / connectors | No | No |
+## Microsoft.EdgeOrder
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | addresses | Yes | Yes |
+> | orderCollections | Yes | Yes |
+> | orders | Yes | Yes |
+> | productFamiliesMetadata | No | No |
+ ## Microsoft.EnterpriseKnowledgeGraph > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | featureConfigurations | No | No |
+> | featureProviderNamespaces | No | No |
> | featureProviders | No | No | > | features | No | No | > | providers | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | clusterPools | Yes | Yes |
+> | clusterPools / clusters | Yes | Yes |
> | clusters | Yes | Yes | > | clusters / applications | No | No |
+## Microsoft.HealthBot
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | healthBots | Yes | Yes |
+ ## Microsoft.HealthcareApis > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | services / privateEndpointConnectionProxies | No | No | > | services / privateEndpointConnections | No | No | > | services / privateLinkResources | No | No |
+> | workspaces | Yes | Yes |
+> | workspaces / dicomservices | Yes | Yes |
## Microsoft.HybridCompute
Jump to a resource provider namespace:
> | machines / assessPatches | No | No | > | machines / extensions | Yes | Yes | > | machines / installPatches | No | No |
+> | machines / privateLinkScopes | No | No |
+> | privateLinkScopes | Yes | Yes |
+> | privateLinkScopes / privateEndpointConnectionProxies | No | No |
+> | privateLinkScopes / privateEndpointConnections | No | No |
## Microsoft.HybridData
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | devices | Yes | Yes |
-> | networkFunctions | Yes | Yes |
+> | networkfunctions | Yes | Yes |
> | networkFunctionVendors | No | No | > | registeredSubscriptions | No | No |
-> | vendors | No | No |
-> | vendors / vendorSkus | No | No |
-> | vendors / vendorSkus / previewSubscriptions | No | No |
+> | Vendors | No | No |
+> | Vendors / vendorskus | No | No |
+> | Vendors / vendorskus / previewsubscriptions | No | No |
> | virtualNetworkFunctions | Yes | Yes | > | virtualNetworkFunctionVendors | No | No |
Jump to a resource provider namespace:
> | appTemplates | No | No | > | IoTApps | Yes | Yes |
+## Microsoft.IoTSecurity
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | defenderSettings | No | No |
+ ## Microsoft.IoTSpaces > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | deletedManagedHSMs | No | No |
> | deletedVaults | No | No | > | hsmPools | Yes | Yes | > | managedHSMs | Yes | Yes |
Jump to a resource provider namespace:
> | clusters / databases / dataconnections | No | No | > | clusters / databases / eventhubconnections | No | No | > | clusters / databases / principalassignments | No | No |
+> | clusters / databases / scripts | No | No |
> | clusters / dataconnections | No | No | > | clusters / principalassignments | No | No | > | clusters / sharedidentities | No | No |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | labaccounts | Yes | No |
+> | labplans | Yes | Yes |
+> | labs | Yes | Yes |
> | users | No | No | ## Microsoft.Logic
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | modelinventories | Yes | Yes |
+> | virtualclusters | Yes | Yes |
> | workspaces | Yes | Yes | > | workspaces / batchEndpoints | Yes | Yes | > | workspaces / batchEndpoints / deployments | Yes | Yes |
+> | workspaces / batchEndpoints / deployments / jobs | No | No |
+> | workspaces / batchEndpoints / jobs | No | No |
> | workspaces / codes | No | No | > | workspaces / codes / versions | No | No | > | workspaces / computes | No | No |
+> | workspaces / data | No | No |
> | workspaces / datastores | No | No |
+> | workspaces / environments | No | No |
> | workspaces / eventGridFilters | No | No | > | workspaces / jobs | No | No | > | workspaces / labelingJobs | No | No |
Jump to a resource provider namespace:
> | workspaces / models / versions | No | No | > | workspaces / onlineEndpoints | Yes | Yes | > | workspaces / onlineEndpoints / deployments | Yes | Yes |
-
+ > [!NOTE]
-> Workspace tags don't propagate to compute clusters and compute instances.
+> Workspace tags don't propagate to compute clusters and compute instances.
## Microsoft.Maintenance
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | accounts | Yes | Yes |
+> | accounts / creators | Yes | Yes |
> | accounts / eventGridFilters | No | No | > | accounts / privateAtlases | Yes | Yes |
Jump to a resource provider namespace:
> | privategalleryitems | No | No | > | privateStoreClient | No | No | > | privateStores | No | No |
+> | privateStores / AdminRequestApprovals | No | No |
> | privateStores / offers | No | No |
+> | privateStores / offers / acknowledgeNotification | No | No |
+> | privateStores / queryNotificationsState | No | No |
+> | privateStores / RequestApprovals | No | No |
+> | privateStores / requestApprovals / query | No | No |
+> | privateStores / requestApprovals / withdrawPlan | No | No |
> | products | No | No | > | publishers | No | No | > | publishers / offers | No | No |
Jump to a resource provider namespace:
> | mediaservices / assets / assetFilters | No | No | > | mediaservices / contentKeyPolicies | No | No | > | mediaservices / eventGridFilters | No | No |
+> | mediaservices / graphInstances | No | No |
+> | mediaservices / graphTopologies | No | No |
> | mediaservices / liveEventOperations | No | No | > | mediaservices / liveEvents | Yes | Yes | > | mediaservices / liveEvents / liveOutputs | No | No |
Jump to a resource provider namespace:
> | mediaservices / streamingPolicies | No | No | > | mediaservices / transforms | No | No | > | mediaservices / transforms / jobs | No | No |
+> | videoAnalyzers | Yes | Yes |
+> | videoAnalyzers / edgeModules | No | No |
## Microsoft.Microservices4Spring
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | holographicsBroadcastAccounts | Yes | Yes |
+> | objectAnchorsAccounts | Yes | Yes |
> | objectUnderstandingAccounts | Yes | Yes | > | remoteRenderingAccounts | Yes | Yes | > | spatialAnchorsAccounts | Yes | Yes |
+## Microsoft.MobileNetwork
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | networks | Yes | Yes |
+> | networks / sites | Yes | Yes |
+> | packetCores | Yes | Yes |
+> | sims | Yes | Yes |
+> | sims / simProfiles | Yes | Yes |
+ ## Microsoft.NetApp > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | netAppAccounts / capacityPools | Yes | No | > | netAppAccounts / capacityPools / volumes | Yes | No | > | netAppAccounts / capacityPools / volumes / snapshots | No | No |
+> | netAppAccounts / volumeGroups | No | No |
## Microsoft.Network
Jump to a resource provider namespace:
> | networkSecurityGroups | Yes | Yes | > | networkWatchers | Yes | Yes | > | networkWatchers / connectionMonitors | Yes | No |
-> | networkWatchers / flowLogs | No | No |
+> | networkWatchers / flowLogs | Yes | No |
> | networkWatchers / lenses | Yes | No | > | networkWatchers / pingMeshes | Yes | No | > | p2sVpnGateways | Yes | Yes |
Jump to a resource provider namespace:
> | clusters | Yes | Yes | > | deletedWorkspaces | No | No | > | linkTargets | No | No |
+> | querypacks | Yes | Yes |
> | storageInsightConfigs | No | No | > | workspaces | Yes | Yes | > | workspaces / dataExports | No | No |
Jump to a resource provider namespace:
> | workspaces / metadata | No | No | > | workspaces / query | No | No | > | workspaces / scopedPrivateLinkProxies | No | No |
+> | workspaces / storageInsightConfigs | No | No |
+> | workspaces / tables | No | No |
## Microsoft.OperationsManagement
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | cdnPeeringPrefixes | No | No |
> | legacyPeerings | No | No | > | peerAsns | No | No | > | peerings | Yes | Yes |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | attestations | No | No |
+> | eventGridFilters | No | No |
> | policyEvents | No | No | > | policyMetadata | No | No | > | policyStates | No | No |
Jump to a resource provider namespace:
> | - | -- | -- | > | consoles | No | No | > | dashboards | Yes | Yes |
+> | tenantconfigurations | No | No |
> | userSettings | No | No | ## Microsoft.PowerBI
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | autoScaleVCores | Yes | Yes |
> | capacities | Yes | Yes |
+## Microsoft.PowerPlatform
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | enterprisePolicies | Yes | Yes |
+ ## Microsoft.ProjectBabylon > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | providerRegistrations | No | No |
+> | providerRegistrations / customRollouts | No | No |
> | providerRegistrations / defaultRollouts | No | No | > | providerRegistrations / resourceTypeRegistrations | No | No |
-> | rollouts | Yes | Yes |
+
+## Microsoft.Purview
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | accounts | Yes | Yes |
+> | deletedAccounts | No | No |
+> | getDefaultAccount | No | No |
+> | removeDefaultAccount | No | No |
+> | setDefaultAccount | No | No |
## Microsoft.Quantum
Jump to a resource provider namespace:
> | namespaces / wcfrelays | No | No | > | namespaces / wcfrelays / authorizationrules | No | No |
+## Microsoft.ResourceConnector
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | appliances | Yes | Yes |
+ ## Microsoft.ResourceGraph > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | events | No | No | > | impactedResources | No | No | > | metadata | No | No |
-> | notifications | No | No |
## Microsoft.Resources > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | calculateTemplateHash | No | No |
> | deployments | Yes | No | > | deployments / operations | No | No | > | deploymentScripts | Yes | Yes | > | deploymentScripts / logs | No | No | > | links | No | No |
-> | notifyResourceJobs | No | No |
> | providers | No | No | > | resourceGroups | Yes | No | > | subscriptions | Yes | No |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | applications | Yes | Yes |
+> | resources | Yes | Yes |
> | saasresources | No | No | ## Microsoft.ScVmm
Jump to a resource provider namespace:
> | Compliances | No | No | > | connectors | No | No | > | dataCollectionAgents | No | No |
+> | devices | No | No |
> | deviceSecurityGroups | No | No | > | discoveredSecuritySolutions | No | No | > | externalSecuritySolutions | No | No | > | InformationProtectionPolicies | No | No |
+> | ingestionSettings | No | No |
+> | insights | No | No |
+> | iotAlerts | No | No |
+> | iotAlertTypes | No | No |
> | iotDefenderSettings | No | No |
+> | iotRecommendations | No | No |
+> | iotRecommendationTypes | No | No |
> | iotSecuritySolutions | Yes | Yes | > | iotSecuritySolutions / analyticsModels | No | No | > | iotSecuritySolutions / analyticsModels / aggregatedAlerts | No | No |
Jump to a resource provider namespace:
> | iotSecuritySolutions / iotRecommendations | No | No | > | iotSecuritySolutions / iotRecommendationTypes | No | No | > | iotSensors | No | No |
+> | iotSites | No | No |
> | jitNetworkAccessPolicies | No | No | > | jitPolicies | No | No |
+> | onPremiseIotSensors | No | No |
> | policies | No | No | > | pricings | No | No | > | regulatoryComplianceStandards | No | No |
Jump to a resource provider namespace:
> | cases | No | No | > | dataConnectors | No | No | > | dataConnectorsCheckRequirements | No | No |
+> | enrichment | No | No |
> | entities | No | No | > | entityQueries | No | No |
+> | entityQueryTemplates | No | No |
> | incidents | No | No | > | officeConsents | No | No | > | settings | No | No |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | consoleServices | No | No |
+> | serialPorts | No | No |
## Microsoft.ServiceBus
Jump to a resource provider namespace:
> | edgeclusters | Yes | Yes | > | edgeclusters / applications | No | No | > | managedclusters | Yes | Yes |
+> | managedclusters / applications | No | No |
+> | managedclusters / applications / services | No | No |
+> | managedclusters / applicationTypes | No | No |
+> | managedclusters / applicationTypes / versions | No | No |
> | managedclusters / nodetypes | No | No | > | networks | Yes | Yes | > | secretstores | Yes | Yes |
Jump to a resource provider namespace:
> | secrets | Yes | Yes | > | volumes | Yes | Yes |
+## Microsoft.ServiceLinker
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | linkers | No | No |
+ ## Microsoft.Services > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | - | -- | -- | > | SignalR | Yes | Yes | > | SignalR / eventGridFilters | No | No |
+> | WebPubSub | Yes | Yes |
## Microsoft.Singularity
Jump to a resource provider namespace:
> | accounts / groupPolicies | No | No | > | accounts / jobs | No | No | > | accounts / storageContainers | No | No |
+> | images | No | No |
## Microsoft.SoftwarePlan
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | longtermRetentionManagedInstance / longtermRetentionDatabase / longtermRetentionBackup | No | No |
+> | longtermRetentionServer / longtermRetentionDatabase / longtermRetentionBackup | No | No |
> | managedInstances | Yes | Yes | > | managedInstances / databases | No | No | > | managedInstances / databases / backupShortTermRetentionPolicies | No | No |
Jump to a resource provider namespace:
> | managedInstances / keys | No | No | > | managedInstances / restorableDroppedDatabases / backupShortTermRetentionPolicies | No | No | > | managedInstances / vulnerabilityAssessments | No | No |
-> | longtermRetentionManagedInstance / longtermRetentionDatabase / longtermRetentionBackup | No | No |
> | servers | Yes | Yes | > | servers / administrators | No | No | > | servers / communicationLinks | No | No |
Jump to a resource provider namespace:
> | servers / restorableDroppedDatabases | No | No | > | servers / serviceobjectives | No | No | > | servers / tdeCertificates | No | No |
-> | longtermRetentionServer / longtermRetentionDatabase / longtermRetentionBackup | No | No |
> | virtualClusters | No | No | <a id="sqlnote"></a>
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | amlFilesystems | Yes | Yes |
> | caches | Yes | Yes | > | caches / storageTargets | No | No | > | usageModels | No | No |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | acceptChangeTenant | No | No |
+> | acceptOwnership | No | No |
+> | acceptOwnershipStatus | No | No |
> | aliases | No | No | > | cancel | No | No | > | changeTenantRequest | No | No | > | changeTenantStatus | No | No | > | CreateSubscription | No | No | > | enable | No | No |
+> | policies | No | No |
> | rename | No | No | > | SubscriptionDefinitions | No | No | > | SubscriptionOperations | No | No |
Jump to a resource provider namespace:
> | environments | Yes | No | > | environments / accessPolicies | No | No | > | environments / eventsources | Yes | No |
+> | environments / privateEndpointConnectionProxies | No | No |
+> | environments / privateEndpointConnections | No | No |
+> | environments / privateLinkResources | No | No |
> | environments / referenceDataSets | Yes | No | ## Microsoft.Token
Jump to a resource provider namespace:
> | ArcZones | Yes | Yes | > | ResourcePools | Yes | Yes | > | VCenters | Yes | Yes |
-> | VirtualMachines | Yes | Yes |
+> | VCenters / InventoryItems | No | No |
+> | virtualmachines | Yes | Yes |
> | VirtualMachineTemplates | Yes | Yes | > | VirtualNetworks | Yes | Yes |
Jump to a resource provider namespace:
> | connections | Yes | Yes | > | customApis | Yes | Yes | > | deletedSites | No | No |
+> | functionAppStacks | No | No |
+> | generateGithubAccessTokenForAppserviceCLI | No | No |
> | hostingEnvironments | Yes | Yes | > | hostingEnvironments / eventGridFilters | No | No | > | hostingEnvironments / multiRolePools | No | No |
Jump to a resource provider namespace:
> | staticSites | Yes | Yes | > | validate | No | No | > | verifyHostingEnvironmentVnet | No | No |
+> | webAppStacks | No | No |
## Microsoft.WindowsDefenderATP
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | migrationAgents | Yes | Yes |
> | workloads | Yes | Yes | > | workloads / instances | No | No | > | workloads / versions | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | components | No | No |
-> | componentsSummary | No | No |
-> | monitorInstances | No | No |
-> | monitorInstancesSummary | No | No |
> | monitors | No | No |
-> | notificationSettings | No | No |
## Next steps
azure-resource-manager Bicep Operators Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-operators-comparison.md
+
+ Title: Bicep comparison operators
+description: Describes Bicep comparison operators that compare values.
+ Last updated : 04/07/2021
+# Bicep comparison operators
+
+The comparison operators compare values and return either `true` or `false`. To run the examples, use Azure CLI or Azure PowerShell to [deploy a Bicep file](bicep-tutorial-create-first-bicep.md#deploy-bicep-file).
+
+| Operator | Name |
+| - | - |
+| `>=` | [Greater than or equal](#greater-than-or-equal-) |
+| `>` | [Greater than](#greater-than-) |
+| `<=` | [Less than or equal](#less-than-or-equal-) |
+| `<` | [Less than](#less-than-) |
+| `==` | [Equals](#equals-) |
+| `!=` | [Not equal](#not-equal-) |
+| `=~` | [Equal case-insensitive](#equal-case-insensitive-) |
+| `!~` | [Not equal case-insensitive](#not-equal-case-insensitive-) |
+
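+As a minimal sketch of running these examples, assuming the Azure CLI with Bicep support installed, an existing resource group named `exampleGroup`, and the example saved as `main.bicep` (both names are illustrative):
+
+```azurecli
+az deployment group create \
+  --resource-group exampleGroup \
+  --template-file main.bicep
+```
+
+The output values appear in the `outputs` section of the deployment result.
+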
+## Greater than or equal >=
+
+`operand1 >= operand2`
+
+Evaluates if the first value is greater than or equal to the second value.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | integer, string | First value in the comparison. |
+| `operand2` | integer, string | Second value in the comparison. |
+
+### Return value
+
+If the first value is greater than or equal to the second value, `true` is returned. Otherwise, `false` is returned.
+
+### Example
+
+A pair of integers and a pair of strings are compared.
+
+```bicep
+param firstInt int = 10
+param secondInt int = 5
+
+param firstString string = 'A'
+param secondString string = 'A'
+
+output intGtE bool = firstInt >= secondInt
+output stringGtE bool = firstString >= secondString
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `intGtE` | boolean | true |
+| `stringGtE` | boolean | true |
+
+## Greater than >
+
+`operand1 > operand2`
+
+Evaluates if the first value is greater than the second value.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | integer, string | First value in the comparison. |
+| `operand2` | integer, string | Second value in the comparison. |
+
+### Return value
+
+If the first value is greater than the second value, `true` is returned. Otherwise, `false` is returned.
+
+### Example
+
+A pair of integers and a pair of strings are compared.
+
+```bicep
+param firstInt int = 10
+param secondInt int = 5
+
+param firstString string = 'bend'
+param secondString string = 'band'
+
+output intGt bool = firstInt > secondInt
+output stringGt bool = firstString > secondString
+```
+
+Output from the example:
+
+The **e** in **bend** makes the first string greater.
+
+| Name | Type | Value |
+| - | - | - |
+| `intGt` | boolean | true |
+| `stringGt` | boolean | true |
+
+## Less than or equal <=
+
+`operand1 <= operand2`
+
+Evaluates if the first value is less than or equal to the second value.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | integer, string | First value in the comparison. |
+| `operand2` | integer, string | Second value in the comparison. |
+
+### Return value
+
+If the first value is less than or equal to the second value, `true` is returned. Otherwise, `false` is returned.
+
+### Example
+
+A pair of integers and a pair of strings are compared.
+
+```bicep
+param firstInt int = 5
+param secondInt int = 10
+
+param firstString string = 'demo'
+param secondString string = 'demo'
+
+output intLtE bool = firstInt <= secondInt
+output stringLtE bool = firstString <= secondString
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `intLtE` | boolean | true |
+| `stringLtE` | boolean | true |
+
+## Less than <
+
+`operand1 < operand2`
+
+Evaluates if the first value is less than the second value.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | integer, string | First value in the comparison. |
+| `operand2` | integer, string | Second value in the comparison. |
+
+### Return value
+
+If the first value is less than the second value, `true` is returned. Otherwise, `false` is returned.
+
+### Example
+
+A pair of integers and a pair of strings are compared.
+
+```bicep
+param firstInt int = 5
+param secondInt int = 10
+
+param firstString string = 'demo'
+param secondString string = 'Demo'
+
+output intLt bool = firstInt < secondInt
+output stringLt bool = firstString < secondString
+```
+
+Output from the example:
+
+The string comparison returns `true` because lowercase letters evaluate as less than uppercase letters.
+
+| Name | Type | Value |
+| - | - | - |
+| `intLt` | boolean | true |
+| `stringLt` | boolean | true |
+
+## Equals ==
+
+`operand1 == operand2`
+
+Evaluates if the values are equal.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | string, integer, boolean, array, object | First value in the comparison. |
+| `operand2` | string, integer, boolean, array, object | Second value in the comparison. |
+
+### Return value
+
+If the operands are equal, `true` is returned. If the operands are different, `false` is returned.
+
+### Example
+
+Pairs of integers, strings, and booleans are compared.
+
+```bicep
+param firstInt int = 5
+param secondInt int = 5
+
+param firstString string = 'demo'
+param secondString string = 'demo'
+
+param firstBool bool = true
+param secondBool bool = true
+
+output intEqual bool = firstInt == secondInt
+output stringEqual bool = firstString == secondString
+output boolEqual bool = firstBool == secondBool
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `intEqual` | boolean | true |
+| `stringEqual` | boolean | true |
+| `boolEqual` | boolean | true |
+
+## Not equal !=
+
+`operand1 != operand2`
+
+Evaluates if two values are **not** equal.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | string, integer, boolean, array, object | First value in the comparison. |
+| `operand2` | string, integer, boolean, array, object | Second value in the comparison. |
+
+### Return value
+
+If the operands are **not** equal, `true` is returned. If the operands are equal, `false` is returned.
+
+### Example
+
+Pairs of integers, strings, and booleans are compared.
+
+```bicep
+param firstInt int = 10
+param secondInt int = 5
+
+param firstString string = 'demo'
+param secondString string = 'test'
+
+param firstBool bool = false
+param secondBool bool = true
+
+output intNotEqual bool = firstInt != secondInt
+output stringNotEqual bool = firstString != secondString
+output boolNotEqual bool = firstBool != secondBool
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `intNotEqual` | boolean | true |
+| `stringNotEqual` | boolean | true |
+| `boolNotEqual` | boolean | true |
+
+## Equal case-insensitive =~
+
+`operand1 =~ operand2`
+
+Ignores case to determine if the two values are equal.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | string | First string in the comparison. |
+| `operand2` | string | Second string in the comparison. |
+
+### Return value
+
+If the strings are equal, `true` is returned. Otherwise, `false` is returned.
+
+### Example
+
+Compares strings that use mixed-case letters.
+
+```bicep
+param firstString string = 'demo'
+param secondString string = 'DEMO'
+
+param thirdString string = 'demo'
+param fourthString string = 'TEST'
+
+output strEqual1 bool = firstString =~ secondString
+output strEqual2 bool = thirdString =~ fourthString
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `strEqual1` | boolean | true |
+| `strEqual2` | boolean | false |
+
+## Not equal case-insensitive !~
+
+`operand1 !~ operand2`
+
+Ignores case to determine if the two values are **not** equal.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | string | First string in the comparison. |
+| `operand2` | string | Second string in the comparison. |
+
+### Return value
+
+If the strings are **not** equal, `true` is returned. Otherwise, `false` is returned.
+
+### Example
+
+Compares strings that use mixed-case letters.
+
+```bicep
+param firstString string = 'demo'
+param secondString string = 'TEST'
+
+param thirdString string = 'demo'
+param fourthString string = 'DeMo'
+
+output strNotEqual1 bool = firstString !~ secondString
+output strNotEqual2 bool = thirdString !~ fourthString
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `strNotEqual1` | boolean | true |
+| `strNotEqual2` | boolean | false |
+
+## Next steps
+
+- To create a Bicep file, see [Tutorial: Create and deploy first Azure Resource Manager Bicep file](bicep-tutorial-create-first-bicep.md).
+- For information about how to resolve Bicep type errors, see [Any function for Bicep](template-functions-any.md).
+- To compare syntax for Bicep and JSON, see [Comparing JSON and Bicep for templates](compare-template-syntax.md).
+- For examples of Bicep and ARM template functions, see [ARM template functions](template-functions.md).
azure-resource-manager Bicep Operators Logical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-operators-logical.md
+
+ Title: Bicep logical operators
+description: Describes Bicep logical operators that evaluate conditions.
+ Last updated : 04/07/2021
+# Bicep logical operators
+
+The logical operators evaluate boolean values, return non-null values, or evaluate a conditional expression. To run the examples, use Azure CLI or Azure PowerShell to [deploy a Bicep file](bicep-tutorial-create-first-bicep.md#deploy-bicep-file).
+
+| Operator | Name |
+| - | - |
+| `&&` | [And](#and-) |
+| `\|\|` | [Or](#or-) |
+| `!` | [Not](#not-) |
+| `??` | [Coalesce](#coalesce-) |
+| `?` `:` | [Conditional expression](#conditional-expression--) |
+
+## And &&
+
+`operand1 && operand2`
+
+Determines if both values are true.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | boolean | The first value to check if true. |
+| `operand2` | boolean | The second value to check if true. |
+| More operands | boolean | More operands can be included. |
+
+### Return value
+
+Returns `true` when both values are true. Otherwise, `false` is returned.
+
+### Example
+
+Evaluates a set of parameter values and a set of expressions.
+
+```bicep
+param operand1 bool = true
+param operand2 bool = true
+
+output andResultParm bool = operand1 && operand2
+output andResultExp bool = bool(10 >= 10) && bool(5 > 2)
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `andResultParm` | boolean | true |
+| `andResultExp` | boolean | true |
+
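+As a minimal sketch that's not part of the original example, more than two operands can be chained, and the same applies to `||`:
+
+```bicep
+// Evaluates to false because one operand is false.
+output threeAnds bool = true && true && false
+```
+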
+## Or ||
+
+`operand1 || operand2`
+
+Determines if either value is true.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | boolean | The first value to check if true. |
+| `operand2` | boolean | The second value to check if true. |
+| More operands | boolean | More operands can be included. |
+
+### Return value
+
+Returns `true` when either value is true. Otherwise, `false` is returned.
+
+### Example
+
+Evaluates a set of parameter values and a set of expressions.
+
+```bicep
+param operand1 bool = true
+param operand2 bool = false
+
+output orResultParm bool = operand1 || operand2
+output orResultExp bool = bool(10 >= 10) || bool(5 < 2)
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `orResultParm` | boolean | true |
+| `orResultExp` | boolean | true |
+
+## Not !
+
+`!boolValue`
+
+Negates a boolean value.
+
+### Operand
+
+| Operand | Type | Description |
+| - | - | - |
+| `boolValue` | boolean | Boolean value that's negated. |
+
+### Return value
+
+Negates the initial value and returns a boolean. If the initial value is `true`, then `false` is returned. If the initial value is `false`, then `true` is returned.
+
+### Example
+
+The `not` operator negates a value. The values can be wrapped with parentheses.
+
+```bicep
+param initTrue bool = true
+param initFalse bool = false
+
+output startedTrue bool = !(initTrue)
+output startedFalse bool = !initFalse
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `startedTrue` | boolean | false |
+| `startedFalse` | boolean | true |
+
+## Coalesce ??
+
+`operand1 ?? operand2`
+
+Returns the first non-null value from the operands.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | string, integer, boolean, object, array | Value to test for `null`. |
+| `operand2` | string, integer, boolean, object, array | Value to test for `null`. |
+| More operands | string, integer, boolean, object, array | Value to test for `null`. |
+
+### Return value
+
+Returns the first non-null value. Empty strings, empty arrays, and empty objects aren't `null`, so an \<empty> value can be returned.
+
+### Example
+
+The output statements return the non-null values. The output type must match the type in the comparison or an error is generated.
+
+```bicep
+param myObject object = {
+ 'isnull1': null
+ 'isnull2': null
+ 'string': 'demoString'
+ 'emptystr': ''
+ 'integer': 10
+ }
+
+output nonNullStr string = myObject.isnull1 ?? myObject.string ?? myObject.isnull2
+output nonNullInt int = myObject.isnull1 ?? myObject.integer ?? myObject.isnull2
+output nonNullEmpty string = myObject.isnull1 ?? myObject.emptystr ?? myObject.string ?? myObject.isnull2
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `nonNullStr` | string | demoString |
+| `nonNullInt` | int | 10 |
+| `nonNullEmpty` | string | \<empty> |
+
+## Conditional expression ? :
+
+`condition ? true-value : false-value`
+
+Evaluates a condition and returns one of two values, depending on whether the condition is true or false.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `condition` | boolean | Condition to evaluate as true or false. |
+| `true-value` | string, integer, boolean, object, array | Value when condition is true. |
+| `false-value` | string, integer, boolean, object, array | Value when condition is false. |
+
+### Example
+
+This example evaluates a parameter's initial value and returns one of two values, depending on whether the condition is true or false.
+
+```bicep
+param initValue bool = true
+
+output outValue string = initValue ? 'true value' : 'false value'
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `outValue` | string | true value |
+
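+As a practical sketch with illustrative names and values, a conditional expression is often used to select configuration values, such as a storage SKU:
+
+```bicep
+param environment string = 'prod'
+
+// Use the premium SKU only for production deployments.
+var skuName = (environment == 'prod') ? 'Premium_LRS' : 'Standard_LRS'
+
+output chosenSku string = skuName
+```
+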
+## Next steps
+
+- To create a Bicep file, see [Tutorial: Create and deploy first Azure Resource Manager Bicep file](bicep-tutorial-create-first-bicep.md).
+- For information about how to resolve Bicep type errors, see [Any function for Bicep](template-functions-any.md).
+- To compare syntax for Bicep and JSON, see [Comparing JSON and Bicep for templates](compare-template-syntax.md).
+- For examples of Bicep and ARM template functions, see [ARM template functions](template-functions.md).
azure-resource-manager Bicep Operators Numeric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-operators-numeric.md
+
+ Title: Bicep numeric operators
+description: Describes Bicep numeric operators that calculate values.
+ Last updated : 04/07/2021
+# Bicep numeric operators
+
+The numeric operators use integers to do calculations and return integer values. To run the examples, use Azure CLI or Azure PowerShell to [deploy a Bicep file](bicep-tutorial-create-first-bicep.md#deploy-bicep-file).
+
+| Operator | Name |
+| - | - |
+| `*` | [Multiply](#multiply-) |
+| `/` | [Divide](#divide-) |
+| `%` | [Modulo](#modulo-) |
+| `+` | [Add](#add-) |
+| `-` | [Subtract](#subtract--) |
+| `-` | [Minus](#minus--) |
+
+> [!NOTE]
+> Subtract and minus use the same operator. The functionality is different because subtract uses two
+> operands and minus uses one operand.
+
+## Multiply *
+
+`operand1 * operand2`
+
+Multiplies two integers.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | integer | Number to multiply. |
+| `operand2` | integer | Multiplier of the number. |
+
+### Return value
+
+The multiplication returns the product as an integer.
+
+### Example
+
+Two integers are multiplied, and the product is returned.
+
+```bicep
+param firstInt int = 5
+param secondInt int = 2
+
+output product int = firstInt * secondInt
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `product` | integer | 10 |
+
+## Divide /
+
+`operand1 / operand2`
+
+Divides an integer by an integer.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | integer | Integer that's divided. |
+| `operand2` | integer | Integer that's used for division. Can't be zero. |
+
+### Return value
+
+The division returns the quotient as an integer.
+
+### Example
+
+Two integers are divided, and the quotient is returned.
+
+```bicep
+param firstInt int = 10
+param secondInt int = 2
+
+output quotient int = firstInt / secondInt
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `quotient` | integer | 5 |
+
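+As an extra minimal sketch that's not in the original example, a division that isn't exact discards the fractional part because only integers are returned:
+
+```bicep
+// 7 / 2 evaluates to 3; the fractional part is discarded.
+output truncated int = 7 / 2
+```
+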
+## Modulo %
+
+`operand1 % operand2`
+
+Divides an integer by an integer and returns the remainder.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | integer | The integer that's divided. |
+| `operand2` | integer | The integer that's used for division. Can't be 0. |
+
+### Return value
+
+The remainder is returned as an integer. If the division doesn't produce a remainder, 0 is returned.
+
+### Example
+
+Two pairs of integers are divided, and the remainders are returned.
+
+```bicep
+param firstInt int = 10
+param secondInt int = 3
+
+param thirdInt int = 8
+param fourthInt int = 4
+
+output remainder int = firstInt % secondInt
+output zeroRemainder int = thirdInt % fourthInt
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `remainder` | integer | 1 |
+| `zeroRemainder` | integer | 0 |
+
+## Add +
+
+`operand1 + operand2`
+
+Adds two integers.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | integer | Number to add. |
+| `operand2` | integer | Number that's added to the first number. |
+
+### Return value
+
+The addition returns the sum as an integer.
+
+### Example
+
+Two integers are added, and the sum is returned.
+
+```bicep
+param firstInt int = 10
+param secondInt int = 2
+
+output sum int = firstInt + secondInt
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `sum` | integer | 12 |
+
+## Subtract -
+
+`operand1 - operand2`
+
+Subtracts an integer from an integer.
+
+### Operands
+
+| Operand | Type | Description |
+| - | - | - |
+| `operand1` | integer | Number to subtract from. |
+| `operand2` | integer | Number that's subtracted. |
+
+### Return value
+
+The subtraction returns the difference as an integer.
+
+### Example
+
+An integer is subtracted from another, and the difference is returned.
+
+```bicep
+param firstInt int = 10
+param secondInt int = 4
+
+output difference int = firstInt - secondInt
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `difference` | integer | 6 |
+
+## Minus -
+
+`-integerValue`
+
+Multiplies an integer by `-1`.
+
+### Operand
+
+| Operand | Type | Description |
+| - | - | - |
+| `integerValue` | integer | Integer multiplied by `-1`. |
+
+### Return value
+
+The integer is multiplied by `-1`. A positive integer returns a negative integer, and a negative integer returns a positive integer. The values can be wrapped with parentheses.
+
+### Example
+
+```bicep
+param posInt int = 10
+param negInt int = -20
+
+output startedPositive int = -posInt
+output startedNegative int = -(negInt)
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `startedPositive` | integer | -10 |
+| `startedNegative` | integer | 20 |
+
+## Next steps
+
+- To create a Bicep file, see [Tutorial: Create and deploy first Azure Resource Manager Bicep file](bicep-tutorial-create-first-bicep.md).
+- For information about how to resolve Bicep type errors, see [Any function for Bicep](template-functions-any.md).
+- To compare syntax for Bicep and JSON, see [Comparing JSON and Bicep for templates](compare-template-syntax.md).
+- For examples of Bicep and ARM template functions, see [ARM template functions](template-functions.md).
azure-resource-manager Bicep Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-operators.md
+
+ Title: Bicep operators
+description: Describes the Bicep operators available for Azure Resource Manager deployments.
+ Last updated : 04/07/2021
+# Bicep operators
+
+This article describes the Bicep operators that are available when you create a Bicep template and use Azure Resource Manager to deploy resources. Operators are used to calculate values, compare values, or evaluate conditions. There are three types of Bicep operators: [comparison](#comparison), [logical](#logical), and [numeric](#numeric).
+
+## Comparison
+
+The comparison operators compare values and return either `true` or `false`.
+
+| Operator | Name | Description |
+| - | - | - |
+| `>=` | [Greater than or equal](bicep-operators-comparison.md#greater-than-or-equal-) | Evaluates if the first value is greater than or equal to the second value. |
+| `>` | [Greater than](bicep-operators-comparison.md#greater-than-) | Evaluates if the first value is greater than the second value. |
+| `<=` | [Less than or equal](bicep-operators-comparison.md#less-than-or-equal-) | Evaluates if the first value is less than or equal to the second value. |
+| `<` | [Less than](bicep-operators-comparison.md#less-than-) | Evaluates if the first value is less than the second value. |
+| `==` | [Equals](bicep-operators-comparison.md#equals-) | Evaluates if two values are equal. |
+| `!=` | [Not equal](bicep-operators-comparison.md#not-equal-) | Evaluates if two values are **not** equal. |
+| `=~` | [Equal case-insensitive](bicep-operators-comparison.md#equal-case-insensitive-) | Ignores case to determine if two values are equal. |
+| `!~` | [Not equal case-insensitive](bicep-operators-comparison.md#not-equal-case-insensitive-) | Ignores case to determine if two values are **not** equal. |
+
+## Logical
+
+The logical operators evaluate boolean values, return non-null values, or evaluate a conditional expression.
+
+| Operator | Name | Description |
+| - | - | - |
+| `&&` | [And](bicep-operators-logical.md#and-) | Returns `true` if all values are true. |
+| `\|\|` | [Or](bicep-operators-logical.md#or-) | Returns `true` if either value is true. |
+| `!` | [Not](bicep-operators-logical.md#not-) | Negates a boolean value. |
+| `??` | [Coalesce](bicep-operators-logical.md#coalesce-) | Returns the first non-null value. |
+| `?` `:` | [Conditional expression](bicep-operators-logical.md#conditional-expression--) | Evaluates a condition for true or false and returns a value. |
+
+## Numeric
+
+The numeric operators use integers to do calculations and return integer values.
+
+| Operator | Name | Description |
+| - | - | - |
+| `*` | [Multiply](bicep-operators-numeric.md#multiply-) | Multiplies two integers. |
+| `/` | [Divide](bicep-operators-numeric.md#divide-) | Divides an integer by an integer. |
+| `%` | [Modulo](bicep-operators-numeric.md#modulo-) | Divides an integer by an integer and returns the remainder. |
+| `+` | [Add](bicep-operators-numeric.md#add-) | Adds two integers. |
+| `-` | [Subtract](bicep-operators-numeric.md#subtract--) | Subtracts an integer from an integer. |
+| `-` | [Minus](bicep-operators-numeric.md#minus--) | Multiplies an integer by `-1`. |
+
+> [!NOTE]
+> Subtract and minus use the same operator. The functionality is different because subtract uses two
+> operands and minus uses one operand.
+
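+The following minimal sketch, which isn't taken from the linked articles, shows one operator from each category working together:
+
+```bicep
+param instanceCount int = 4
+
+// Numeric: double the count. Comparison: test a threshold. Logical: choose a label.
+var doubled = instanceCount * 2
+var isLarge = doubled >= 8
+
+output sizeLabel string = isLarge ? 'large' : 'small'
+```
+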
+## Next steps
+
+- To create a Bicep file, see [Tutorial: Create and deploy first Azure Resource Manager Bicep file](bicep-tutorial-create-first-bicep.md).
+- For information about how to resolve Bicep type errors, see [Any function for Bicep](template-functions-any.md).
+- To compare syntax for Bicep and JSON, see [Comparing JSON and Bicep for templates](compare-template-syntax.md).
+- For examples of Bicep and ARM template functions, see [ARM template functions](template-functions.md).
azure-resource-manager Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 10/21/2020 Last updated : 04/08/2021 # Deletion of Azure resources for complete mode deployments
The resources are listed by resource provider namespace. To match a resource pro
> [!NOTE] > Always use the [what-if operation](template-deploy-what-if.md) before deploying a template in complete mode. What-if shows you which resources will be created, deleted, or modified. Use what-if to avoid unintentionally deleting resources.+ Jump to a resource provider namespace: > [!div class="op_single_selector"] > - [Microsoft.AAD](#microsoftaad)
Jump to a resource provider namespace:
> - [Microsoft.AgFoodPlatform](#microsoftagfoodplatform) > - [Microsoft.AlertsManagement](#microsoftalertsmanagement) > - [Microsoft.AnalysisServices](#microsoftanalysisservices)
+> - [Microsoft.AnyBuild](#microsoftanybuild)
> - [Microsoft.ApiManagement](#microsoftapimanagement)
+> - [Microsoft.AppAssessment](#microsoftappassessment)
> - [Microsoft.AppConfiguration](#microsoftappconfiguration) > - [Microsoft.AppPlatform](#microsoftappplatform) > - [Microsoft.Attestation](#microsoftattestation)
Jump to a resource provider namespace:
> - [Microsoft.AVS](#microsoftavs) > - [Microsoft.Azure.Geneva](#microsoftazuregeneva) > - [Microsoft.AzureActiveDirectory](#microsoftazureactivedirectory)
+> - [Microsoft.AzureArcData](#microsoftazurearcdata)
+> - [Microsoft.AzureCIS](#microsoftazurecis)
> - [Microsoft.AzureData](#microsoftazuredata)
+> - [Microsoft.AzureSphere](#microsoftazuresphere)
> - [Microsoft.AzureStack](#microsoftazurestack) > - [Microsoft.AzureStackHCI](#microsoftazurestackhci) > - [Microsoft.BareMetalInfrastructure](#microsoftbaremetalinfrastructure)
Jump to a resource provider namespace:
> - [Microsoft.BotService](#microsoftbotservice) > - [Microsoft.Cache](#microsoftcache) > - [Microsoft.Capacity](#microsoftcapacity)
+> - [Microsoft.Cascade](#microsoftcascade)
> - [Microsoft.Cdn](#microsoftcdn) > - [Microsoft.CertificateRegistration](#microsoftcertificateregistration) > - [Microsoft.ChangeAnalysis](#microsoftchangeanalysis)
Jump to a resource provider namespace:
> - [Microsoft.ClassicInfrastructureMigrate](#microsoftclassicinfrastructuremigrate) > - [Microsoft.ClassicNetwork](#microsoftclassicnetwork) > - [Microsoft.ClassicStorage](#microsoftclassicstorage)
+> - [Microsoft.ClusterStor](#microsoftclusterstor)
> - [Microsoft.Codespaces](#microsoftcodespaces) > - [Microsoft.CognitiveServices](#microsoftcognitiveservices) > - [Microsoft.Commerce](#microsoftcommerce) > - [Microsoft.Compute](#microsoftcompute) > - [Microsoft.ConnectedCache](#microsoftconnectedcache)
+> - [Microsoft.ConnectedVehicle](#microsoftconnectedvehicle)
+> - [Microsoft.ConnectedVMwarevSphere](#microsoftconnectedvmwarevsphere)
> - [Microsoft.Consumption](#microsoftconsumption) > - [Microsoft.ContainerInstance](#microsoftcontainerinstance) > - [Microsoft.ContainerRegistry](#microsoftcontainerregistry)
Jump to a resource provider namespace:
> - [Microsoft.DocumentDB](#microsoftdocumentdb) > - [Microsoft.DomainRegistration](#microsoftdomainregistration) > - [Microsoft.DynamicsLcs](#microsoftdynamicslcs)
+> - [Microsoft.EdgeOrder](#microsoftedgeorder)
> - [Microsoft.EnterpriseKnowledgeGraph](#microsoftenterpriseknowledgegraph) > - [Microsoft.EventGrid](#microsofteventgrid) > - [Microsoft.EventHub](#microsofteventhub)
Jump to a resource provider namespace:
> - [Microsoft.HanaOnAzure](#microsofthanaonazure) > - [Microsoft.HardwareSecurityModules](#microsofthardwaresecuritymodules) > - [Microsoft.HDInsight](#microsofthdinsight)
+> - [Microsoft.HealthBot](#microsofthealthbot)
> - [Microsoft.HealthcareApis](#microsofthealthcareapis) > - [Microsoft.HybridCompute](#microsofthybridcompute) > - [Microsoft.HybridData](#microsofthybriddata)
Jump to a resource provider namespace:
> - [Microsoft.ImportExport](#microsoftimportexport) > - [Microsoft.Intune](#microsoftintune) > - [Microsoft.IoTCentral](#microsoftiotcentral)
+> - [Microsoft.IoTSecurity](#microsoftiotsecurity)
> - [Microsoft.IoTSpaces](#microsoftiotspaces) > - [Microsoft.KeyVault](#microsoftkeyvault) > - [Microsoft.Kubernetes](#microsoftkubernetes)
Jump to a resource provider namespace:
> - [Microsoft.Microservices4Spring](#microsoftmicroservices4spring) > - [Microsoft.Migrate](#microsoftmigrate) > - [Microsoft.MixedReality](#microsoftmixedreality)
+> - [Microsoft.MobileNetwork](#microsoftmobilenetwork)
> - [Microsoft.NetApp](#microsoftnetapp) > - [Microsoft.Network](#microsoftnetwork) > - [Microsoft.Notebooks](#microsoftnotebooks)
Jump to a resource provider namespace:
> - [Microsoft.Portal](#microsoftportal) > - [Microsoft.PowerBI](#microsoftpowerbi) > - [Microsoft.PowerBIDedicated](#microsoftpowerbidedicated)
+> - [Microsoft.PowerPlatform](#microsoftpowerplatform)
> - [Microsoft.ProjectBabylon](#microsoftprojectbabylon) > - [Microsoft.ProviderHub](#microsoftproviderhub)
+> - [Microsoft.Purview](#microsoftpurview)
> - [Microsoft.Quantum](#microsoftquantum) > - [Microsoft.RecoveryServices](#microsoftrecoveryservices) > - [Microsoft.RedHatOpenShift](#microsoftredhatopenshift) > - [Microsoft.Relay](#microsoftrelay)
+> - [Microsoft.ResourceConnector](#microsoftresourceconnector)
> - [Microsoft.ResourceGraph](#microsoftresourcegraph) > - [Microsoft.ResourceHealth](#microsoftresourcehealth) > - [Microsoft.Resources](#microsoftresources)
Jump to a resource provider namespace:
> - [Microsoft.ServiceBus](#microsoftservicebus) > - [Microsoft.ServiceFabric](#microsoftservicefabric) > - [Microsoft.ServiceFabricMesh](#microsoftservicefabricmesh)
+> - [Microsoft.ServiceLinker](#microsoftservicelinker)
> - [Microsoft.Services](#microsoftservices) > - [Microsoft.SignalRService](#microsoftsignalrservice) > - [Microsoft.Singularity](#microsoftsingularity)
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | farmBeats | Yes |
+> | farmBeats / eventGridFilters | No |
## Microsoft.AlertsManagement
Jump to a resource provider namespace:
> | alertsMetaData | No | > | alertsSummary | No | > | alertsSummaryList | No |
+> | migrateFromSmartDetection | No |
+> | resourceHealthAlertRules | Yes |
> | smartDetectorAlertRules | Yes | > | smartGroups | No |
Jump to a resource provider namespace:
> | - | -- | > | servers | Yes |
+## Microsoft.AnyBuild
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | clusters | Yes |
+ ## Microsoft.ApiManagement > [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | deletedServices | No |
+> | getDomainOwnershipIdentifier | No |
> | reportFeedback | No | > | service | Yes | > | validateServiceName | No |
+## Microsoft.AppAssessment
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | migrateProjects | Yes |
+> | migrateProjects / assessments | No |
+> | migrateProjects / assessments / assessedApplications | No |
+> | migrateProjects / assessments / assessedApplications / machines | No |
+> | migrateProjects / assessments / assessedMachines | No |
+> | migrateProjects / assessments / assessedMachines / applications | No |
+> | migrateProjects / assessments / machinesToAssess | No |
+> | migrateProjects / sites | No |
+> | migrateProjects / sites / applianceConfigurations | No |
+> | migrateProjects / sites / machines | No |
+> | osVersions | No |
+ ## Microsoft.AppConfiguration > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | accessReviewScheduleSettings | No | > | classicAdministrators | No | > | dataAliases | No |
+> | dataPolicyManifests | No |
> | denyAssignments | No | > | elevateAccess | No | > | findOrphanRoleAssignments | No |
Jump to a resource provider namespace:
> | privateLinkAssociations | No | > | providerOperations | No | > | resourceManagementPrivateLinks | Yes |
+> | roleAssignmentApprovals | No |
> | roleAssignments | No |
+> | roleAssignmentScheduleInstances | No |
+> | roleAssignmentScheduleRequests | No |
+> | roleAssignmentSchedules | No |
> | roleAssignmentsUsageMetrics | No | > | roleDefinitions | No |
+> | roleEligibilityScheduleInstances | No |
+> | roleEligibilityScheduleRequests | No |
+> | roleEligibilitySchedules | No |
+> | roleManagementPolicies | No |
+> | roleManagementPolicyAssignments | No |
## Microsoft.Automanage
Jump to a resource provider namespace:
> | privateClouds | Yes | > | privateClouds / addons | No | > | privateClouds / authorizations | No |
+> | privateClouds / cloudLinks | No |
> | privateClouds / clusters | No |
+> | privateClouds / clusters / datastores | No |
> | privateClouds / globalReachConnections | No | > | privateClouds / hcxEnterpriseSites | No |
+> | privateClouds / scriptExecutions | No |
+> | privateClouds / scriptPackages | No |
+> | privateClouds / scriptPackages / scriptCmdlets | No |
> | privateClouds / workloadNetworks | No | > | privateClouds / workloadNetworks / dhcpConfigurations | No |
+> | privateClouds / workloadNetworks / dnsServices | No |
+> | privateClouds / workloadNetworks / dnsZones | No |
> | privateClouds / workloadNetworks / gateways | No | > | privateClouds / workloadNetworks / portMirroringProfiles | No | > | privateClouds / workloadNetworks / segments | No |
Jump to a resource provider namespace:
> | b2ctenants | No | > | guestUsages | Yes |
-## Microsoft.AzureData
+## Microsoft.AzureArcData
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- | > | dataControllers | Yes |
+> | dataWarehouseInstances | Yes |
> | postgresInstances | Yes | > | sqlManagedInstances | Yes | > | sqlServerInstances | Yes |+
+## Microsoft.AzureCIS
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | autopilotEnvironments | Yes |
+
+## Microsoft.AzureData
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
> | sqlServerRegistrations | Yes | > | sqlServerRegistrations / sqlServers | No |
+## Microsoft.AzureSphere
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | catalogs | Yes |
+> | catalogs / products | Yes |
+ ## Microsoft.AzureStack > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | clusters | Yes |
+> | galleryImages | Yes |
+> | networkInterfaces | Yes |
+> | virtualHardDisks | Yes |
+> | virtualMachines | Yes |
+> | virtualNetworks | Yes |
## Microsoft.BareMetalInfrastructure
Jump to a resource provider namespace:
> | billingAccounts / billingProfiles / invoiceSections / products / updateAutoRenew | No | > | billingAccounts / billingProfiles / invoiceSections / transactions | No | > | billingAccounts / billingProfiles / invoiceSections / transfers | No |
+> | billingAccounts / billingProfiles / invoiceSections / validateDeleteInvoiceSectionEligibility | No |
> | billingAccounts / BillingProfiles / patchOperations | No | > | billingAccounts / billingProfiles / paymentMethods | No | > | billingAccounts / billingProfiles / policies | No |
Jump to a resource provider namespace:
> | billingAccounts / billingProfiles / products | No | > | billingAccounts / billingProfiles / reservations | No | > | billingAccounts / billingProfiles / transactions | No |
+> | billingAccounts / billingProfiles / validateDeleteBillingProfileEligibility | No |
> | billingAccounts / billingProfiles / validateDetachPaymentMethodEligibility | No | > | billingAccounts / billingRoleAssignments | No | > | billingAccounts / billingRoleDefinitions | No | > | billingAccounts / billingSubscriptions | No |
+> | billingAccounts / billingSubscriptions / elevateRole | No |
> | billingAccounts / billingSubscriptions / invoices | No | > | billingAccounts / createBillingRoleAssignment | No | > | billingAccounts / createInvoiceSectionOperations | No |
Jump to a resource provider namespace:
> | billingAccounts / departments / billingPermissions | No | > | billingAccounts / departments / billingRoleAssignments | No | > | billingAccounts / departments / billingRoleDefinitions | No |
+> | billingAccounts / departments / billingSubscriptions | No |
> | billingAccounts / enrollmentAccounts | No | > | billingAccounts / enrollmentAccounts / billingPermissions | No | > | billingAccounts / enrollmentAccounts / billingRoleAssignments | No | > | billingAccounts / enrollmentAccounts / billingRoleDefinitions | No |
+> | billingAccounts / enrollmentAccounts / billingSubscriptions | No |
> | billingAccounts / invoices | No | > | billingAccounts / invoices / transactions | No |
+> | billingAccounts / invoices / transactionSummary | No |
> | billingAccounts / invoiceSections | No | > | billingAccounts / invoiceSections / billingSubscriptionMoveOperations | No | > | billingAccounts / invoiceSections / billingSubscriptions | No |
Jump to a resource provider namespace:
> | billingAccounts / invoiceSections / transfers | No | > | billingAccounts / lineOfCredit | No | > | billingAccounts / patchOperations | No |
+> | billingAccounts / payableOverage | No |
> | billingAccounts / paymentMethods | No |
+> | billingAccounts / payNow | No |
> | billingAccounts / products | No | > | billingAccounts / reservations | No | > | billingAccounts / transactions | No |
Jump to a resource provider namespace:
> | departments | No | > | enrollmentAccounts | No | > | invoices | No |
+> | promotions | No |
> | transfers | No | > | transfers / acceptTransfer | No | > | transfers / declineTransfer | No |
Jump to a resource provider namespace:
> | botServices | Yes | > | botServices / channels | No | > | botServices / connections | No |
+> | hostSettings | No |
> | languages | No | > | templates | No |
Jump to a resource provider namespace:
> | Redis / privateEndpointConnections | No | > | Redis / privateLinkResources | No | > | redisEnterprise | Yes |
+> | redisEnterprise / databases | No |
> | RedisEnterprise / privateEndpointConnectionProxies | No | > | RedisEnterprise / privateEndpointConnectionProxies / validate | No | > | RedisEnterprise / privateEndpointConnections | No |
Jump to a resource provider namespace:
> | resources | No | > | validateReservationOrder | No |
+## Microsoft.Cascade
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | sites | Yes |
+ ## Microsoft.Cdn > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | CdnWebApplicationFirewallPolicies | Yes | > | edgenodes | No | > | profiles | Yes |
+> | profiles / afdendpoints | Yes |
+> | profiles / afdendpoints / routes | No |
+> | profiles / customdomains | No |
> | profiles / endpoints | Yes | > | profiles / endpoints / customdomains | No | > | profiles / endpoints / origingroups | No | > | profiles / endpoints / origins | No |
+> | profiles / origingroups | No |
+> | profiles / origingroups / origins | No |
+> | profiles / rulesets | No |
+> | profiles / rulesets / rules | No |
+> | profiles / secrets | No |
+> | profiles / securitypolicies | No |
> | validateProbe | No | ## Microsoft.CertificateRegistration
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | changes | No |
> | profile | No | > | resourceChanges | No |
Jump to a resource provider namespace:
> | storageAccounts / vmImages | No | > | vmImages | No |
+## Microsoft.ClusterStor
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | nodes | Yes |
+ ## Microsoft.Codespaces > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | - | -- | > | CacheNodes | Yes |
+## Microsoft.ConnectedVehicle
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | platformAccounts | Yes |
+> | registeredSubscriptions | No |
+
+## Microsoft.ConnectedVMwarevSphere
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | ResourcePools | Yes |
+> | VCenters | Yes |
+> | VCenters / InventoryItems | No |
+> | VirtualMachines | Yes |
+> | VirtualMachines / Extensions | Yes |
+> | VirtualMachines / GuestAgents | No |
+> | VirtualMachines / HybridIdentityMetadata | No |
+> | VirtualMachineTemplates | Yes |
+> | VirtualNetworks | Yes |
+ ## Microsoft.Consumption > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | registries / builds / getLogLink | No | > | registries / buildTasks | Yes | > | registries / buildTasks / steps | No |
+> | registries / connectedRegistries | No |
+> | registries / connectedRegistries / deactivate | No |
> | registries / eventGridFilters | No | > | registries / exportPipelines | No | > | registries / generateCredentials | No |
Jump to a resource provider namespace:
> | - | -- | > | containerServices | Yes | > | managedClusters | Yes |
+> | ManagedClusters / eventGridFilters | No |
> | openShiftManagedClusters | Yes | ## Microsoft.CostManagement
Jump to a resource provider namespace:
> | ExternalSubscriptions / Dimensions | No | > | ExternalSubscriptions / Forecast | No | > | ExternalSubscriptions / Query | No |
+> | fetchPrices | No |
> | Forecast | No |
+> | GenerateDetailedCostReport | No |
+> | GenerateReservationDetailsReport | No |
> | Insights | No | > | Query | No | > | register | No | > | Reportconfigs | No | > | Reports | No |
+> | ScheduledActions | No |
> | Settings | No | > | showbackRules | No | > | Views | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | DisableLockbox | No |
+> | EnableLockbox | No |
> | requests | No |
+> | TenantOptedIn | No |
## Microsoft.CustomProviders
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | DatabaseMigrations | No |
> | services | Yes | > | services / projects | Yes |
+> | SqlMigrationServices | Yes |
## Microsoft.DataProtection
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | BackupVaults | Yes |
-> | ResourceOperationGateKeepers | Yes |
+> | ResourceGuards | Yes |
## Microsoft.DataShare
Jump to a resource provider namespace:
> | servers / privateLinkResources | No | > | servers / queryTexts | No | > | servers / recoverableServers | No |
+> | servers / resetQueryPerformanceInsightData | No |
> | servers / start | No | > | servers / stop | No | > | servers / topQueryStatistics | No |
Jump to a resource provider namespace:
> | servers / privateLinkResources | No | > | servers / queryTexts | No | > | servers / recoverableServers | No |
+> | servers / resetQueryPerformanceInsightData | No |
> | servers / start | No | > | servers / stop | No | > | servers / topQueryStatistics | No |
Jump to a resource provider namespace:
> | - | -- | > | flexibleServers | Yes | > | serverGroups | Yes |
+> | serverGroupsv2 | Yes |
> | servers | Yes | > | servers / advisors | No | > | servers / keys | No |
Jump to a resource provider namespace:
> | servers / privateLinkResources | No | > | servers / queryTexts | No | > | servers / recoverableServers | No |
+> | servers / resetQueryPerformanceInsightData | No |
> | servers / topQueryStatistics | No | > | servers / virtualNetworkRules | No | > | servers / waitStatistics | No |
Jump to a resource provider namespace:
> | hostpools / sessionhosts | No | > | hostpools / sessionhosts / usersessions | No | > | hostpools / usersessions | No |
+> | scalingPlans | Yes |
> | workspaces | Yes | ## Microsoft.Devices
Jump to a resource provider namespace:
> | - | -- | > | accounts | Yes | > | accounts / instances | Yes |
+> | registeredSubscriptions | No |
## Microsoft.DevOps
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | cassandraClusters | Yes |
> | databaseAccountNames | No | > | databaseAccounts | Yes | > | restorableDatabaseAccounts | No |
Jump to a resource provider namespace:
> | lcsprojects / clouddeployments | No | > | lcsprojects / connectors | No |
+## Microsoft.EdgeOrder
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | addresses | Yes |
+> | orderCollections | Yes |
+> | orders | Yes |
+> | productFamiliesMetadata | No |
+ ## Microsoft.EnterpriseKnowledgeGraph > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | featureConfigurations | No |
+> | featureProviderNamespaces | No |
> | featureProviders | No | > | features | No | > | providers | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | clusterPools | Yes |
+> | clusterPools / clusters | Yes |
> | clusters | Yes | > | clusters / applications | No |
+## Microsoft.HealthBot
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | healthBots | Yes |
+ ## Microsoft.HealthcareApis > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | services / privateEndpointConnectionProxies | No | > | services / privateEndpointConnections | No | > | services / privateLinkResources | No |
+> | workspaces | Yes |
+> | workspaces / dicomservices | Yes |
## Microsoft.HybridCompute
Jump to a resource provider namespace:
> | machines / assessPatches | No | > | machines / extensions | Yes | > | machines / installPatches | No |
+> | machines / privateLinkScopes | No |
+> | privateLinkScopes | Yes |
+> | privateLinkScopes / privateEndpointConnectionProxies | No |
+> | privateLinkScopes / privateEndpointConnections | No |
## Microsoft.HybridData
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | devices | Yes |
-> | networkFunctions | Yes |
+> | networkfunctions | Yes |
> | networkFunctionVendors | No | > | registeredSubscriptions | No |
-> | vendors | No |
-> | vendors / vendorSkus | No |
-> | vendors / vendorSkus / previewSubscriptions | No |
+> | Vendors | No |
+> | Vendors / vendorskus | No |
+> | Vendors / vendorskus / previewsubscriptions | No |
> | virtualNetworkFunctions | Yes | > | virtualNetworkFunctionVendors | No |
Jump to a resource provider namespace:
> | appTemplates | No | > | IoTApps | Yes |
+## Microsoft.IoTSecurity
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | defenderSettings | No |
+ ## Microsoft.IoTSpaces > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | deletedManagedHSMs | No |
> | deletedVaults | No | > | hsmPools | Yes | > | managedHSMs | Yes |
Jump to a resource provider namespace:
> | clusters / databases / dataconnections | No | > | clusters / databases / eventhubconnections | No | > | clusters / databases / principalassignments | No |
+> | clusters / databases / scripts | No |
> | clusters / dataconnections | No | > | clusters / principalassignments | No | > | clusters / sharedidentities | No |
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | labaccounts | Yes |
+> | labplans | Yes |
+> | labs | Yes |
> | users | No | ## Microsoft.Logic
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | modelinventories | Yes |
+> | virtualclusters | Yes |
> | workspaces | Yes | > | workspaces / batchEndpoints | Yes | > | workspaces / batchEndpoints / deployments | Yes |
+> | workspaces / batchEndpoints / deployments / jobs | No |
+> | workspaces / batchEndpoints / jobs | No |
> | workspaces / codes | No | > | workspaces / codes / versions | No | > | workspaces / computes | No |
+> | workspaces / data | No |
> | workspaces / datastores | No |
+> | workspaces / environments | No |
> | workspaces / eventGridFilters | No | > | workspaces / jobs | No | > | workspaces / labelingJobs | No |
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | accounts | Yes |
+> | accounts / creators | Yes |
> | accounts / eventGridFilters | No | > | accounts / privateAtlases | Yes |
Jump to a resource provider namespace:
> | privategalleryitems | No | > | privateStoreClient | No | > | privateStores | No |
+> | privateStores / AdminRequestApprovals | No |
> | privateStores / offers | No |
+> | privateStores / offers / acknowledgeNotification | No |
+> | privateStores / queryNotificationsState | No |
+> | privateStores / RequestApprovals | No |
+> | privateStores / requestApprovals / query | No |
+> | privateStores / requestApprovals / withdrawPlan | No |
> | products | No | > | publishers | No | > | publishers / offers | No |
Jump to a resource provider namespace:
> | mediaservices / assets / assetFilters | No | > | mediaservices / contentKeyPolicies | No | > | mediaservices / eventGridFilters | No |
+> | mediaservices / graphInstances | No |
+> | mediaservices / graphTopologies | No |
> | mediaservices / liveEventOperations | No | > | mediaservices / liveEvents | Yes | > | mediaservices / liveEvents / liveOutputs | No |
Jump to a resource provider namespace:
> | mediaservices / streamingPolicies | No | > | mediaservices / transforms | No | > | mediaservices / transforms / jobs | No |
+> | videoAnalyzers | Yes |
+> | videoAnalyzers / edgeModules | No |
## Microsoft.Microservices4Spring
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | holographicsBroadcastAccounts | Yes |
+> | objectAnchorsAccounts | Yes |
> | objectUnderstandingAccounts | Yes | > | remoteRenderingAccounts | Yes | > | spatialAnchorsAccounts | Yes |
+## Microsoft.MobileNetwork
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | networks | Yes |
+> | networks / sites | Yes |
+> | packetCores | Yes |
+> | sims | Yes |
+> | sims / simProfiles | Yes |
+ ## Microsoft.NetApp > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | netAppAccounts / capacityPools | Yes | > | netAppAccounts / capacityPools / volumes | Yes | > | netAppAccounts / capacityPools / volumes / snapshots | No |
+> | netAppAccounts / volumeGroups | No |
## Microsoft.Network > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | clusters | Yes | > | deletedWorkspaces | No | > | linkTargets | No |
+> | querypacks | Yes |
> | storageInsightConfigs | No | > | workspaces | Yes | > | workspaces / dataExports | No |
Jump to a resource provider namespace:
> | workspaces / metadata | No | > | workspaces / query | No | > | workspaces / scopedPrivateLinkProxies | No |
+> | workspaces / storageInsightConfigs | No |
+> | workspaces / tables | No |
## Microsoft.OperationsManagement
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | cdnPeeringPrefixes | No |
> | legacyPeerings | No | > | peerAsns | No | > | peerings | Yes |
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | attestations | No |
+> | eventGridFilters | No |
> | policyEvents | No | > | policyMetadata | No | > | policyStates | No |
Jump to a resource provider namespace:
> | - | -- | > | consoles | No | > | dashboards | Yes |
+> | tenantconfigurations | No |
> | userSettings | No | ## Microsoft.PowerBI
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | autoScaleVCores | Yes |
> | capacities | Yes |
+## Microsoft.PowerPlatform
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | enterprisePolicies | Yes |
+ ## Microsoft.ProjectBabylon > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | providerRegistrations | No |
+> | providerRegistrations / customRollouts | No |
> | providerRegistrations / defaultRollouts | No | > | providerRegistrations / resourceTypeRegistrations | No |
-> | rollouts | Yes |
+
+## Microsoft.Purview
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | accounts | Yes |
+> | deletedAccounts | No |
+> | getDefaultAccount | No |
+> | removeDefaultAccount | No |
+> | setDefaultAccount | No |
## Microsoft.Quantum
Jump to a resource provider namespace:
> | namespaces / wcfrelays | No | > | namespaces / wcfrelays / authorizationrules | No |
+## Microsoft.ResourceConnector
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | appliances | Yes |
+
## Microsoft.ResourceGraph

> [!div class="mx-tableFixed"]
> | events | No |
> | impactedResources | No |
> | metadata | No |
-> | notifications | No |
## Microsoft.Resources

> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | calculateTemplateHash | No |
> | deployments | No |
> | deployments / operations | No |
> | deploymentScripts | Yes |
> | deploymentScripts / logs | No |
> | links | No |
-> | notifyResourceJobs | No |
> | providers | No |
> | resourceGroups | No |
> | subscriptions | No |
> | Resource type | Complete mode deletion |
> | - | -- |
> | applications | Yes |
+> | resources | Yes |
> | saasresources | No |

## Microsoft.ScVmm

> | Compliances | No |
> | connectors | No |
> | dataCollectionAgents | No |
+> | devices | No |
> | deviceSecurityGroups | No |
> | discoveredSecuritySolutions | No |
> | externalSecuritySolutions | No |
> | InformationProtectionPolicies | No |
+> | ingestionSettings | No |
+> | insights | No |
+> | iotAlerts | No |
+> | iotAlertTypes | No |
> | iotDefenderSettings | No |
+> | iotRecommendations | No |
+> | iotRecommendationTypes | No |
> | iotSecuritySolutions | Yes |
> | iotSecuritySolutions / analyticsModels | No |
> | iotSecuritySolutions / analyticsModels / aggregatedAlerts | No |
> | iotSecuritySolutions / iotRecommendations | No |
> | iotSecuritySolutions / iotRecommendationTypes | No |
> | iotSensors | No |
+> | iotSites | No |
> | jitNetworkAccessPolicies | No |
> | jitPolicies | No |
+> | onPremiseIotSensors | No |
> | policies | No |
> | pricings | No |
> | regulatoryComplianceStandards | No |
> | cases | No |
> | dataConnectors | No |
> | dataConnectorsCheckRequirements | No |
+> | enrichment | No |
> | entities | No |
> | entityQueries | No |
+> | entityQueryTemplates | No |
> | incidents | No |
> | officeConsents | No |
> | settings | No |
> | Resource type | Complete mode deletion |
> | - | -- |
> | consoleServices | No |
+> | serialPorts | No |
## Microsoft.ServiceBus
> | edgeclusters | Yes |
> | edgeclusters / applications | No |
> | managedclusters | Yes |
+> | managedclusters / applications | No |
+> | managedclusters / applications / services | No |
+> | managedclusters / applicationTypes | No |
+> | managedclusters / applicationTypes / versions | No |
> | managedclusters / nodetypes | No |
> | networks | Yes |
> | secretstores | Yes |
> | secrets | Yes |
> | volumes | Yes |
+## Microsoft.ServiceLinker
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | linkers | No |
+
## Microsoft.Services

> [!div class="mx-tableFixed"]
> | - | -- |
> | SignalR | Yes |
> | SignalR / eventGridFilters | No |
+> | WebPubSub | Yes |
## Microsoft.Singularity
> | accounts / groupPolicies | No |
> | accounts / jobs | No |
> | accounts / storageContainers | No |
+> | images | No |
## Microsoft.SoftwarePlan
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
+> | amlFilesystems | Yes |
> | caches | Yes |
> | caches / storageTargets | No |
> | usageModels | No |
> | Resource type | Complete mode deletion |
> | - | -- |
> | acceptChangeTenant | No |
+> | acceptOwnership | No |
+> | acceptOwnershipStatus | No |
> | aliases | No |
> | cancel | No |
> | changeTenantRequest | No |
> | changeTenantStatus | No |
> | CreateSubscription | No |
> | enable | No |
+> | policies | No |
> | rename | No |
> | SubscriptionDefinitions | No |
> | SubscriptionOperations | No |
> | environments | Yes |
> | environments / accessPolicies | No |
> | environments / eventsources | Yes |
+> | environments / privateEndpointConnectionProxies | No |
+> | environments / privateEndpointConnections | No |
+> | environments / privateLinkResources | No |
> | environments / referenceDataSets | Yes |

## Microsoft.Token

> | ArcZones | Yes |
> | ResourcePools | Yes |
> | VCenters | Yes |
-> | VirtualMachines | Yes |
+> | VCenters / InventoryItems | No |
+> | virtualmachines | Yes |
> | VirtualMachineTemplates | Yes |
> | VirtualNetworks | Yes |
> | connections | Yes |
> | customApis | Yes |
> | deletedSites | No |
+> | functionAppStacks | No |
+> | generateGithubAccessTokenForAppserviceCLI | No |
> | hostingEnvironments | Yes |
> | hostingEnvironments / eventGridFilters | No |
> | hostingEnvironments / multiRolePools | No |
> | staticSites | Yes |
> | validate | No |
> | verifyHostingEnvironmentVnet | No |
+> | webAppStacks | No |
## Microsoft.WindowsDefenderATP
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
+> | migrationAgents | Yes |
> | workloads | Yes |
> | workloads / instances | No |
> | workloads / versions | No |
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | components | No |
-> | componentsSummary | No |
-> | monitorInstances | No |
-> | monitorInstancesSummary | No |
> | monitors | No |
-> | notificationSettings | No |
## Next steps
-To get the same data as a file of comma-separated values, download [complete-mode-deletion.csv](https://github.com/tfitzmac/resource-capabilities/blob/master/complete-mode-deletion.csv).
+To get the same data as a file of comma-separated values, download [complete-mode-deletion.csv](https://github.com/tfitzmac/resource-capabilities/blob/master/complete-mode-deletion.csv).
azure-sql Db2 To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/db2-to-sql-database-guide.md
For additional assistance, see the following resources, which were developed in
|||
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/microsoft/DataMigrationTeam/blob/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 LUW pure scale on Azure - setup guide](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Db2 To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/db2-to-managed-instance-guide.md
For additional assistance, see the following resources, which were developed in
|||
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/microsoft/DataMigrationTeam/blob/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 LUW pure scale on Azure - setup guide](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-vmware Production Ready Deployment Steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/production-ready-deployment-steps.md
The following image shows the **Create a private cloud** deployment screen with
> [!NOTE]
> Any virtual network that is going to be used or created may be seen by your on-premises environment and Azure VMware Solution, so make sure that the IP segments you use in this virtual network and its subnets do not overlap.
-## (Optional) VMware HCX network segments
+## VMware HCX network segments
VMware HCX is a technology that's bundled with Azure VMware Solution. The primary use cases for VMware HCX are workload migrations and disaster recovery. If you plan to do either, it's best to plan out the networking now. Otherwise, you can skip and continue to the next step.
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reserved-instance.md
Title: Save costs with Azure VMware Solution reserved instance
-description: Learn how to buy a reserved instance for Azure VMware Solution.
+ Title: Reserved instances of Azure VMware Solution
+description: Learn how to buy a reserved instance for Azure VMware Solution. The reserved instance covers only the compute part of your usage and includes software licensing costs.
Previously updated : 02/03/2021 Last updated : 04/09/2021

# Save costs with Azure VMware Solution
CSPs can cancel, exchange, or refund reservations, with certain limitations, pur
## Next steps
-Now that you've covered buying a reserved instance of Azure VMware Solution, you may want to learn about:
+Now that you've covered reserved instances of Azure VMware Solution, you may want to learn about:
- [Creating an Azure VMware Solution assessment](../migrate/how-to-create-azure-vmware-solution-assessment.md).
- [Managing DHCP for Azure VMware Solution](manage-dhcp.md).
backup Back Up Hyper V Virtual Machines Mabs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/back-up-hyper-v-virtual-machines-mabs.md
These are the prerequisites for backing up Hyper-V virtual machines with MABS:
|Prerequisite|Details|
||-|
|MABS prerequisites|- If you want to perform item-level recovery for virtual machines (recover files, folders, volumes), then you'll need to install the Hyper-V role on the MABS server. If you only want to recover the virtual machine and not item-level, then the role isn't required.<br />- You can protect up to 800 virtual machines of 100 GB each on one MABS server and allow multiple MABS servers that support larger clusters.<br />- MABS excludes the page file from incremental backups to improve virtual machine backup performance.<br />- MABS can back up a Hyper-V server or cluster in the same domain as the MABS server, or in a child or trusted domain. If you want to back up Hyper-V in a workgroup or an untrusted domain, you'll need to set up authentication. For a single Hyper-V server, you can use NTLM or certificate authentication. For a cluster, you can use certificate authentication only.<br />- Using host-level backup to back up virtual machine data on passthrough disks isn't supported. In this scenario, we recommend you use host-level backup to back up VHD files and guest-level backup to back up the other data that isn't visible on the host.<br />- You can back up VMs stored on deduplicated volumes.|
-|Hyper-V VM prerequisites|- The version of Integration Components that's running on the virtual machine should be the same as the version of the Hyper-V host. <br />- For each virtual machine backup you'll need free space on the volume hosting the virtual hard disk files to allow Hyper-V enough room for differencing disks (AVHD's) during backup. The space must be at least equal to the calculation **Initial disk size\*Churn rate\*Backup** window time. If you're running multiple backups on a cluster, you'll need enough storage capacity to accommodate the AVHDs for each of the virtual machines using this calculation.<br />- To back up virtual machines located on Hyper-V host servers running Windows Server 2012 R2, the virtual machine should have a SCSI controller specified, even if it's not connected to anything. (In Windows Server 2012 R2 online backup, the Hyper-V host mounts a new VHD in the VM and then later dismounts it. Only the SCSI controller can support this and therefore is required for online backup of the virtual machine. Without this setting, event ID 10103 will be issued when you try to back up the virtual machine.)|
+|Hyper-V VM prerequisites|- The version of Integration Components that's running on the virtual machine should be the same as the version of the Hyper-V host. <br />- For each virtual machine backup you'll need free space on the volume hosting the virtual hard disk files to allow Hyper-V enough room for differencing disks (AVHD's) during backup. The space must be at least equal to the calculation **Initial disk size\*Churn rate\*Backup** window time. If you're running multiple backups on a cluster, you'll need enough storage capacity to accommodate the AVHDs for each of the virtual machines using this calculation.<br />- To back up virtual machines located on Hyper-V host servers running Windows Server 2012 R2, the virtual machine should have a SCSI controller specified, even if it's not connected to anything. (In Windows Server 2012 R2 backup, the Hyper-V host mounts a new VHD in the VM and then later dismounts it. Only the SCSI controller can support this and therefore is required for online backup of the virtual machine. Without this setting, event ID 10103 will be issued when you try to back up the virtual machine.)|
|Linux prerequisites|- You can back up Linux virtual machines using MABS. Only file-consistent snapshots are supported.|
|Back up VMs with CSV storage|- For CSV storage, install the Volume Shadow Copy Services (VSS) hardware provider on the Hyper-V server. Contact your storage area network (SAN) vendor for the VSS hardware provider.<br />- If a single node shuts down unexpectedly in a CSV cluster, MABS will perform a consistency check against the virtual machines that were running on that node.<br />- If you need to restart a Hyper-V server that has BitLocker Drive Encryption enabled on the CSV cluster, you must run a consistency check for Hyper-V virtual machines.|
|Back up VMs with SMB storage|- Turn on auto-mount on the server that's running Hyper-V to enable virtual machine protection.<br />- Disable TCP Chimney Offload.<br />- Ensure that all Hyper-V machine$ accounts have full permissions on the specific remote SMB file shares.<br />- Ensure that the file path for all virtual machine components during recovery to alternate location is fewer than 260 characters. If not, recovery might succeed, but Hyper-V won't be able to mount the virtual machine.<br />- The following scenarios aren't supported:<br /> Deployments where some components of the virtual machine are on local volumes and some components are on remote volumes; an IPv4 or IPv6 address for storage location file server, and recovery of a virtual machine to a computer that uses remote SMB shares.<br />- You'll need to enable the File Server VSS Agent service on each SMB server - Add it in **Add roles and features** > **Select server roles** > **File and Storage Services** > **File Services** > **File Service** > **File Server VSS Agent Service**.|
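To make the **Initial disk size \* Churn rate \* Backup window time** rule from the prerequisites table concrete, here's a minimal C# sketch; the disk size, churn rate, and window length below are illustrative assumptions, not recommended values:

```csharp
using System;

// Hypothetical sizing example for the AVHD free-space rule above.
double initialDiskSizeGb = 100;  // size of the virtual hard disk
double churnRate = 0.05;         // assume 5% of the disk changes per hour
double backupWindowHours = 4;    // assume the backup window lasts 4 hours

// Free space Hyper-V needs on the volume for differencing disks (AVHDs):
double avhdSpaceGb = initialDiskSizeGb * churnRate * backupWindowHours;
Console.WriteLine($"Reserve at least {avhdSpaceGb} GB for this VM");  // 20 GB
```

On a cluster, repeat the estimate for each VM that backs up in parallel and size the volume for the sum.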
- Total size of 800 VMs - 80 TB
- Required space for backup storage - 80 TB
-2. Set up the MABS protection agent on the Hyper-V server or Hyper-V cluster nodes. If you're doing guest-level backup, you'll install the agent on the VMs you want to back up at the guest-level.
+2. Set up the MABS protection agent on the Hyper-V server or Hyper-V cluster nodes.
3. In the MABS Administrator console, select **Protection** > **Create protection group** to open the **Create New Protection Group** wizard.
> [!NOTE]
>
- >If you're protecting application workloads, recovery points are created in accordance with Synchronization frequency, provided the application supports incremental backups. If it doesn't, then MABS runs an express full backup, instead of an incremental backup, and creates recovery points in accordance with the express backup schedule.
+ >If you're protecting application workloads, recovery points are created in accordance with Synchronization frequency, provided the application supports incremental backups. If it doesn't, then MABS runs an express full backup, instead of an incremental backup, and creates recovery points in accordance with the express backup schedule.<br></br>The backup process doesn't back up the checkpoints associated with VMs.
7. In the **Review disk allocation** page, review the storage pool disk space allocated for the protection group.
backup Backup Azure File Folder Backup Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-file-folder-backup-faq.md
As a safety measure, Azure Backup will preserve the most recent recovery point,
If an ongoing restore job is canceled, the restore process stops. All files restored before the cancellation stay in configured destination (original or alternate location), without any rollbacks.
-### Does the MARS agent backup and restore ACLs set on files, folders, and volumes?
+### Does the MARS agent back up and restore ACLs set on files, folders, and volumes?
* The MARS agent backs up ACLs set on files, folders, and volumes
* For Volume Restore recovery option, the MARS agent provides an option to skip restoring ACL permissions to the file or folder being recovered
backup Disk Backup Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-faq.md
Yes, you can restore the disk onto a different subscription than that of the sou
No, point-in-time snapshots of multiple disks attached to a virtual machine isn't supported. For more information, see [Configure backup](backup-managed-disks.md#configure-backup) and to learn more about limitations, refer to the [support matrix](disk-backup-support-matrix.md).
-### What are my options to back up disks across multiple subscriptions?
-
-Currently, using the Azure portal to configure backup of disks is limited to a maximum of 20 disks from the same subscription.
- ### What is a target resource group? During a restore operation, you can choose the subscription and a resource group where you want to restore the disk to. Azure Backup will create new disks from the recovery point in the selected resource group. This is referred to as a target resource group. Note that the Backup vault's managed identity requires the role assignment on the target resource group to be able to perform restore operation successfully. For more information, see the [restore documentation](restore-managed-disks.md).
batch Batch Application Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-application-packages.md
With application packages, your pool's start task doesn't have to specify a long
You can use the [Azure portal](https://portal.azure.com) or the Batch Management APIs to manage the application packages in your Batch account. The following sections explain how to link a storage account, and how to add and manage applications and application packages in the Azure portal.

> [!NOTE]
-> While you can define application values in the [Microsoft.Batch/batchAccounts](/templates/microsoft.batch/batchaccounts) resource of an [ARM template](quick-create-template.md), it's not currently possible to use an ARM template to upload application packages to use in your Batch account. You must upload them to your linked storage account as described [below](#add-a-new-application).
+> While you can define application values in the [Microsoft.Batch/batchAccounts](/azure/templates/microsoft.batch/batchaccounts) resource of an [ARM template](quick-create-template.md), it's not currently possible to use an ARM template to upload application packages to use in your Batch account. You must upload them to your linked storage account as described [below](#add-a-new-application).
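To show how an uploaded package is consumed, here's a hedged C# sketch using the Microsoft.Azure.Batch client; `batchClient` and `vmConfiguration` are assumed to be set up elsewhere, and the pool ID, VM size, application ID, and version are placeholders:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Batch;

// Sketch: reference an application package at the pool level so Batch deploys
// it to every node that joins the pool. Values below are placeholders.
CloudPool pool = batchClient.PoolOperations.CreatePool(
    poolId: "mypool",
    virtualMachineSize: "standard_d2s_v3",
    virtualMachineConfiguration: vmConfiguration,   // assumed to be defined elsewhere
    targetDedicatedComputeNodes: 1);

pool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference { ApplicationId = "blender", Version = "1.0" }
};

await pool.CommitAsync();
```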
### Link a storage account
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/virtual-file-mount.md
The following code examples demonstrate mounting a variety of file shares to a p
### Azure Files share
-Azure Files is the standard Azure cloud file system offering. To learn more about how to get any of the parameters in the mount configuration code sample, see [Use an Azure Files share](../storage/files/storage-how-to-use-files-windows.md).
+Azure Files is the standard Azure cloud file system offering. To learn more about how to get any of the parameters in the mount configuration code sample, see [Use an Azure Files share - SMB](../storage/files/storage-how-to-use-files-windows.md) or [Use an Azure Files share - NFS](../storage/files/storage-files-how-to-create-nfs-shares.md).
```csharp
new PoolAddParameter
```
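The sample above is cut off by the digest. A plausible completion, sketched from the Batch REST object model (the property names follow that model, and the account name, share URL, key, and mount options are placeholders):

```csharp
new PoolAddParameter
{
    Id = poolId,
    MountConfiguration = new[]
    {
        new MountConfiguration
        {
            AzureFileShareConfiguration = new AzureFileShareConfiguration
            {
                AccountName = "{storage-account-name}",
                AzureFileUrl = "https://{storage-account-name}.file.core.windows.net/{file-share-name}",
                AccountKey = "{storage-account-key}",
                RelativeMountPath = "S",   // mounted under the node's mounts directory
                MountOptions = "-o vers=3.0,dir_mode=0777,file_mode=0777,sec=ntlmssp"
            }
        }
    }
}
```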
certification Concepts Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/concepts-legacy.md
+
+ Title: Legacy devices on the Azure Certified Device catalog
+description: An explanation of legacy devices on the Azure Certified Device catalog
++++ Last updated : 04/07/2021+++
+# Legacy devices on the Azure Certified Device catalog
+
+You may have noticed on the Azure Certified Device catalog that some devices don't have the blue outline or the "Azure Certified Device" label. These devices (dubbed "legacy devices") were previously certified under the legacy program.
+
+## Certified for Azure IoT program
+
+Before the launch of the Azure Certified Device program, hardware partners could previously certify their products under the Certified for Azure IoT program. The Azure Certified Device certification program refocuses its mission to deliver on customer promises rather than technical device capabilities.
+
+Devices that have been certified as an 'IoT Hub certified device' appear on the Azure Certified Device catalog as a 'legacy device.' This label indicates devices have previously qualified through the now-retired program, but haven't been certified through the updated Azure Certified Device program. These devices are clearly noted in the catalog by their lack of blue outline, and can be found through the "IoT Hub Certified devices (legacy)" filter.
+
+## Next steps
+
+Interested in recertifying a legacy device under the Azure Certified Device program? You can submit your device through our portal and leave a note to our review team to coordinate. Follow the link below to get started!
+
+- [Tutorial: Select your certification program](./tutorial-00-selecting-your-certification.md)
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
2. Create a new resource group using the [Azure portal](../azure-resource-manager/management/manage-resource-groups-portal.md) or [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md). This step is optional if you are using an existing resource group.
-3. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic] (https://docs.microsoft.com/azure/virtual-network/public-ip-addresses#basic) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services.
+3. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](https://docs.microsoft.com/azure/virtual-network/public-ip-addresses#basic) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services.
If you are using a Static IP, it needs to be referenced as a Reserved IP in the Service Configuration (.cscfg) file. If you're using an existing IP address, skip this step and add the IP address information directly into the load balancer configuration settings of your ARM template.
-4. Create a Network Profile Object and associate the public IP address to the frontend of the load balancer. The Azure platform automatically creates a 'Classic' SKU load balancer resource in the same subscription as the cloud service resource. The load balancer resource is a read-only resource in ARM. Any updates to the resource are supported only via the cloud service deployment files (.cscfg & .csdef)
-5. Create a new storage account using the [Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal) or [PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). This step is optional if you are using an existing storage account.
+4. Create a new storage account using the [Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal) or [PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). This step is optional if you are using an existing storage account.
-6. Upload your Service Definition (.csdef) and Service Configuration (.cscfg) files to the storage account using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob), [AzCopy](../storage/common/storage-use-azcopy-blobs-upload.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Obtain the SAS URIs of both files to be added to the ARM template later in this tutorial.
+5. Upload your Service Definition (.csdef) and Service Configuration (.cscfg) files to the storage account using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob), [AzCopy](../storage/common/storage-use-azcopy-blobs-upload.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Obtain the SAS URIs of both files to be added to the ARM template later in this tutorial.
6. (Optional) Create a key vault and upload the certificates.
cognitive-services Luis Reference Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-reference-regions.md
Previously updated : 01/21/2021 Last updated : 04/07/2021
The authoring region app can only be published to a corresponding publish region
| Europe | `westeurope`| North Europe<br>`northeurope` | `https://northeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
| Europe | `westeurope`| West Europe<br>`westeurope` | `https://westeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
| Europe | `westeurope`| UK South<br>`uksouth` | `https://uksouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Europe | `westeurope`| Switzerland North<br>`switzerlandnorth` | `https://switzerlandnorth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
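To illustrate the endpoint pattern in the table, here's a minimal C# sketch that queries a published app in the new Switzerland North region; the app ID, key, and utterance are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LuisQuery
{
    static async Task Main()
    {
        // Placeholders: substitute your own app ID, prediction key, and utterance.
        string appId = "YOUR-APP-ID";
        string key = "YOUR-SUBSCRIPTION-KEY";
        string utterance = Uri.EscapeDataString("turn on the lights");

        using var http = new HttpClient();
        string url = $"https://switzerlandnorth.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}" +
                     $"?subscription-key={key}&q={utterance}";

        // The V2 prediction endpoint returns the top-scoring intent and entities as JSON.
        string json = await http.GetStringAsync(url);
        Console.WriteLine(json);
    }
}
```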
## Publishing to Australia
cognitive-services Luis User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-user-privacy.md
Previously updated : 12/10/2020 Last updated : 04/08/2021
United States Authoring (also known as Programmatic APIs) resources are hosted i
* Azure geographies not supported by the Europe or Australia authoring regions
-When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's United States geography for active learning.
+When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's United States geography for active learning.
+
+### Switzerland North
+
+Switzerland North Authoring (also known as Programmatic APIs) resources are hosted in Azure's Switzerland geography, and support deployment of endpoints to the following Azure geographies:
+
+* Switzerland
+
+When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Switzerland geography for active learning.
## Disable active learning
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/whats-new.md
description: This article is regularly updated with news about the Azure Cogniti
Previously updated : 02/16/2021 Last updated : 04/07/2021

# What's new in Language Understanding
Learn what's new in the service. These items include release notes, videos, blog
## Release notes
+### April 2021
+
+* Switzerland North [authoring region](luis-reference-regions.md#publishing-to-europe).
+ ### January 2021 * The V3 prediction API now supports the [Bing Spellcheck API](luis-tutorial-bing-spellcheck.md).
cognitive-services Get Started Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-intent-recognition.md
+
+ Title: "Intent recognition quickstart - Speech service"
+
+description: In this quickstart, you use intent recognition to interactively recognize intents from audio data captured from a microphone.
++++++ Last updated : 09/02/2020++
+zone_pivot_groups: programming-languages-set-two-with-js
+keywords: intent recognition
++
+# Get started with intent recognition
+++++++
cognitive-services Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/intent-recognition.md
Using intent recognition, your applications, tools, and devices can determine wh
## Get started
-See the [quickstart](quickstarts/intent-recognition.md) to get started with intent recognition.
+See the [quickstart](get-started-intent-recognition.md) to get started with intent recognition.
## Sample code
Sample code for intent recognition:
## Next steps
-* Complete the intent recognition [quickstart](quickstarts/intent-recognition.md)
+* Complete the intent recognition [quickstart](get-started-intent-recognition.md)
* [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
* [Get the Speech SDK](speech-sdk.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Neural voices can be used to make interactions with chatbots and voice assistant
| Thai (Thailand) | `th-TH` | Male | `th-TH-NiwatNeural` | General |
| Turkish (Turkey) | `tr-TR` | Female | `tr-TR-EmelNeural` | General |
| Turkish (Turkey) | `tr-TR` | Male | `tr-TR-AhmetNeural` | General |
-| Ukrainian (Ukraine) | `uk-UA` | Female | `en-ZA-LeahNeural` <sup>New</sup> | General |
-| Ukrainian (Ukraine) | `uk-UA` | Male | `en-ZA-LukeNeural` <sup>New</sup> | General |
-| Urdu (Pakistan) | `ur-PK` | Female | `uk-UA-PolinaNeural` <sup>New</sup> | General |
-| Urdu (Pakistan) | `ur-PK` | Male | `uk-UA-OstapNeural` <sup>New</sup> | General |
+| Ukrainian (Ukraine) | `uk-UA` | Female | `uk-UA-PolinaNeural` <sup>New</sup> | General |
+| Ukrainian (Ukraine) | `uk-UA` | Male | `uk-UA-OstapNeural` <sup>New</sup> | General |
+| Urdu (Pakistan) | `ur-PK` | Female | `ur-PK-UzmaNeural` <sup>New</sup> | General |
+| Urdu (Pakistan) | `ur-PK` | Male | `ur-PK-AsadNeural` <sup>New</sup> | General |
| Vietnamese (Vietnam) | `vi-VN` | Female | `vi-VN-HoaiMyNeural` | General |
| Vietnamese (Vietnam) | `vi-VN` | Male | `vi-VN-NamMinhNeural` | General |
| Welsh (UK) | `cy-GB` | Female | `cy-GB-NiaNeural` <sup>New</sup> | General |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/overview.md
The following features are part of the Speech service. Use the links in this tab
| | [Multi-device Conversation](multi-device-conversation.md) | Connect multiple devices or clients in a conversation to send speech- or text-based messages, with easy support for transcription and translation| Yes | No |
| | [Conversation Transcription](./conversation-transcription.md) | Enables real-time speech recognition, speaker identification, and diarization. It's perfect for transcribing in-person meetings with the ability to distinguish speakers. | Yes | No |
| | [Create Custom Speech Models](#customize-your-speech-experience) | If you are using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. | No | [Yes](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
-| [Text-to-Speech](text-to-speech.md) | Text-to-speech | Text-to-speech converts input text into human-like synthesized speech using [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). Choose from standard voices and neural voices (see [Language support](language-support.md)). | [Yes](./speech-sdk.md) | [Yes](#reference-docs) |
+| [Text-to-Speech](text-to-speech.md) | Text-to-speech | Text-to-speech converts input text into human-like synthesized speech using [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). Use neural voices, which are human-like voices powered by deep neural networks. See [Language support](language-support.md). | [Yes](./speech-sdk.md) | [Yes](#reference-docs) |
| | [Create Custom Voices](#customize-your-speech-experience) | Create custom voice fonts unique to your brand or product. | No | [Yes](#reference-docs) |
| [Speech Translation](speech-translation.md) | Speech translation | Speech translation enables real-time, multi-language translation of speech to your applications, tools, and devices. Use this service for speech-to-speech and speech-to-text translation. | [Yes](./speech-sdk.md) | No |
| [Voice assistants](voice-assistants.md) | Voice assistants | Voice assistants using the Speech service empower developers to create natural, human-like conversational interfaces for their applications and experiences. The voice assistant service provides fast, reliable interaction between a device and an assistant implementation that uses the Bot Framework's Direct Line Speech channel or the integrated Custom Commands service for task completion. | [Yes](voice-assistants.md) | No |
We offer quickstarts in most popular programming languages, each designed to tea
* [Speech-to-text quickstart](get-started-speech-to-text.md)
* [Text-to-speech quickstart](get-started-text-to-speech.md)
* [Speech translation quickstart](./get-started-speech-translation.md)
-* [Intent recognition quickstart](quickstarts/intent-recognition.md)
+* [Intent recognition quickstart](./get-started-intent-recognition.md)
* [Speaker recognition quickstart](./get-started-speaker-recognition.md)

After you've had a chance to get started with the Speech service, try our tutorials that show you how to solve various scenarios.
cognitive-services Create Luis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md
On the **Keys and Endpoint settings** page:
## Next steps

> [!div class="nextstepaction"]
-> [Recognize Intents](~/articles/cognitive-services/Speech-Service/quickstarts/intent-recognition.md)
+> [Recognize Intents](~/articles/cognitive-services/Speech-Service/get-started-intent-recognition.md)
cognitive-services Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/quickstarts/intent-recognition.md
- Title: "Intent recognition quickstart - Speech service"-
-description: In this quickstart, you use intent recognition to interactively recognize intents from audio data captured from a microphone.
------ Previously updated : 09/02/2020--
-zone_pivot_groups: programming-languages-set-two-with-js
-keywords: intent recognition
--
-# Get started with intent recognition
-------
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
Custom Neural Voice feature requires registration and Microsoft may limit access
**New features**
- **All**: New 48KHz output formats available for the private preview of custom neural voice through the TTS speech synthesis API: Audio48Khz192KBitRateMonoMp3, audio-48khz-192kbitrate-mono-mp3, Audio48Khz96KBitRateMonoMp3, audio-48khz-96kbitrate-mono-mp3, Raw48Khz16BitMonoPcm, raw-48khz-16bit-mono-pcm, Riff48Khz16BitMonoPcm, riff-48khz-16bit-mono-pcm.
- **All**: Custom voice is also easier to use. Added support for setting custom voice via `EndpointId` ([C++](/cpp/cognitive-services/speech/speechconfig#setendpointid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.endpointid#Microsoft_CognitiveServices_Speech_SpeechConfig_EndpointId), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setendpointid#com_microsoft_cognitiveservices_speech_SpeechConfig_setEndpointId_String_), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#endpointId), [Objective-C](/objectivec/cognitive-services/speech/spxspeechconfiguration#endpointid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#endpoint-id)). Before this change, custom voice users needed to set the endpoint URL via the `FromEndpoint` method. Now customers can use the `FromSubscription` method just like public voices, and then provide the deployment ID by setting `EndpointId`. This simplifies setting up custom voices.
-- **C++/C#/Jav#add-a-languageunderstandingmodel-and-intents).
+- **C++/C#/Jav#add-a-languageunderstandingmodel-and-intents).
- **C++/C#/Java**: Make your voice assistant or bot stop listening immediately. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector)) now has a `StopListeningAsync()` method to accompany `ListenOnceAsync()`. This will immediately stop audio capture and gracefully wait for a result, making it perfect for use with "stop now" button-press scenarios; a sketch follows after this list.
- **C++/C#/Java/JavaScript**: Make your voice assistant or bot react better to underlying system errors. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector)) now has a new `TurnStatusReceived` event handler. These optional events correspond to every [`ITurnContext`](/dotnet/api/microsoft.bot.builder.iturncontext) resolution on the Bot and will report turn execution failures when they happen, e.g. as a result of an unhandled exception, timeout, or network drop between Direct Line Speech and the bot. `TurnStatusReceived` makes it easier to respond to failure conditions. For example, if a bot takes too long on a backend database query (e.g. looking up a product), `TurnStatusReceived` allows the client to know to reprompt with "sorry, I didn't quite get that, could you please try again" or something similar.
- **C++/C#**: Use the Speech SDK on more platforms. The [Speech SDK NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) now supports Windows ARM/ARM64 desktop native binaries (UWP was already supported) to make the Speech SDK more useful on more machine types.
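As a rough C# illustration of the stop-listening behavior described above (a sketch assuming a Direct Line Speech subscription; the key and region are placeholders):

```csharp
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Dialog;

// Sketch: start a single listen turn, then cut audio capture short when the
// user presses a "stop now" button. Key and region are placeholders.
var config = BotFrameworkConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
using var connector = new DialogServiceConnector(config, AudioConfig.FromDefaultMicrophoneInput());
await connector.ConnectAsync();

var listenTurn = connector.ListenOnceAsync();

// ... later, for example in the button's click handler:
await connector.StopListeningAsync();   // stops capture, then gracefully waits for a result
var result = await listenTurn;
```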
Stay healthy!
**Samples**
- **Go**: Added quickstarts for [speech recognition](./get-started-speech-to-text.md?pivots=programming-language-go) and [custom voice assistant](./quickstarts/voice-assistants.md?pivots=programming-language-go). Find sample code [here](https://github.com/microsoft/cognitive-services-speech-sdk-go/tree/master/samples).
-- **JavaScript**: Added quickstarts for [Text-to-speech](./get-started-text-to-speech.md?pivots=programming-language-javascript), [Translation](./get-started-speech-translation.md?pivots=programming-language-csharp&tabs=script), and [Intent Recognition](./quickstarts/intent-recognition.md?pivots=programming-language-javascript).
+- **JavaScript**: Added quickstarts for [Text-to-speech](./get-started-text-to-speech.md?pivots=programming-language-javascript), [Translation](./get-started-speech-translation.md?pivots=programming-language-csharp&tabs=script), and [Intent Recognition](./get-started-intent-recognition.md?pivots=programming-language-javascript).
- Keyword recognition samples for [C\#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer) and [Java](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer) (Android).

**COVID-19 abridged testing:**
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The Speech service implementation of SSML is based on World Wide Web Consortium'
> [!IMPORTANT]
> Chinese, Japanese, and Korean characters count as two characters for billing. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-## Standard, neural, and custom voices
+## Neural and custom voices
-Choose from standard and neural voices, or create your own custom voice unique to your product or brand. 75+ standard voices are available in more than 45 languages and locales, and 5 neural voices are available in four languages and locales. For a complete list of supported languages, locales, and voices (neural and standard), see [language support](language-support.md).
-
-To learn more about standard, neural, and custom voices, see [Text-to-speech overview](text-to-speech.md).
+Use a human-like neural voice, or create your own custom voice unique to your product or brand. For a complete list of supported languages, locales, and voices, see [language support](language-support.md). To learn more about neural and custom voices, see [Text-to-speech overview](text-to-speech.md).
> [!NOTE]
speechConfig!.setPropertyTo(
## Adjust speaking styles
-> [!IMPORTANT]
-> The adjustment of speaking styles will only work with neural voices.
-
-By default, the text-to-speech service synthesizes text using a neutral speaking style for both standard and neural voices. With neural voices, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm, or optimize the voice for different scenarios like customer service, newscasting and voice assistant, using the `mstts:express-as` element. This is an optional element unique to the Speech service.
+By default, the text-to-speech service synthesizes text using a neutral speaking style for neural voices. You can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm, or optimize the voice for different scenarios like customer service, newscasting and voice assistant, using the `mstts:express-as` element. This is an optional element unique to the Speech service.
-Currently, speaking style adjustments are supported for these neural voices:
+Currently, speaking style adjustments are supported for the following neural voices:
* `en-US-AriaNeural`
* `en-US-JennyNeural`
* `en-US-GuyNeural`
Currently, speaking style adjustments are supported for these neural voices:
The intensity of speaking style can be further changed to better fit your use case. You can specify a stronger or softer style with `styledegree` to make the speech more expressive or subdued. Currently, speaking style adjustments are supported for Chinese (Mandarin, Simplified) neural voices.
-Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice will imitate a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name will not be changed. Currently, role-play adjustments are supported for these Chinese (Mandarin, Simplified) neural voices:
+Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice will imitate a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name will not be changed. Currently, role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices:
* `zh-CN-XiaomoNeural`
* `zh-CN-XiaoxuanNeural`
-Above changes are applied at the sentence level, and styles and role-plays vary by voice. If a style or role-play isn't supported, the service will return speech in the default neutral speaking way. You can see what styles and role-play are supported for each voice through the [voice list API](rest-text-to-speech.md#get-a-list-of-voices) or through the code-free [Audio Content Creation](https://aka.ms/audiocontentcreation) platform.
+The above changes are applied at the sentence level, and styles and roles vary by voice. If a style or role isn't supported, the service returns speech in the default neutral speaking style. You can see what styles and roles are supported for each voice through the [voice list API](rest-text-to-speech.md#get-a-list-of-voices) or through the code-free [Audio Content Creation](https://aka.ms/audiocontentcreation) platform.
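Before the formal syntax below, here's a minimal hedged C# sketch (Speech SDK; the key and region are placeholders) that applies a speaking style with the `mstts:express-as` element:

```csharp
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class ExpressAsDemo
{
    static async Task Main()
    {
        // Placeholders: substitute your Speech resource key and region.
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        using var synthesizer = new SpeechSynthesizer(config);

        // mstts:express-as selects the speaking style; styledegree (currently
        // zh-CN voices only, per the notes above) would scale its intensity.
        string ssml = @"
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
       xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'>
  <voice name='en-US-AriaNeural'>
    <mstts:express-as style='cheerful'>
      That'd be just amazing!
    </mstts:express-as>
  </voice>
</speak>";

        await synthesizer.SpeakSsmlAsync(ssml);
    }
}
```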
**Syntax**
In the sample above, we're using the International Phonetic Alphabet, also known
Considering that the IPA is not easy to remember, the Speech service defines a phonetic set for seven languages (`en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`).
-You can use the `sapi` as the vale for the `alphabet` attribute with custom lexicons as demonstrated below:
+You can use the `sapi` as the value for the `alphabet` attribute with custom lexicons as demonstrated below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
```
Pitch changes can be applied to standard voices at the word or sentence-level. W
```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-Guy24kRUS">
+ <voice name="en-US-AriaNeural">
        Welcome to <prosody pitch="high">Microsoft Cognitive Services Text-to-Speech API.</prosody>
    </voice>
</speak>
```
cognitive-services Spx Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-overview.md
Use the Speech SDK when:
* Speech recognition - Convert speech-to-text either from audio files or directly from a microphone, or transcribe a recorded conversation.
-* Speech synthesis - Convert text-to-speech using either input from text files, or input directly from the command line. Customize speech output characteristics using [SSML configurations](speech-synthesis-markup.md), and either [standard or neural voices](speech-synthesis-markup.md#standard-neural-and-custom-voices).
+* Speech synthesis - Convert text-to-speech using either input from text files, or input directly from the command line. Customize speech output characteristics using [SSML configurations](speech-synthesis-markup.md), and either [standard or neural voices](speech-synthesis-markup.md#neural-and-custom-voices).
* Speech translation - Translate audio in a source language to text or audio in a target language.
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
keywords: text to speech
[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
-In this overview, you learn about the benefits and capabilities of the text-to-speech service, which enables your applications, tools, or devices to convert text into human-like synthesized speech. Choose from standard and neural voices, or create a custom voice unique to your product or brand. 75+ standard voices are available in more than 45 languages and locales, and 5 neural voices are available in a select number of languages and locales. For a full list of supported voices, languages, and locales, see [supported languages](language-support.md#text-to-speech).
+In this overview, you learn about the benefits and capabilities of the text-to-speech service, which enables your applications, tools, or devices to convert text into human-like synthesized speech. Use human-like neural voices, or create a custom voice unique to your product or brand. For a full list of supported voices, languages, and locales, see [supported languages](language-support.md#text-to-speech).
This documentation contains the following article types:
This documentation contains the following article types:
* Asynchronous synthesis of long audio - Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example audio books or lectures). Unlike synthesis performed using the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and that the synthesized audio is downloaded when made available from the service. Only custom neural voices are supported.
-* Standard voices - Created using Statistical Parametric Synthesis and/or Concatenation Synthesis techniques. These voices are highly intelligible and sound natural. You can easily enable your applications to speak in more than 45 languages, with a wide range of voice options. These voices provide high pronunciation accuracy, including support for abbreviations, acronym expansions, date/time interpretations, polyphones, and more. For a full list of standard voices, see [supported languages](language-support.md#text-to-speech).
* Neural voices - Deep neural networks are used to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding outputs. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems. For a full list of neural voices, see [supported languages](language-support.md#text-to-speech).
* Adjust speaking styles with SSML - Speech Synthesis Markup Language (SSML) is an XML-based markup language used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, speed up or slow down speaking rate, increase or decrease volume, and attribute multiple voices to a single document. See the [how-to](speech-synthesis-markup.md) for adjusting speaking styles.
This documentation contains the following article types:
* Visemes - [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw and tongue when producing a particular phoneme. Visemes have a strong correlation with voices and phonemes. Using viseme events in Speech SDK, you can generate facial animation data, which can be used to animate faces in lip-reading communication, education, entertainment, and customer service.

> [!NOTE]
-> Viseme only works for `en-US-AriaNeural` voice for now.
+> Viseme events are currently only supported for the `en-US-AriaNeural` voice.
## Get started
Sample code for text-to-speech is available on GitHub. These samples cover text-
## Customization
-In addition to standard and neural voices, you can create and fine-tune custom voices unique to your product or brand. All it takes to get started are a handful of audio files and the associated transcriptions. For more information, see [Get started with Custom Voice](how-to-custom-voice.md)
+In addition to neural voices, you can create and fine-tune custom voices unique to your product or brand. All it takes to get started are a handful of audio files and the associated transcriptions. For more information, see [Get started with Custom Voice](how-to-custom-voice.md).
## Pricing note
confidential-computing Confidential Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster by using Azure CLI with confidential computing nodes'
-description: In this quickstart, you will learn to create an AKS cluster with confidential nodes and deploy a hello world app using the Azure CLI.
+ Title: 'Quickstart: Deploy an AKS cluster with confidential computing nodes by using the Azure CLI'
+description: Learn how to create an Azure Kubernetes Service (AKS) cluster with confidential nodes and deploy a Hello World app by using the Azure CLI.
-# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes (DCsv2) using Azure CLI
+# Quickstart: Deploy an AKS cluster with confidential computing nodes by using the Azure CLI
-This quickstart is intended for developers or cluster operators who want to quickly create an AKS cluster and deploy an application to monitor applications using the managed Kubernetes service in Azure. You can also provision the cluster and add confidential computing nodes from Azure portal.
+In this quickstart, you'll use the Azure CLI to deploy an Azure Kubernetes Service (AKS) cluster with confidential computing (DCsv2) nodes. You'll then run a simple Hello World application in an enclave. You can also provision a cluster and add confidential computing nodes from the Azure portal, but this quickstart focuses on the Azure CLI.
-## Overview
+AKS is a managed Kubernetes service that enables developers or cluster operators to quickly deploy and manage clusters. To learn more, read the [AKS introduction](../aks/intro-kubernetes.md) and the [overview of AKS confidential nodes](confidential-nodes-aks-overview.md).
-In this quickstart, you'll learn how to deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes using the Azure CLI and run a simple hello world application in an enclave. AKS is a managed Kubernetes service that lets you quickly deploy and manage clusters. To learn more, read the [AKS Introduction](../aks/intro-kubernetes.md) and the [AKS Confidential Nodes Overview](confidential-nodes-aks-overview.md).
+Features of confidential computing nodes include:
-> [!NOTE]
-> Confidential computing DCsv2 VMs leverage specialized hardware that is subject to higher pricing and region availability. For more information, see the virtual machines page for [available SKUs and supported regions](virtual-machine-solutions.md).
-
-### Confidential computing node features (DCsv2)
+- Linux worker nodes supporting Linux containers.
+- Generation 2 virtual machine (VM) nodes running Ubuntu 18.04.
+- Intel SGX-based CPU with Encrypted Page Cache Memory (EPC). For more information, see [Frequently asked questions for Azure confidential computing](./faq.md).
+- Support for Kubernetes version 1.16+.
+- Intel SGX DCAP Driver preinstalled on the AKS nodes. For more information, see [Frequently asked questions for Azure confidential computing](./faq.md).
-1. Linux Worker Nodes supporting Linux Containers.
-1. Generation 2 VM with Ubuntu 18.04 Virtual Machines Nodes.
-1. Intel SGX-based CPU with Encrypted Page Cache Memory (EPC). Read more [here](./faq.md).
-1. Support for Kubernetes version 1.16+.
-1. Intel SGX DCAP Driver pre-installed on the AKS Nodes. Read more [here](./faq.md).
+> [!NOTE]
+> DCsv2 VMs use specialized hardware that's subject to higher pricing and region availability. For more information, see the [available SKUs and supported regions](virtual-machine-solutions.md).
## Prerequisites

This quickstart requires:
-1. An active Azure Subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-1. Azure CLI version 2.0.64 or later installed and configured on your deployment machine (Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](../container-registry/container-registry-get-started-azure-cli.md).
-1. A minimum of six **DCsv2** cores available in your subscription for use. By default, the VM cores quota for the confidential computing per Azure subscription is eight cores. If you plan to provision a cluster that requires more than eight cores, follow [these](../azure-portal/supportability/per-vm-quota-requests.md) instructions to raise a quota increase ticket.
+- An active Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Azure CLI version 2.0.64 or later installed and configured on your deployment machine.
+
+ Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](../container-registry/container-registry-get-started-azure-cli.md).
+- A minimum of six DCsv2 cores available in your subscription.
+
+ By default, the quota for confidential computing per Azure subscription is eight VM cores. If you plan to provision a cluster that requires more than eight cores, follow [these instructions](../azure-portal/supportability/per-vm-quota-requests.md) to raise a quota-increase ticket.
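Before you request more quota, you can check which DCsv2 sizes a region offers. As a sketch (the region name here is just an example):

```azurecli-interactive
az vm list-skus --location westus2 --size Standard_DC --all --output table
```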
-## Create a new AKS cluster with confidential computing nodes and add-on
+## Create an AKS cluster with confidential computing nodes and add-on
-Follow the below instructions to add confidential computing capable nodes with add-on.
+Use the following instructions to create an AKS cluster with the confidential computing add-on enabled, add a node pool to the cluster, and verify what you created.
### Create an AKS cluster with a system node pool
-If you already have an AKS cluster that meets the above requirements, [skip to the existing cluster section](#existing-cluster) to add a new confidential computing node pool.
+> [!NOTE]
+> If you already have an AKS cluster that meets the prerequisite criteria listed earlier, [skip to the next section](#add-a-user-node-pool-with-confidential-computing-capabilities-to-the-aks-cluster) to add a confidential computing node pool.
-First, create a resource group for the cluster using the [az group create][az-group-create] command. The following example creates a resource group name *myResourceGroup* in the *westus2* region:
+First, create a resource group for the cluster by using the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *westus2* region:
```azurecli-interactive
az group create --name myResourceGroup --location westus2
```
-Now create an AKS cluster using the [az aks create][az-aks-create] command:
+Now create an AKS cluster, with the confidential computing add-on enabled, by using the [az aks create][az-aks-create] command:
```azurecli-interactive
az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addons confcom
```
-The above creates a new AKS cluster with a system node pool with the add-on enabled. Next, add a user node pool with confidential computing capabilities to the AKS cluster.
-
-### Add a confidential computing node pool to the AKS cluster
+### Add a user node pool with confidential computing capabilities to the AKS cluster
-Run the following command to add a user node pool of `Standard_DC2s_v2` size with three nodes. You can choose another SKU from the supported list of [DCsv2 SKUs and regions](../virtual-machines/dcv2-series.md).
+Run the following command to add a user node pool of `Standard_DC2s_v2` size with three nodes to the AKS cluster. You can choose another SKU from the [list of supported DCsv2 SKUs and regions](../virtual-machines/dcv2-series.md).
```azurecli-interactive
az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-vm-size Standard_DC2s_v2
```
-After running, a new node pool with **DCsv2** should be visible with confidential computing add-on daemonsets ([SGX Device Plugin](confidential-nodes-aks-overview.md#sgx-plugin)).
+After you run the command, a new node pool with DCsv2 should be visible with confidential computing add-on DaemonSets ([SGX device plug-in](confidential-nodes-aks-overview.md#confidential-computing-add-on-for-aks)).
### Verify the node pool and add-on
-Get the credentials for your AKS cluster using the [az aks get-credentials][az-aks-get-credentials] command:
+Get the credentials for your AKS cluster by using the [az aks get-credentials][az-aks-get-credentials] command:
```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```
-Verify the nodes are created properly and the SGX-related daemonsets are running on **DCsv2** node pools using kubectl get pods & nodes command as shown below:
+Use the `kubectl get pods` command to verify that the nodes are created properly and the SGX-related DaemonSets are running on DCsv2 node pools:
```console
$ kubectl get pods --all-namespaces
kube-system sgx-device-plugin-xxxx 1/1 Running
```
-If the output matches the above, then your AKS cluster is now ready to run confidential applications.
+If the output matches the preceding code, your AKS cluster is now ready to run confidential applications.
-Go to the [Hello World from Enclave](#hello-world) deployment section to test an app in an enclave. Or follow the below instructions to add additional node pools on AKS (AKS supports mixing SGX node pools and non-SGX node pools).
+You can go to the [Deploy Hello World from an isolated enclave application](#hello-world) section in this quickstart to test an app in an enclave. Or use the following instructions to add more node pools on AKS. (AKS supports mixing SGX node pools and non-SGX node pools.)
## Add a confidential computing node pool to an existing AKS cluster<a id="existing-cluster"></a>
-This section assumes you have an AKS cluster running already that meets the criteria listed in the prerequisites section (applies to add-on).
+This section assumes you're already running an AKS cluster that meets the prerequisite criteria listed earlier in this quickstart.
### Enable the confidential computing AKS add-on on the existing cluster
Run the following command to enable the confidential computing add-on:
```azurecli-interactive
az aks enable-addons --addons confcom --name MyManagedCluster --resource-group MyResourceGroup
```
-### Add a **DCsv2** user node pool to the cluster
+### Add a DCsv2 user node pool to the cluster
> [!NOTE]
-> To use the confidential computing capability, your existing AKS cluster needs to have at minimum one **DCsv2** VM SKU based node pool. To learn more about confidential computing DCs-v2 VMs SKU's, see [available SKUs and supported regions](virtual-machine-solutions.md).
+> To use the confidential computing capability, your existing AKS cluster needs to have a minimum of one node pool that's based on a DCsv2 VM SKU. To learn more about DCsv2 VM SKUs for confidential computing, see the [available SKUs and supported regions](virtual-machine-solutions.md).
-Run the following command to create a new node pool:
+Run the following command to create a node pool:
```azurecli-interactive
az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-count 1 --node-vm-size Standard_DC4s_v2
```
-Verify the new node pool with the name confcompool1 has been created:
+Verify that the new node pool with the name *confcompool1* has been created:
```azurecli-interactive
az aks nodepool list --cluster-name myAKSCluster --resource-group myResourceGroup
```
-### Verify that daemonsets are running on confidential node pools
+### Verify that DaemonSets are running on confidential node pools
-Sign in to your existing AKS cluster to perform the following verification.
+Sign in to your existing AKS cluster to perform the following verification:
```console
kubectl get nodes
```
-The output should show the newly added confcompool1 on the AKS cluster. You may also see other daemonsets.
+The output should show the newly added *confcompool1* pool on the AKS cluster. You might also see other DaemonSets.
```console
$ kubectl get pods --all-namespaces
kube-system sgx-device-plugin-xxxx 1/1 Running
```
-If the output matches the above, then your AKS cluster is now ready to run confidential applications. Follow the instructions below to deploy a test application.
+If the output matches the preceding code, your AKS cluster is now ready to run confidential applications.
-## Hello World from isolated enclave application <a id="hello-world"></a>
-Create a file named *hello-world-enclave.yaml* and paste the following YAML manifest. This Open Enclave based sample application code can be found in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). The following deployment assumes you have deployed the addon "confcom".
+## Deploy Hello World from an isolated enclave application <a id="hello-world"></a>
+You're now ready to deploy a test application.
+
+Create a file named *hello-world-enclave.yaml* and paste in the following YAML manifest. You can find this sample application code in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). This deployment assumes that you've deployed the *confcom* add-on.
```yaml
apiVersion: batch/v1
spec:
        image: oeciteam/sgx-test:1.0
        resources:
          limits:
- kubernetes.azure.com/sgx_epc_mem_in_MiB: 5 # This limit will automatically place the job into confidential computing node. Alternatively you can target deployment to nodepools
+ kubernetes.azure.com/sgx_epc_mem_in_MiB: 5 # This limit will automatically place the job into a confidential computing node. Alternatively, you can target deployment to node pools
      restartPolicy: Never
  backoffLimit: 0
```
-Now use the kubectl apply command to create a sample job that will launch in a secure enclave, as shown in the following example output:
+Now use the `kubectl apply` command to create a sample job that will open in a secure enclave, as shown in the following example output:
```console
$ kubectl apply -f hello-world-enclave.yaml
job "sgx-test" created
```
-You can confirm that the workload successfully created a Trusted Execution Environment (Enclave) by running the following commands:
+You can confirm that the workload successfully created a Trusted Execution Environment (enclave) by running the following commands:
```console
$ kubectl get jobs -l app=sgx-test
Enclave called into host to print: Hello World!
```
## Clean up resources
-To remove the associated node pools or delete the AKS cluster, use the following commands:
-
-### Remove the confidential computing node pool
+To remove the confidential computing node pool that you created in this quickstart, use the following command:
```azurecli-interactive
az aks nodepool delete --cluster-name myAKSCluster --name myNodePoolName --resource-group myResourceGroup
```
-### Delete the AKS cluster
+To delete the AKS cluster, use the following command:
```azurecli-interactive
az aks delete --resource-group myResourceGroup --name myAKSCluster
```
## Next steps
-* Run Python, Node, etc. applications confidentially through confidential containers by visiting [confidential container samples](https://github.com/Azure-Samples/confidential-container-samples).
+* Run Python, Node, or other applications through confidential containers by using the [confidential container samples in GitHub](https://github.com/Azure-Samples/confidential-container-samples).
-* Run Enclave aware applications by visiting [Enclave Aware Azure Container Samples](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/).
+* Run enclave-aware applications by using the [enclave-aware Azure container samples in GitHub](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/).
<!-- LINKS -->
[az-group-create]: /cli/azure/group#az_group_create
cosmos-db Create Sql Api Dotnet V4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-dotnet-v4.md
ms.devlang: dotnet Previously updated : 09/22/2020 Last updated : 04/07/2021
-# Quickstart: Build a console app using the .NET V4 SDK to manage Azure Cosmos DB SQL API account resources.
+# Quickstart: Build a console app using the .NET V4 SDK (Preview) to manage Azure Cosmos DB SQL API account resources
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]

> [!div class="op_single_selector"]
> * [Python](create-sql-api-python.md)
> * [Xamarin](create-sql-api-xamarin-dotnet.md)
-Get started with the Azure Cosmos DB SQL API client library for .NET. Follow the steps in this doc to install the .NET V4 (Azure.Cosmos) package, build an app, and try out the example code for basic CRUD operations on the data stored in Azure Cosmos DB.
+Get started with the Azure Cosmos DB SQL API client library for .NET. Follow the steps in this doc to install the .NET V4 (Azure.Cosmos) package, build an app, and try out the example code for basic CRUD operations on the data stored in Azure Cosmos DB.
+
+> [!IMPORTANT]
+> The .NET V4 SDK for Azure Cosmos DB is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value, document, and graph databases. Use the Azure Cosmos DB SQL API client library for .NET to:
cosmos-db Performance Tips Dotnet Sdk V3 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/performance-tips-dotnet-sdk-v3-sql.md
Throughput is provisioned based on the number of [Request Units](request-units.m
The complexity of a query affects how many Request Units are consumed for an operation. The number of predicates, the nature of the predicates, the number of UDF files, and the size of the source dataset all influence the cost of query operations.
-To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header (or the equivalent `RequestCharge` property in `ResourceResponse\<T>` or `FeedResponse\<T>` in the .NET SDK) to measure the number of Request Units consumed by the operations:
+To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header (or the equivalent `RequestCharge` property in `ResourceResponse<T>` or `FeedResponse<T>` in the .NET SDK) to measure the number of Request Units consumed by the operations:
```csharp
// Measure the performance (Request Units) of writes
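// A minimal sketch, not the article's full sample: 'container' is assumed to be
// an initialized Container and 'item' a serializable payload; both are placeholders.
ItemResponse<dynamic> response = await container.CreateItemAsync<dynamic>(item);
Console.WriteLine($"Request charge of write operation: {response.RequestCharge} RUs");
```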
The request charge (that is, the request-processing cost) of a specified operati
## Next steps

For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
-To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](partitioning-overview.md).
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](partitioning-overview.md).
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-expression-language-functions.md
In the following example, the pipeline takes **inputPath** and **outputPath** pa
} } ```+
+### Replacing special characters
+
+The dynamic content editor automatically escapes characters like double quotes and backslashes in your content when you finish editing. This causes trouble if you want to replace a line feed or tab by using **\n** or **\t** in the replace() function. You can either edit your dynamic content in code view to remove the extra \ in the expression, or you can follow these steps to replace special characters by using the expression language:
+
+1. URL-encode the original string value.
+1. Replace the URL-encoded special characters, for example, line feed (%0A), carriage return (%0D), or horizontal tab (%09).
+1. URL-decode the result.
+
+For example, if the variable *companyName* has a newline character in its value, the expression `@uriComponentToString(replace(uriComponent(variables('companyName')), '%0A', ''))` removes the newline character.
+
+```json
+Contoso-
+Corporation
+```
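The same pattern extends to multiple special characters by nesting `replace()` calls. As a sketch, again assuming the *companyName* variable, the following expression removes both carriage return and line feed characters:

```
@uriComponentToString(replace(replace(uriComponent(variables('companyName')), '%0D', ''), '%0A', ''))
```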
+
+### Escaping single quote character
+
+Expression functions use single quotation marks for string value parameters. Use two single quotes to escape a ' character in string functions. For example, the expression `@concat('Baba', '''s ', 'book store')` returns the result below.
+
+```
+Baba's book store
+```
+ ### Tutorial

This [tutorial](https://azure.microsoft.com/mediahandler/files/resourcefiles/azure-data-factory-passing-parameters/Azure%20data%20Factory-Whitepaper-PassingParameters.pdf) walks you through how to pass parameters between a pipeline and activity, as well as between the activities.
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-sink.md
Previously updated : 03/10/2021 Last updated : 04/06/2021

# Sink transformation in mapping data flow
Below is a video tutorial on how to use database error row handling automaticall
## Data preview in sink
-When fetching a data preview on a debug cluster, no data will be written to your sink. A snapshot of what the data looks like will be returned, but nothing will be written to your destination. To test writing data into your sink, run a pipeline debug from the pipeline canvas.
+When you fetch a data preview in debug mode, no data is written to your sink. A snapshot of what the data looks like is returned, but nothing is written to your destination. To test writing data into your sink, run a pipeline debug from the pipeline canvas.
+
+## Data flow script
+
+### Example
+
+Below is an example of a sink transformation and its data flow script:
+
+```
+sink(input(
+ movie as integer,
+ title as string,
+ genres as string,
+ year as integer,
+ Rating as integer
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ deletable:false,
+ insertable:false,
+ updateable:true,
+ upsertable:false,
+ keys:['movie'],
+ format: 'table',
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true,
+ saveOrder: 1,
+ errorHandlingOption: 'stopOnFirstError') ~> sink1
+```
## Next steps
-Now that you've created your data flow, add a [data flow activity to your pipeline](concepts-data-flow-overview.md).
+Now that you've created your data flow, add a [data flow activity to your pipeline](concepts-data-flow-overview.md).
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delimited-text.md
The below table lists the properties supported by a delimited text sink. You can
| Clear the folder | If the destination folder is cleared prior to write | no | `true` or `false` | truncate |
| File name option | The naming format of the data written. By default, one file per partition in format `part-#####-tid-<guid>` | no | Pattern: String <br> Per partition: String[] <br> Name file as column data: String <br> Output to single file: `['<fileName>']` <br> Name folder as column data: String | filePattern <br> partitionFileNames <br> rowUrlColumn <br> partitionFileNames <br> rowFolderUrlColumn |
| Quote all | Enclose all values in quotes | no | `true` or `false` | quoteAll |
-rowFolderUrlColumn:
+| Header | Add custom headers to output files | no | `[<string array>]` | header |
### Sink example
data-factory Transform Data Machine Learning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-machine-learning-service.md
mlPipelineId | ID of the published Azure Machine Learning pipeline | String (or
experimentName | Run history experiment name of the Machine Learning pipeline run | String (or expression with resultType of string) | No
mlPipelineParameters | Key, Value pairs to be passed to the published Azure Machine Learning pipeline endpoint. Keys must match the names of pipeline parameters defined in the published Machine Learning pipeline | Object with key value pairs (or Expression with resultType object) | No
mlParentRunId | The parent Azure Machine Learning pipeline run ID | String (or expression with resultType of string) | No
+dataPathAssignments | Dictionary used for changing datapaths in Azure Machine Learning. Enables the switching of datapaths | Object with key value pairs | No
continueOnStepFailure | Whether to continue execution of other steps in the Machine Learning pipeline run if a step fails | boolean | No

> [!NOTE]
data-factory Tutorial Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-copy-data-tool.md
Prepare your Blob storage and your SQL Database for the tutorial by performing t
1. Launch **Notepad**. Copy the following text and save it in a file named **inputEmp.txt** on your disk:
- ```
- FirstName|LastName
- John|Doe
- Jane|Doe
- ```
+ ```text
+ FirstName|LastName
+ John|Doe
+ Jane|Doe
+ ```
1. Create a container named **adfv2tutorial** and upload the inputEmp.txt file to the container. You can use the Azure portal or various tools like [Azure Storage Explorer](https://storageexplorer.com/) to perform these tasks.
Prepare your Blob storage and your SQL Database for the tutorial by performing t
1. Use the following SQL script to create a table named **dbo.emp** in your SQL Database:
- ```sql
- CREATE TABLE dbo.emp
- (
- ID int IDENTITY(1,1) NOT NULL,
- FirstName varchar(50),
- LastName varchar(50)
- )
- GO
-
- CREATE CLUSTERED INDEX IX_emp_ID ON dbo.emp (ID);
- ```
+ ```sql
+ CREATE TABLE dbo.emp
+ (
+ ID int IDENTITY(1,1) NOT NULL,
+ FirstName varchar(50),
+ LastName varchar(50)
+ )
+ GO
+ CREATE CLUSTERED INDEX IX_emp_ID ON dbo.emp (ID);
+ ```
2. Allow Azure services to access SQL Server. Verify that the setting **Allow Azure services and resources to access this server** is enabled for your server that's running SQL Database. This setting lets Data Factory write data to your database instance. To verify and turn on this setting, go to logical SQL server > Security > Firewalls and virtual networks > set the **Allow Azure services and resources to access this server** option to **ON**.
+ > [!NOTE]
+ > The option to **Allow Azure services and resources to access this server** enables network access to your SQL Server from any Azure resource, not just those in your subscription. For more information, see [Azure SQL Server Firewall rules](../azure-sql/database/firewall-configure.md). Instead, you can use [Private endpoints](../private-link/private-endpoint-overview.md) to connect to Azure PaaS services without using public IPs.
+ ## Create a data factory

1. On the left menu, select **Create a resource** > **Integration** > **Data Factory**:
- ![New data factory creation](./media/doc-common-process/new-azure-data-factory-menu.png)
+ ![New data factory creation](./media/doc-common-process/new-azure-data-factory-menu.png)
+ 1. On the **New data factory** page, under **Name**, enter **ADFTutorialDataFactory**.
- The name for your data factory must be _globally unique_. You might receive the following error message:
+ The name for your data factory must be _globally unique_. You might receive the following error message:
:::image type="content" source="./media/doc-common-process/name-not-available-error.png" alt-text="New data factory error message for duplicate name.":::
- If you receive an error message about the name value, enter a different name for the data factory. For example, use the name _**yourname**_**ADFTutorialDataFactory**. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md).
+ If you receive an error message about the name value, enter a different name for the data factory. For example, use the name _**yourname**_**ADFTutorialDataFactory**. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md).
1. Select the Azure **subscription** in which to create the new data factory.
1. For **Resource Group**, take one of the following steps:
- a. Select **Use existing**, and select an existing resource group from the drop-down list.
+ a. Select **Use existing**, and select an existing resource group from the drop-down list.
+
+ b. Select **Create new**, and enter the name of a resource group.
- b. Select **Create new**, and enter the name of a resource group.
-
- To learn about resource groups, see [Use resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
+ To learn about resource groups, see [Use resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
1. Under **version**, select **V2** for the version.
1. Under **location**, select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores (for example, Azure Storage and SQL Database) and computes (for example, Azure HDInsight) that are used by your data factory can be in other locations and regions.
1. Select **Create**.
1. After creation is finished, the **Data Factory** home page is displayed.
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
+ 1. To launch the Azure Data Factory user interface (UI) in a separate tab, select the **Author & Monitor** tile.

## Use the Copy Data tool to create a pipeline

1. On the **Let's get started** page, select the **Copy Data** tile to launch the Copy Data tool.
- ![Copy Data tool tile](./media/doc-common-process/get-started-page.png)
+ ![Copy Data tool tile](./media/doc-common-process/get-started-page.png)
+ 1. On the **Properties** page, under **Task name**, enter **CopyFromBlobToSqlPipeline**. Then select **Next**. The Data Factory UI creates a pipeline with the specified task name.
- ![Create a pipeline](./media/tutorial-copy-data-tool/create-pipeline.png)
+
+ ![Create a pipeline](./media/tutorial-copy-data-tool/create-pipeline.png)
1. On the **Source data store** page, complete the following steps:
- a. Click **+ Create new connection** to add a connection
+ a. Select **+ Create new connection** to add a connection
- b. Select **Azure Blob Storage** from the gallery, and then select **Continue**.
+ b. Select **Azure Blob Storage** from the gallery, and then select **Continue**.
- c. On the **New Linked Service** page, select your Azure subscription, and select your storage account from the **Storage account name** list. Test connection and then select **Create**.
+ c. On the **New Linked Service** page, select your Azure subscription, and select your storage account from the **Storage account name** list. Test connection and then select **Create**.
- d. Select the newly created linked service as source, then click **Next**.
+ d. Select the newly created linked service as source, then select **Next**.
- ![Select source linked service](./media/tutorial-copy-data-tool/select-source-linked-service.png)
+ ![Select source linked service](./media/tutorial-copy-data-tool/select-source-linked-service.png)
1. On the **Choose the input file or folder** page, complete the following steps:
- a. Click **Browse** to navigate to the **adfv2tutorial/input** folder, select the **inputEmp.txt** file, then click **Choose**.
+ a. Select **Browse** to navigate to the **adfv2tutorial/input** folder, select the **inputEmp.txt** file, then select **Choose**.
- b. Click **Next** to move to next step.
+ b. Select **Next** to move to next step.
1. On the **File format settings** page, enable the checkbox for *First row as header*. Notice that the tool automatically detects the column and row delimiters. Select **Next**. You can also preview data and view the schema of the input data on this page.
- ![File format settings](./media/tutorial-copy-data-tool/file-format-settings-page.png)
+ ![File format settings](./media/tutorial-copy-data-tool/file-format-settings-page.png)
+ 1. On the **Destination data store** page, complete the following steps:
- a. Click **+ Create new connection** to add a connection
+ a. Select **+ Create new connection** to add a connection
- b. Select **Azure SQL Database** from the gallery, and then select **Continue**.
+ b. Select **Azure SQL Database** from the gallery, and then select **Continue**.
- c. On the **New Linked Service** page, select your server name and DB name from the dropdown list, and specify the username and password, then select **Create**.
+ c. On the **New Linked Service** page, select your server name and DB name from the dropdown list, and specify the username and password, then select **Create**.
- ![Configure Azure SQL DB](./media/tutorial-copy-data-tool/config-azure-sql-db.png)
+ ![Configure Azure SQL DB](./media/tutorial-copy-data-tool/config-azure-sql-db.png)
- d. Select the newly created linked service as sink, then click **Next**.
+ d. Select the newly created linked service as sink, then select **Next**.
1. On the **Table mapping** page, select the **[dbo].[emp]** table, and then select **Next**. 1. On the **Column mapping** page, notice that the second and the third columns in the input file are mapped to the **FirstName** and **LastName** columns of the **emp** table. Adjust the mapping to make sure that there is no error, and then select **Next**.
- ![Column mapping page](./media/tutorial-copy-data-tool/column-mapping.png)
+ ![Column mapping page](./media/tutorial-copy-data-tool/column-mapping.png)
1. On the **Settings** page, select **Next**.
Prepare your Blob storage and your SQL Database for the tutorial by performing t
1. On the **Deployment page**, select **Monitor** to monitor the pipeline (task).
- ![Monitor pipeline](./media/tutorial-copy-data-tool/monitor-pipeline.png)
-
-1. On the Pipeline runs page, select **Refresh** to refresh the list. Click the link under **PIPELINE NAME** to view activity run details or rerun the pipeline.
- ![Pipeline run](./media/tutorial-copy-data-tool/pipeline-run.png)
+ ![Monitor pipeline](./media/tutorial-copy-data-tool/monitor-pipeline.png)
+
+1. On the Pipeline runs page, select **Refresh** to refresh the list. Select the link under **PIPELINE NAME** to view activity run details or rerun the pipeline.
+
+ ![Pipeline run](./media/tutorial-copy-data-tool/pipeline-run.png)
1. On the Activity runs page, select the **Details** link (eyeglasses icon) under the **ACTIVITY NAME** column for more details about copy operation. To go back to the Pipeline Runs view, select the **ALL pipeline runs** link in the breadcrumb menu. To refresh the view, select **Refresh**.
- ![Monitor activity runs](./media/tutorial-copy-data-tool/activity-monitoring.png)
+ ![Monitor activity runs](./media/tutorial-copy-data-tool/activity-monitoring.png)
1. Verify that the data is inserted into the **dbo.emp** table in your SQL Database. 1. Select the **Author** tab on the left to switch to the editor mode. You can update the linked services, datasets, and pipelines that were created via the tool by using the editor. For details on editing these entities in the Data Factory UI, see [the Azure portal version of this tutorial](tutorial-copy-data-portal.md).
- ![Select Author tab](./media/tutorial-copy-data-tool/author-tab.png)
+ ![Select Author tab](./media/tutorial-copy-data-tool/author-tab.png)
## Next steps

The pipeline in this sample copies data from Blob storage to a SQL Database. You learned how to:
data-share How To Share From Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-storage.md
Storage snapshot performance is impacted by a number of factors in addition to n
* Concurrent access to the source and target data stores.
* Location of source and target data stores.
-* For incremental snapshot, number of files in the shared dataset can impact the time takes to find the list of files with last modified time after the last successful snapshot.
+* For incremental snapshot, the number of files in the shared dataset can impact the time it takes to find the list of files with last modified time after the last successful snapshot.
## Next steps
-You've learned how to share and receive data from a storage account by using the Azure Data Share service. To learn about sharing from other data sources, see [Supported data stores](supported-data-stores.md).
+You've learned how to share and receive data from a storage account by using the Azure Data Share service. To learn about sharing from other data sources, see [Supported data stores](supported-data-stores.md).
databox-online Azure Stack Edge Gpu Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-quickstart.md
Previously updated : 01/27/2021 Last updated : 04/07/2021 # Customer intent: As an IT admin, I need to understand how to prepare the portal to quickly deploy Azure Stack Edge so I can use it to transfer data to Azure.
Before you deploy, make sure that the following prerequisites are in place:
## Deployment steps
-1. **Install**: Connect PORT 1 to a client computer via a crossover cable or USB Ethernet adapter. Connect at least one other device port for data, preferably 25 GbE, (from PORT 3 to PORT 6) to Internet via at least 1 GbE switch and SFP+ copper cables. Connect the provided power cords to the Power Supply Units and to separate power distribution outlets. Press the power button on the front panel to turn on the device.
+1. **Install**: Connect PORT 1 to a client computer via a crossover cable or USB Ethernet adapter. Connect at least one other device port for data, preferably 25 GbE (from PORT 3 to PORT 6), to the internet via SFP+ copper cables, or use PORT 2 with an RJ45 patch cable. Connect the provided power cords to the Power Supply Units and to separate power distribution outlets. Press the power button on the front panel to turn on the device.
See [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products) to get compatible network cables and switches.
databox-online Azure Stack Edge Gpu System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-system-requirements.md
Previously updated : 03/17/2021 Last updated : 04/07/2021
We recommend that you set your firewall rules for outbound traffic, based on Azu
| https://\*.azurecr.io | Personal and third-party container registries (optional) |
| https://\*.azure-devices.net | IoT Hub access (required) |
+### URL patterns for monitoring
+
+Add the following URL patterns for Azure Monitor if you're using the containerized version of the Log Analytics agent for Linux.
+
+| URL pattern | Port | Component or functionality |
+|-|-|-|
+| http://\*.ods.opinsights.azure.com | 443 | Data ingestion |
+| http://\*.oms.opinsights.azure.com | 443 | Operations Management Suite (OMS) onboarding |
+| http://\*.dc.services.visualstudio.com | 443 | Agent telemetry that uses Azure Public Cloud Application Insights |
+
+For more information, see [Network firewall requirements for monitoring container insights](../azure-monitor/containers/container-insights-onboard.md#network-firewall-requirements).
+ ### URL patterns for gateway for Azure Government [!INCLUDE [Azure Government URL patterns for firewall](../../includes/azure-stack-edge-gateway-gov-url-patterns-firewall.md)]
We recommend that you set your firewall rules for outbound traffic, based on Azu
| https://\*.azure-devices.us | IoT Hub access (required) |
| https://\*.azurecr.us | Personal and third-party container registries (optional) |
+### URL patterns for monitoring for Azure Government
+
+Add the following URL patterns for Azure Monitor if you're using the containerized version of the Log Analytics agent for Linux.
+
+| URL pattern | Port | Component or functionality |
+|-|-|-|
+| http://\*.ods.opinsights.azure.us | 443 | Data ingestion |
+| http://\*.oms.opinsights.azure.us | 443 | Operations Management Suite (OMS) onboarding |
+| http://\*.dc.services.visualstudio.com | 443 | Agent telemetry that uses Azure Public Cloud Application Insights |
++ ## Internet bandwidth

[!INCLUDE [Internet bandwidth](../../includes/azure-stack-edge-gateway-internet-bandwidth.md)]
ddos-protection Ddos Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-faq.md
Title: Azure DDoS Protection Standard frequent asked questions
description: Frequently asked questions about the Azure DDoS Protection Standard, which helps provide defense against DDoS attacks. documentationcenter: na-+ ms.devlang: na
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-overview.md
Title: Azure DDoS Protection Standard Overview
description: Learn how the Azure DDoS Protection Standard, when combined with application design best practices, provides defense against DDoS attacks. documentationcenter: na-+ ms.devlang: na
ddos-protection Ddos Protection Partner Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-partner-onboarding.md
Title: Partnering with Azure DDoS Protection Standard
description: "Understand partnering opportunities enabled by Azure DDoS Protection Standard." documentationcenter: na-+ mms.devlang: na Last updated 08/28/2020
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-reference-architectures.md
Title: Azure DDoS Protection reference architectures
description: Learn Azure DDoS protection reference architectures. documentationcenter: na-+ ms.devlang: na
ddos-protection Ddos Protection Standard Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-standard-features.md
Title: Azure DDoS Protection features
description: Learn Azure DDoS Protection features documentationcenter: na-+ ms.devlang: na
ddos-protection Ddos Rapid Response https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-rapid-response.md
Title: Azure DDoS Rapid Response
description: Learn how to engage DDoS experts during an active attack for specialized support. documentationcenter: na-+ ms.devlang: na
ddos-protection Ddos Response Strategy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-response-strategy.md
Title: Components of a DDoS response strategy
description: Learn what how to use Azure DDoS Protection Standard to respond to DDoS attacks. documentationcenter: na-+ ms.devlang: na
ddos-protection Fundamental Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/fundamental-best-practices.md
Title: Azure DDoS Protection fundamental best practices
description: Learn the best security practices using DDoS protection. documentationcenter: na-+ ms.devlang: na
ddos-protection Manage Ddos Protection Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection-cli.md
Title: Create and configure an Azure DDoS Protection plan using Azure CLI
description: Learn how to create a DDoS Protection Plan using Azure CLI documentationcenter: na-+ ms.devlang: na
ddos-protection Manage Ddos Protection Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection-powershell.md
Title: Create and configure an Azure DDoS Protection plan using Azure PowerShell
description: Learn how to create a DDoS Protection Plan using Azure PowerShell documentationcenter: na-+ ms.devlang: na
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/policy-reference.md
+
+ Title: Built-in policy definitions for Azure DDoS Protection Standard
+description: Lists Azure Policy built-in policy definitions for Azure DDoS Protection Standard. These built-in policy definitions provide common approaches to managing your Azure resources.
+
+documentationcenter: na
++
+ms.devlang: na
+ na
+ Last updated : 04/07/2021
+# Azure Policy built-in definitions for Azure DDoS Protection Standard
+
+This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
+definitions for Azure DDoS Protection Standard. For additional Azure Policy built-ins for other services, see
+[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Azure DDoS Protection Standard
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Virtual networks should be protected by Azure DDoS Protection Standard](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d)|Protect your virtual networks against volumetric and protocol attacks with Azure DDoS Protection Standard. For more information, visit [https://aka.ms/ddosprotectiondocs](https://aka.ms/ddosprotectiondocs).|Modify, Audit, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkDdosStandard_Audit.json)|
+|[Public IP addresses should have resource logs enabled for Azure DDoS Protection Standard](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F752154a7-1e0f-45c6-a880-ac75a7e4f648)|Enable resource logs for public IP addresses in diagnostic settings to stream to a Log Analytics workspace. Get detailed visibility into attack traffic and actions taken to mitigate DDoS attacks via notifications, reports and flow logs.|AuditIfNotExists, DeployIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/PublicIpDdosLogging_Audit.json)|
++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
ddos-protection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/telemetry.md
Title: View and configure DDoS protection telemetry for Azure DDoS Protection St
description: Learn how to view and configure DDoS protection telemetry for Azure DDoS Protection Standard. documentationcenter: na-+ ms.devlang: na
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/test-through-simulations.md
Title: Azure DDoS Protection simulation testing
description: Learn about how to test through simulations documentationcenter: na-+ ms.devlang: na
ddos-protection Types Of Attacks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/types-of-attacks.md
Title: Types of attacks Azure DDoS Protection Standard mitigates
description: Learn what types of attacks Azure DDoS Protection Standard protects against. documentationcenter: na-+ ms.devlang: na
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/alert-engine-messages.md
description: Review Defender for IoT Alert descriptions.
Previously updated : 03/29/2021 Last updated : 4/8/2021 # Alert types and descriptions
-This article describes all opf the alert types, that may be generated from the Defender for IoT engines. Alerts appear in the Alerts window, which allows you to manage the alert event.
+This article describes all of the alert types that may be generated from the Defender for IoT engines. Alerts appear in the Alerts window, which allows you to manage the alert event.
## Policy engine alerts
digital-twins How To Enable Managed Identities Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-enable-managed-identities-cli.md
Either of these creation methods will give the same configuration options and th
In this section, you'll learn how to enable a system-managed identity on an Azure Digital Twins instance that is currently being created.
-This is done by adding an `--assign-identity` parameter to the `az dt create` command that's used to create the instance. (For more information about this command, see its [reference documentation](/cli/azure/ext/azure-iot/dt#ext_azure_iot_az_dt_create) or the [general instructions for setting up an Azure Digital Twins instance](how-to-set-up-instance-cli.md#create-the-azure-digital-twins-instance)).
+This is done by adding an `--assign-identity` parameter to the `az dt create` command that's used to create the instance. (For more information about this command, see its [reference documentation](/cli/azure/dt#az_dt_create) or the [general instructions for setting up an Azure Digital Twins instance](how-to-set-up-instance-cli.md#create-the-azure-digital-twins-instance)).
To create an instance with a system managed identity, add the `--assign-identity` parameter like this:
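As a sketch, the basic form looks like this (the instance and resource group names are placeholders):

```azurecli-interactive
az dt create -n {instance_name} -g {resource_group} --assign-identity
```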
Here is an example that creates an instance with a system managed identity, and
```azurecli-interactive
az dt create -n {instance_name} -g {resource_group} --assign-identity --scopes "/subscriptions/<subscription ID>/resourceGroups/<resource_group>/providers/Microsoft.EventHub/namespaces/<Event_Hubs_namespace>/eventhubs/<event_hub_name>" --role MyCustomRole
```
-For more examples of role assignments with this command, see the [**az dt create** reference documentation](/cli/azure/ext/azure-iot/dt#ext_azure_iot_az_dt_create).
+For more examples of role assignments with this command, see the [**az dt create** reference documentation](/cli/azure/dt#az_dt_create).
Alternatively, you can also use the [**az role assignment**](/cli/azure/role/assignment) command group to create and manage roles. This can be used to support additional scenarios where you don't want to group role assignment with the create command.
After setting up a system-managed identity for your Azure Digital Twins instance
>[!NOTE]
> You cannot edit an endpoint that has already been created with key-based identity to change to identity-based authentication. You must choose the authentication type when the endpoint is first created.
-This is done by adding a `--auth-type` parameter to the `az dt endpoint create` command that's used to create the endpoint. (For more information about this command, see its [reference documentation](/cli/azure/ext/azure-iot/dt/endpoint/create) or the [general instructions for setting up an Azure Digital Twins endpoint](how-to-manage-routes-apis-cli.md#create-the-endpoint)).
+This is done by adding a `--auth-type` parameter to the `az dt endpoint create` command that's used to create the endpoint. (For more information about this command, see its [reference documentation](/cli/azure/dt/endpoint/create) or the [general instructions for setting up an Azure Digital Twins endpoint](how-to-manage-routes-apis-cli.md#create-the-endpoint)).
To create an endpoint that uses identity-based authentication, specify the `IdentityBased` authentication type with the `--auth-type` parameter. The example below illustrates this for an Event Hubs endpoint.
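As a sketch with placeholder names (the parameters reflect our reading of the `az dt endpoint create eventhub` reference, so verify them against your CLI version):

```azurecli-interactive
az dt endpoint create eventhub --endpoint-name {endpoint_name} --eventhub-resource-group {resource_group} --eventhub-namespace {event_hub_namespace} --eventhub {event_hub_name} --auth-type IdentityBased -n {instance_name}
```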
digital-twins How To Enable Private Link Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-enable-private-link-cli.md
For a full list of required and optional parameters, as well as more private end
### Manage private endpoint connections on the instance
-Once a private endpoint has been created for your Azure Digital Twins instance, you can use the [**az dt network private-endpoint connection**](/cli/azure/ext/azure-iot/dt/network/private-endpoint/connection) commands to continue managing private endpoint **connections** with respect to the instance. Operations include:
+Once a private endpoint has been created for your Azure Digital Twins instance, you can use the [**az dt network private-endpoint connection**](/cli/azure/dt/network/private-endpoint/connection) commands to continue managing private endpoint **connections** with respect to the instance. Operations include:
* Show a private endpoint connection
* Set the state of the private-endpoint connection
* Delete the private-endpoint connection
* List all the private-endpoint connections for an instance
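For instance, a sketch of the list operation (the names are placeholders, and the subcommand reflects our reading of the CLI reference linked below):

```azurecli-interactive
az dt network private-endpoint connection list -n {instance_name} -g {resource_group}
```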
-For more information and examples, see the [**az dt network private-endpoint** reference documentation](/cli/azure/ext/azure-iot/dt/network/private-endpoint).
+For more information and examples, see the [**az dt network private-endpoint** reference documentation](/cli/azure/dt/network/private-endpoint).
### Manage other Private Link information on an Azure Digital Twins instance
-You can get additional information about the Private Link status of your instance with the [**az dt network private-link**](/cli/azure/ext/azure-iot/dt/network/private-link) commands. Operations include:
+You can get additional information about the Private Link status of your instance with the [**az dt network private-link**](/cli/azure/dt/network/private-link) commands. Operations include:
* List private links associated with an Azure Digital Twins instance
* Show a private link associated with the instance
-For more information and examples, see the [**az dt network private-link** reference documentation](/cli/azure/ext/azure-iot/dt/network/private-link).
+For more information and examples, see the [**az dt network private-link** reference documentation](/cli/azure/dt/network/private-link).
## Disable / enable public network access flags
This article shows how to update the value of the network flag using either the
### Use the Azure CLI
-In the Azure CLI, you can disable or enable public network access by adding a `--public-network-access` parameter to the `az dt create` command. While this command can also be used to create a new instance, you can use it to edit the properties of an existing instance by providing it the name of an instance that already exists. (For more information about this command, see its [reference documentation](/cli/azure/ext/azure-iot/dt#ext_azure_iot_az_dt_create) or the [general instructions for setting up an Azure Digital Twins instance](how-to-set-up-instance-cli.md#create-the-azure-digital-twins-instance)).
+In the Azure CLI, you can disable or enable public network access by adding a `--public-network-access` parameter to the `az dt create` command. While this command can also be used to create a new instance, you can use it to edit the properties of an existing instance by providing it the name of an instance that already exists. (For more information about this command, see its [reference documentation](/cli/azure/dt#az_dt_create) or the [general instructions for setting up an Azure Digital Twins instance](how-to-set-up-instance-cli.md#create-the-azure-digital-twins-instance)).
To **disable** public network access for an Azure Digital Twins instance, use the `--public-network-access` parameter like this:
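A sketch of that form, with placeholder names:

```azurecli-interactive
az dt create -n {instance_name} -g {resource_group} --public-network-access Disabled
```

To re-enable access, the same parameter takes the value `Enabled`.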
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
Next, you will set up a Time Series Insights instance to receive the data from y
## Begin sending IoT data to Azure Digital Twins
-To begin sending data to Time Series Insights, you will need to start updating the digital twin properties in Azure Digital Twins with changing data values. Use the [az dt twin update](/cli/azure/ext/azure-iot/dt/twin#ext-azure-iot-az-dt-twin-update) command.
+To begin sending data to Time Series Insights, you will need to start updating the digital twin properties in Azure Digital Twins with changing data values. Use the [az dt twin update](/cli/azure/dt/twin#az_dt_twin_update) command.
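For example, a sketch of a repeated property update that produces changing values (the twin ID, property path, and value here are hypothetical):

```azurecli
az dt twin update -n <instance-name> --twin-id thermostat67 --json-patch '{"op":"replace", "path":"/Temperature", "value": 55.5}'
```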
If you are using the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md)) to assist with environment setup, you can begin sending simulated IoT data by running the *DeviceSimulator* project from the sample. The instructions are in the [*Configure and run the simulation*](tutorial-end-to-end.md#configure-and-run-the-simulation) section of the tutorial.
digital-twins How To Manage Routes Apis Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-apis-cli.md
This section explains how to create these endpoints using the Azure CLI. You can
### Create the endpoint
-Once you have created the endpoint resources, you can use them for an Azure Digital Twins endpoint. The following examples show how to create endpoints using the [az dt endpoint create](/cli/azure/ext/azure-iot/dt/endpoint/create) command for the [Azure Digital Twins CLI](how-to-use-cli.md). Replace the placeholders in the commands with the details of your own resources.
+Once you have created the endpoint resources, you can use them for an Azure Digital Twins endpoint. The following examples show how to create endpoints using the [az dt endpoint create](/cli/azure/dt/endpoint/create) command for the [Azure Digital Twins CLI](how-to-use-cli.md). Replace the placeholders in the commands with the details of your own resources.
To create an Event Grid endpoint:
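As a sketch, creating an Event Grid endpoint might look like this (resource names are placeholders; verify the parameters against the reference documentation):

```azurecli
az dt endpoint create eventgrid --dt-name <instance-name> --resource-group <resource-group> --endpoint-name <endpoint-name> --eventgrid-resource-group <eventgrid-resource-group> --eventgrid-topic <eventgrid-topic-name>
```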
Follow the steps below to set up these storage resources in your Azure account,
#### Create the dead-letter endpoint
-To create an endpoint that has dead-lettering enabled, add the following dead letter parameter to the [az dt endpoint create](/cli/azure/ext/azure-iot/dt/endpoint/create) command for the [Azure Digital Twins CLI](how-to-use-cli.md).
+To create an endpoint that has dead-lettering enabled, add the following dead letter parameter to the [az dt endpoint create](/cli/azure/dt/endpoint/create) command for the [Azure Digital Twins CLI](how-to-use-cli.md).
The value for the parameter is the **dead letter SAS URI** made up of the storage account name, container name, and SAS token that you gathered in the [previous section](#set-up-storage-resources). This parameter creates the endpoint with key-based authentication.
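For example, assuming an Event Grid endpoint, the command with dead-lettering might look like the following sketch (the `--deadletter-sas-uri` value components are placeholders):

```azurecli
az dt endpoint create eventgrid --dt-name <instance-name> --endpoint-name <endpoint-name> --eventgrid-resource-group <eventgrid-resource-group> --eventgrid-topic <eventgrid-topic-name> --deadletter-sas-uri 'https://<storage-account-name>.blob.core.windows.net/<container-name>?<SAS-token>'
```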
If there is a route name and a different filter is added, messages will be filte
One route should allow multiple notifications and event types to be selected.
-Event routes can be created with the Azure Digital Twins [**EventRoutes** data plane APIs](/rest/api/digital-twins/dataplane/eventroutes) or [**az dt route** CLI commands](/cli/azure/ext/azure-iot/dt/route). The rest of this section walks through the creation process.
+Event routes can be created with the Azure Digital Twins [**EventRoutes** data plane APIs](/rest/api/digital-twins/dataplane/eventroutes) or [**az dt route** CLI commands](/cli/azure/dt/route). The rest of this section walks through the creation process.
### Create routes with the APIs and C# SDK
The following sample method shows how to create, list, and delete an event route
### Create routes with the CLI
-Routes can also be managed using the [az dt route](/cli/azure/ext/azure-iot/dt/route) commands for the Azure Digital Twins CLI.
+Routes can also be managed using the [az dt route](/cli/azure/dt/route) commands for the Azure Digital Twins CLI.
For more information about using the CLI and what commands are available, see [*How-to: Use the Azure Digital Twins CLI*](how-to-use-cli.md).
digital-twins How To Provision Using Device Provisioning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
Once you have gone through this flow, everything is set to retire devices end-to
To trigger the process of retirement, you need to manually delete the device from IoT Hub.
-You can do this with an [Azure CLI command](/cli/azure/ext/azure-iot/iot/hub/module-identity#ext_azure_iot_az_iot_hub_module_identity_delete) or in the Azure portal.
+You can do this with an [Azure CLI command](/cli/azure/iot/hub/module-identity#az_iot_hub_module_identity_delete) or in the Azure portal.
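As a sketch, deleting a device identity from the CLI might look like this (hub and device names are placeholders):

```azurecli
az iot hub device-identity delete --hub-name <iot-hub-name> --device-id <device-id>
```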
Follow the steps below to delete the device in the Azure portal:

1. Navigate to your IoT hub, and choose **IoT devices** in the menu options on the left.
digital-twins How To Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-query-graph.md
Here is a query example specifying a value for all three parameters:
When querying based on digital twins' **relationships**, the Azure Digital Twins query language has a special syntax.
-Relationships are pulled into the query scope in the `FROM` clause. An important distinction from "classical" SQL-type languages is that each expression in this `FROM` clause is not a table; rather, the `FROM` clause expresses a cross-entity relationship traversal, and is written with an Azure Digital Twins version of `JOIN`.
+Relationships are pulled into the query scope in the `FROM` clause. Unlike in "classical" SQL-type languages, each expression in this `FROM` clause is not a table; rather, the `FROM` clause expresses a cross-entity relationship traversal. To traverse across relationships, Azure Digital Twins uses a custom version of `JOIN`.
-Recall that with the Azure Digital Twins [model](concepts-models.md) capabilities, relationships do not exist independently of twins. This means the Azure Digital Twins query language's `JOIN` is a little different from the general SQL `JOIN`, as relationships here can't be queried independently and must be tied to a twin.
-To incorporate this difference, the keyword `RELATED` is used in the `JOIN` clause to reference a twin's set of relationships.
+Recall that with the Azure Digital Twins [model](concepts-models.md) capabilities, relationships do not exist independently of twins. This means that relationships here can't be queried independently and must be tied to a twin.
+To handle this, the keyword `RELATED` is used in the `JOIN` clause to pull in the set of a certain type of relationship coming from the twin collection. The query must then filter in the `WHERE` clause which specific twin(s) to use in the relationship query (using the twins' `$dtId` values).
-The following section gives several examples of what this looks like.
+The following sections give examples of what this looks like.
-> [!TIP]
-> Conceptually, this feature mimics the document-centric functionality of CosmosDB, where `JOIN` can be performed on child objects within a document. CosmosDB uses the `IN` keyword to indicate the `JOIN` is intended to iterate over array elements within the current context document.
-
-### Relationship-based query examples
-
-To get a dataset that includes relationships, use a single `FROM` statement followed by N `JOIN` statements, where the `JOIN` statements express relationships on the result of a previous `FROM` or `JOIN` statement.
+### Basic relationship query
Here is a sample relationship-based query. This code snippet selects all digital twins with an *ID* property of 'ABC', and all digital twins related to these digital twins via a *contains* relationship.
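A sketch of what that query might look like, assuming the twins are matched on their `$dtId`:

```sql
SELECT T, CT
FROM DIGITALTWINS T
JOIN CT RELATED T.contains
WHERE T.$dtId = 'ABC'
```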
Here is a sample relationship-based query. This code snippet selects all digital
> [!NOTE]
> The developer does not need to correlate this `JOIN` with a key value in the `WHERE` clause (or specify a key value inline with the `JOIN` definition). This correlation is computed automatically by the system, as the relationship properties themselves identify the target entity.
+### Query by the source or target of a relationship
+
+You can use the relationship query structure to identify a digital twin that's the source or the target of a relationship.
+
+For instance, you can start with a source twin and follow its relationships to find the target twins of the relationships. Here is an example of a query that finds the target twins of the *feeds* relationships coming from the twin *source-twin*.
++
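A sketch of that query, with aliases for the source and target twins:

```sql
SELECT target
FROM DIGITALTWINS source
JOIN target RELATED source.feeds
WHERE source.$dtId = 'source-twin'
```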
+You can also start with the target of the relationship and trace the relationship back to find the source twin. Here's an example of a query that finds the source twin of a *feeds* relationship to the twin *target-twin*.
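And a sketch of the reverse direction, filtering on the target twin's `$dtId` instead:

```sql
SELECT source
FROM DIGITALTWINS source
JOIN target RELATED source.feeds
WHERE target.$dtId = 'target-twin'
```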
++
### Query the properties of a relationship

Similarly to the way digital twins have properties described via DTDL, relationships can also have properties. You can query twins **based on the properties of their relationships**.
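For instance, a hedged sketch that filters on a *reportedCondition* property of a *servicedBy* relationship (the twin ID is a placeholder):

```sql
SELECT T, SBT
FROM DIGITALTWINS T
JOIN SBT RELATED T.servicedBy R
WHERE T.$dtId = 'FloorID'
AND R.reportedCondition = 'clean'
```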
In the example above, note how *reportedCondition* is a property of the *service
### Query with multiple JOINs
-Up to five `JOIN`s are supported in a single query. This allows you to traverse multiple levels of relationships at once.
+Up to five `JOIN`s are supported in a single query. This allows you to traverse multiple levels of relationships at once.
+
+To query on multiple levels of relationships, use a single `FROM` statement followed by N `JOIN` statements, where the `JOIN` statements express relationships on the result of a previous `FROM` or `JOIN` statement.
Here is an example of a multi-join query, which gets all the light bulbs contained in the light panels in rooms 1 and 2.
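A hedged sketch of such a query; the model IDs and room twin IDs are assumptions for illustration:

```sql
SELECT LightBulb
FROM DIGITALTWINS Room
JOIN LightPanel RELATED Room.contains
JOIN LightBulb RELATED LightPanel.contains
WHERE IS_OF_MODEL(LightPanel, 'dtmi:example:LightPanel;1')
AND IS_OF_MODEL(LightBulb, 'dtmi:example:LightBulb;1')
AND Room.$dtId IN ['room1', 'room2']
```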
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-cli.md
You now have an Azure Digital Twins instance ready to go, and have assigned perm
## Next steps

Test out individual REST API calls on your instance using the Azure Digital Twins CLI commands:
-* [az dt reference](/cli/azure/ext/azure-iot/dt)
+* [az dt reference](/cli/azure/dt)
* [*How-to: Use the Azure Digital Twins CLI*](how-to-use-cli.md)

Or, see how to connect a client application to your instance with authentication code:
digital-twins How To Set Up Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-portal.md
You now have an Azure Digital Twins instance ready to go, and have assigned perm
## Next steps

Test out individual REST API calls on your instance using the Azure Digital Twins CLI commands:
-* [az dt reference](/cli/azure/ext/azure-iot/dt)
+* [az dt reference](/cli/azure/dt)
* [*How-to: Use the Azure Digital Twins CLI*](how-to-use-cli.md)

Or, see how to connect a client application to your instance with authentication code:
digital-twins How To Set Up Instance Scripted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-scripted.md
If verification was unsuccessful, you can also redo your own role assignment usi
## Next steps

Test out individual REST API calls on your instance using the Azure Digital Twins CLI commands:
-* [az dt reference](/cli/azure/ext/azure-iot/dt)
+* [az dt reference](/cli/azure/dt)
* [*How-to: Use the Azure Digital Twins CLI*](how-to-use-cli.md)

Or, see how to connect a client application to your instance with authentication code:
digital-twins How To Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-cli.md
In addition to managing your Azure Digital Twins instance in the Azure portal, A
* Managing [routes](concepts-route-events.md)
* Configuring [security](concepts-security.md) via Azure role-based access control (Azure RBAC)
-The command set is called **az dt**, and is part of the [Azure IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension). You can view the full list of commands and their usage as part of the reference documentation for the `az iot` command set: [*az dt* command reference](/cli/azure/ext/azure-iot/dt).
+The command set is called **az dt**, and is part of the [Azure IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension). You can view the full list of commands and their usage as part of the reference documentation for the `az iot` command set: [*az dt* command reference](/cli/azure/dt).
## Uses (deploy and validate)
az extension add --upgrade -n azure-iot
## Next steps

Explore the CLI and its full set of commands through the reference docs:
-* [*az dt* command reference](/cli/azure/ext/azure-iot/dt)
+* [*az dt* command reference](/cli/azure/dt)
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-cli.md
In this tutorial, you'll build a graph in Azure Digital Twins using models, twins, and relationships. The tool for this tutorial is the [Azure Digital Twins command set for the **Azure CLI**](how-to-use-cli.md).
-You can use the CLI commands to perform essential Azure Digital Twins actions such as uploading models, creating and modifying twins, and creating relationships. You can also look at the [reference documentation for *az dt* command set](/cli/azure/ext/azure-iot/dt?preserve-view=true&view=azure-cli-latest) to see the full set of CLI commands.
+You can use the CLI commands to perform essential Azure Digital Twins actions such as uploading models, creating and modifying twins, and creating relationships. You can also look at the [reference documentation for *az dt* command set](/cli/azure/dt) to see the full set of CLI commands.
In this tutorial, you will...

> [!div class="checklist"]
After designing models, you need to upload them to your Azure Digital Twins inst
Navigate to the *Room.json* file on your machine and select "Open." Then, repeat this step for *Floor.json*.
-1. Next, use the [**az dt model create**](/cli/azure/ext/azure-iot/dt/model?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_model_create) command as shown below to upload your updated *Room* model to your Azure Digital Twins instance. The second command uploads another model, *Floor*, which you'll also use in the next section to create different types of twins.
+1. Next, use the [**az dt model create**](/cli/azure/dt/model#az_dt_model_create) command as shown below to upload your updated *Room* model to your Azure Digital Twins instance. The second command uploads another model, *Floor*, which you'll also use in the next section to create different types of twins.
```azurecli-interactive
az dt model create -n <ADT_instance_name> --models Room.json
az dt model create -n <ADT_instance_name> --models Floor.json
```
After designing models, you need to upload them to your Azure Digital Twins inst
The output from each command will show information about the successfully uploaded model.

>[!TIP]
- >You can also upload all models within a directory at the same time, by using the `--from-directory` option for the model create command. For more information, see [Optional parameters for *az dt model create*](/cli/azure/ext/azure-iot/dt/model?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_model_create-optional-parameters).
+ >You can also upload all models within a directory at the same time, by using the `--from-directory` option for the model create command. For more information, see [Optional parameters for *az dt model create*](/cli/azure/dt/model#az_dt_model_create-optional-parameters).
-1. Verify the models were created with the [**az dt model list**](/cli/azure/ext/azure-iot/dt/model?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_model_list) command as shown below. This will print a list of all models that have been uploaded to the Azure Digital Twins instance with their full information.
+1. Verify the models were created with the [**az dt model list**](/cli/azure/dt/model#az_dt_model_list) command as shown below. This will print a list of all models that have been uploaded to the Azure Digital Twins instance with their full information.
```azurecli-interactive
az dt model list -n <ADT_instance_name> --definition
```
As models cannot be overwritten, this will now return an error code of `ModelIdA
Now that some models have been uploaded to your Azure Digital Twins instance, you can create [**digital twins**](concepts-twins-graph.md) based on the model definitions. Digital twins represent the entities within your business environment: things like sensors on a farm, rooms in a building, or lights in a car.
-To create a digital twin, you use the [**az dt twin create**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_create) command. You must reference the model that the twin is based on, and can optionally define initial values for any properties in the model. You do not have to pass any relationship information at this stage.
+To create a digital twin, you use the [**az dt twin create**](/cli/azure/dt/twin#az_dt_twin_create) command. You must reference the model that the twin is based on, and can optionally define initial values for any properties in the model. You do not have to pass any relationship information at this stage.
1. Run this code in the Cloud Shell to create several twins, based on the *Room* model you updated earlier and another model, *Floor*. Recall that *Room* has three properties, so you can provide arguments with the initial values for these. (Initializing property values is optional in general, but they're needed for this tutorial.)
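   A sketch of one such command, assuming *Room*'s three properties are *RoomName*, *Temperature*, and *HumidityLevel* (the model ID and values are placeholders):

   ```azurecli-interactive
   az dt twin create -n <ADT_instance_name> --dtmi "dtmi:example:Room;2" --twin-id room0 --properties '{"RoomName": "Room0", "Temperature": 70, "HumidityLevel": 30}'
   ```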
To create a digital twin, you use the [**az dt twin create**](/cli/azure/ext/azu
The output from each command will show information about the successfully created twin (including properties for the room twins that were initialized with them).
-1. You can verify that the twins were created with the [**az dt twin query**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_query) command as shown below. The query shown finds all the digital twins in your Azure Digital Twins instance.
+1. You can verify that the twins were created with the [**az dt twin query**](/cli/azure/dt/twin#az_dt_twin_query) command as shown below. The query shown finds all the digital twins in your Azure Digital Twins instance.
```azurecli-interactive
az dt twin query -n <ADT_instance_name> -q "SELECT * FROM DIGITALTWINS"
```
To create a digital twin, you use the [**az dt twin create**](/cli/azure/ext/azu
You can also modify the properties of a twin you've created.
-1. Run this [**az dt twin update**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_update) command to change *room0*'s RoomName from *Room0* to *PresidentialSuite*:
+1. Run this [**az dt twin update**](/cli/azure/dt/twin#az_dt_twin_update) command to change *room0*'s RoomName from *Room0* to *PresidentialSuite*:
```azurecli-interactive
az dt twin update -n <ADT_instance_name> --twin-id room0 --json-patch '{"op":"add", "path":"/RoomName", "value": "PresidentialSuite"}'
```
You can also modify the properties of a twin you've created.
:::image type="content" source="media/tutorial-command-line/cli/output-update-twin.png" alt-text="Screenshot of Cloud Shell showing result of the update command, which includes a RoomName of PresidentialSuite." lightbox="media/tutorial-command-line/cli/output-update-twin.png":::
-1. You can verify the update succeeded by running the [**az dt twin show**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_show) command to see *room0*'s information:
+1. You can verify the update succeeded by running the [**az dt twin show**](/cli/azure/dt/twin#az_dt_twin_show) command to see *room0*'s information:
```azurecli-interactive
az dt twin show -n <ADT_instance_name> --twin-id room0
```
Next, you can create some **relationships** between these twins, to connect them
The types of relationships that you can create from one twin to another are defined within the [models](#model-a-physical-environment-with-dtdl) that you uploaded earlier. The [model definition for *Floor*](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) specifies that floors can have a type of relationship called *contains*. This makes it possible to create a *contains*-type relationship from each *Floor* twin to the corresponding room that it contains.
-To add a relationship, use the [**az dt twin relationship create**](/cli/azure/ext/azure-iot/dt/twin/relationship?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_relationship_create) command. Specify the twin that the relationship is coming from, the type of relationship, and the twin that the relationship is connecting to. Lastly, give the relationship a unique ID. If a relationship was defined to have properties, you can initialize the relationship properties in this command as well.
+To add a relationship, use the [**az dt twin relationship create**](/cli/azure/dt/twin/relationship#az_dt_twin_relationship_create) command. Specify the twin that the relationship is coming from, the type of relationship, and the twin that the relationship is connecting to. Lastly, give the relationship a unique ID. If a relationship was defined to have properties, you can initialize the relationship properties in this command as well.
1. Run the following code to add a *contains*-type relationship from each of the *Floor* twins you created earlier to the corresponding *Room* twin. The relationships are named *relationship0* and *relationship1*.
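   A sketch of one of these commands, using the twin and relationship names given above:

   ```azurecli-interactive
   az dt twin relationship create -n <ADT_instance_name> --relationship-id relationship0 --relationship contains --twin-id floor0 --target room0
   ```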
The twins and relationships you have set up in this tutorial form the following
## Query the twin graph to answer environment questions
-A main feature of Azure Digital Twins is the ability to [query](concepts-query-language.md) your twin graph easily and efficiently to answer questions about your environment. In the Azure CLI, this is done with the [**az dt twin query**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_query) command.
+A main feature of Azure Digital Twins is the ability to [query](concepts-query-language.md) your twin graph easily and efficiently to answer questions about your environment. In the Azure CLI, this is done with the [**az dt twin query**](/cli/azure/dt/twin#az_dt_twin_query) command.
Run the following queries in the Cloud Shell to answer some questions about the sample environment.
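For instance, one such query might return all twins above a temperature threshold (the property name is an assumption based on the *Room* model above):

```azurecli-interactive
az dt twin query -n <ADT_instance_name> -q "SELECT * FROM DIGITALTWINS T WHERE T.Temperature > 75"
```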
After completing this tutorial, you can choose which resources you'd like to rem
* **If you plan to continue to the next tutorial**, you can keep the resources you set up here and reuse the Azure Digital Twins instance without clearing anything in between.
-* **If you'd like to continue using the Azure Digital Twins instance, but clear out all of its models, twins, and relationships**, you can use the [**az dt twin relationship delete**](/cli/azure/ext/azure-iot/dt/twin/relationship?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_relationship_delete), [**az dt twin delete**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_delete), and [**az dt model delete**](/cli/azure/ext/azure-iot/dt/model?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_model_delete) commands to clear the relationships, twins, and models in your instance, respectively.
+* **If you'd like to continue using the Azure Digital Twins instance, but clear out all of its models, twins, and relationships**, you can use the [**az dt twin relationship delete**](/cli/azure/dt/twin/relationship#az_dt_twin_relationship_delete), [**az dt twin delete**](/cli/azure/dt/twin#az_dt_twin_delete), and [**az dt model delete**](/cli/azure/dt/model#az_dt_model_delete) commands to clear the relationships, twins, and models in your instance, respectively.
[!INCLUDE [digital-twins-cleanup-basic.md](../../includes/digital-twins-cleanup-basic.md)]
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-end-to-end.md
After completing this tutorial, you can choose which resources you'd like to rem
[!INCLUDE [digital-twins-cleanup-basic.md](../../includes/digital-twins-cleanup-basic.md)]
-* **If you'd like to continue using the Azure Digital Twins instance you set up in this article, but clear out some or all of its models, twins, and relationships**, you can use the [az dt](/cli/azure/ext/azure-iot/dt) CLI commands in an [Azure Cloud Shell](https://shell.azure.com) window to delete the elements you'd like to remove.
+* **If you'd like to continue using the Azure Digital Twins instance you set up in this article, but clear out some or all of its models, twins, and relationships**, you can use the [az dt](/cli/azure/dt) CLI commands in an [Azure Cloud Shell](https://shell.azure.com) window to delete the elements you'd like to remove.
This option will not remove any of the other Azure resources created in this tutorial (IoT Hub, Azure Functions app, etc.). You can delete these individually using the [dt commands](/cli/azure/reference-index) appropriate for each resource type.
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-erdirect.md
Create a circuit on the ExpressRoute Direct resource.
AllowClassicOperations : False
GatewayManagerEtag
```
+## Delete the resource
+Prior to deleting the ExpressRoute Direct resource, you first need to delete any ExpressRoute circuits created on the ExpressRoute Direct port pair.
+You can delete the ExpressRoute Direct resource by running the following command:
+ ```powershell
+ Remove-AzExpressRoutePort -Name $Name -ResourceGroupName $ResourceGroupName
+ ```
## Next steps

For more information about ExpressRoute Direct, see the [Overview](expressroute-erdirect-about.md).
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-features.md
Previously updated : 03/30/2021 Last updated : 04/07/2021
Azure Firewall Premium Preview has the following known issues:
|ESNI support for FQDN resolution in HTTPS|Encrypted SNI isn't supported in HTTPS handshake.|Today only Firefox supports ESNI through custom configuration. Suggested workaround is to disable this feature.|
|Client Certificates (TLS)|Client certificates are used to build a mutual identity trust between the client and the server. Client certificates are used during a TLS negotiation. Azure firewall renegotiates a connection with the server and has no access to the private key of the client certificates.|None|
|QUIC/HTTP3|QUIC is the new major version of HTTP. It's a UDP-based protocol over 80 (PLAIN) and 443 (SSL). FQDN/URL/TLS inspection won't be supported.|Configure passing UDP 80/443 as network rules.|
-|Secure Hub and forced tunneling not supported in Premium|Currently the Firewall Premium SKU isn't supported in Secure Hub deployments and forced tunnel configurations.|Fix scheduled for GA.|
|Untrusted customer signed certificates|Customer signed certificates are not trusted by the firewall once received from an intranet-based web server.|Fix scheduled for GA.|
|Wrong source and destination IP addresses in Alerts for IDPS with TLS inspection.|When you enable TLS inspection and IDPS issues a new alert, the displayed source/destination IP address is wrong (the internal IP address is displayed instead of the original IP address).|Fix scheduled for GA.|
|Wrong source IP address in Alerts with IDPS for HTTP (without TLS inspection).|When plain text HTTP traffic is in use, and IDPS issues a new alert, and the destination is a public IP address, the displayed source IP address is wrong (the internal IP address is displayed instead of the original IP address).|Fix scheduled for GA.|
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/tutorial-firewall-deploy-portal.md
Previously updated : 02/19/2021 Last updated : 04/08/2021 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
Now create the workload virtual machine, and place it in the **Workload-SN** sub
|Resource group |**Test-FW-RG**|
|Virtual machine name |**Srv-Work**|
|Region |Same as previous|
- |Image|Windows Server 2019 Datacenter|
+ |Image|Windows Server 2016 Datacenter|
|Administrator user name |Type a user name|
|Password |Type a password|
hdinsight Connect Install Beeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/connect-install-beeline.md
description: Learn how to connect to the Apache Beeline client to run Hive queri
Previously updated : 05/27/2020 Last updated : 04/07/2021 # Connect to Apache Beeline on HDInsight or install it locally
Apache Spark provides its own implementation of HiveServer2, which is sometimes
#### Through public or private endpoints
-The connection string used is slightly different. Instead of containing `httpPath=/hive2` it uses `httpPath/sparkhive2`. Replace `clustername` with the name of your HDInsight cluster. Replace `admin` with the cluster login account for your cluster. For ESP clusters, use the full UPN (for example, user@domain.com). Replace `password` with the password for the cluster login account.
+The connection string used is slightly different. Instead of containing `httpPath=/hive2` it uses `httpPath/sparkhive2`. Replace `clustername` with the name of your HDInsight cluster. Replace `admin` with the cluster login account for your cluster. Replace `password` with the password for the cluster login account.
+> [!NOTE]
+> For ESP clusters, replace `admin` with full UPN (for example, user@domain.com).
```bash
beeline -u 'jdbc:hive2://clustername.azurehdinsight.net:443/;ssl=true;transportMode=http;httpPath=/sparkhive2' -n admin -p 'password'
```
Although Beeline is included on the head nodes, you may want to install it local
## Next steps

* For examples using the Beeline client with Apache Hive, see [Use Apache Beeline with Apache Hive](apache-hadoop-use-hive-beeline.md)
-* For more general information on Hive in HDInsight, see [Use Apache Hive with Apache Hadoop on HDInsight](hdinsight-use-hive.md)
+* For more general information on Hive in HDInsight, see [Use Apache Hive with Apache Hadoop on HDInsight](hdinsight-use-hive.md)
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/about-iot-dps.md
-# Provisioning devices with Azure IoT Hub Device Provisioning Service
+# What is Azure IoT Hub Device Provisioning Service?
Microsoft Azure provides a rich set of integrated public cloud services for all your IoT solution needs. The IoT Hub Device Provisioning Service (DPS) is a helper service for IoT Hub that enables zero-touch, just-in-time provisioning to the right IoT hub without requiring human intervention. DPS enables the provisioning of millions of devices in a secure and scalable manner.

## When to use Device Provisioning Service
iot-dps How To Legacy Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-legacy-device-symm-key.md
This article is oriented toward a Windows-based workstation. However, you can pe
> [!NOTE]
> The sample used in this article is written in C. There is also a [C# device provisioning symmetric key sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/provisioning/Samples/device/SymmetricKeySample) available. To use this sample, download or clone the [azure-iot-samples-csharp](https://github.com/Azure-Samples/azure-iot-samples-csharp) repository and follow the in-line instructions in the sample code. You can follow the instructions in this article to create a symmetric key enrollment group using the portal and to find the ID Scope and enrollment group primary and secondary keys needed to run the sample. You can also create individual enrollments using the sample.
-## Overview
+## Prerequisites
-A unique registration ID will be defined for each device based on information that identifies that device. For example, the MAC address or a serial number.
+* Completion of the [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md) quickstart.
-An enrollment group that uses [symmetric key attestation](concepts-symmetric-key-attestation.md) will be created with the Device Provisioning Service. The enrollment group will include a group master key. That master key will be used to hash each unique registration ID to produce a unique device key for each device. The device will use that derived device key with its unique registration ID to attest with the Device Provisioning Service and be assigned to an IoT hub.
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
-The device code demonstrated in this article will follow the same pattern as the [Quickstart: Provision a simulated device with symmetric keys](quick-create-simulated-device-symm-key.md). The code will simulate a device using a sample from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The simulated device will attest with an enrollment group instead of an individual enrollment as demonstrated in the quickstart.
+* [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported.
+* Latest version of [Git](https://git-scm.com/download/) installed.
+## Overview
-## Prerequisites
+A unique registration ID will be defined for each device based on information that identifies that device. For example, the MAC address or a serial number.
-* Completion of the [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md) quickstart.
+An enrollment group that uses [symmetric key attestation](concepts-symmetric-key-attestation.md) will be created with the Device Provisioning Service. The enrollment group will include a group master key. That master key will be used to hash each unique registration ID to produce a unique device key for each device. The device will use that derived device key with its unique registration ID to attest with the Device Provisioning Service and be assigned to an IoT hub.
-The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
+The device code demonstrated in this article will follow the same pattern as the [Quickstart: Provision a simulated device with symmetric keys](quick-create-simulated-device-symm-key.md). The code will simulate a device using a sample from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The simulated device will attest with an enrollment group instead of an individual enrollment as demonstrated in the quickstart.
-* [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported.
-* Latest version of [Git](https://git-scm.com/download/) installed.
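To make the hashing described in the overview concrete, here is a sketch (not the article's own sample) that derives a device key with openssl, assuming a base64-encoded group master key and a registration ID:

```bash
# Placeholders: the enrollment group primary key (base64) and a registration ID.
KEY=<enrollment-group-primary-key>
REG_ID=<device-registration-id>

# Decode the base64 group key, compute the HMAC-SHA256 of the registration ID
# with it, and base64-encode the result to produce the derived device key.
keybytes=$(echo "$KEY" | base64 --decode | xxd -p -u -c 1000)
echo -n "$REG_ID" | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64
```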
## Prepare an Azure IoT C SDK development environment
Be aware that this leaves the derived device key included as part of the image f
## Next steps
-* To learn more Reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
-* [Quickstart: Provision a simulated device with symmetric keys](quick-create-simulated-device-symm-key.md)
-* To learn more Deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md)
+* To learn more about Reprovisioning, see
+
+> [!div class="nextstepaction"]
+> [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
+
+> [!div class="nextstepaction"]
+> [Quickstart: Provision a simulated device with symmetric keys](quick-create-simulated-device-symm-key.md)
+
+* To learn more about Deprovisioning, see
+
+> [!div class="nextstepaction"]
+> [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md)
iot-dps How To Provision Multitenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-provision-multitenant.md
It is common to combine these two scenarios. For example, a multitenant IoT solu
This article uses a simulated device sample from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) to demonstrate how to provision devices in a multitenant scenario across regions. You will perform the following steps in this article:
-* Use the Azure CLI to create two regional IoT hubs (**West US** and **East US**)
-* Create a multitenant enrollment
-* Use the Azure CLI to create two regional Linux VMs to act as devices in the same regions (**West US** and **East US**)
-* Set up the development environment for the Azure IoT C SDK on both Linux VMs
-* Simulate the devices to see that they are provisioned for the same tenant in the closest region.
+> [!div class="checklist"]
+> * Use the Azure CLI to create two regional IoT hubs (**West US** and **East US**)
+> * Create a multitenant enrollment
+> * Use the Azure CLI to create two regional Linux VMs to act as devices in the same regions (**West US** and **East US**)
+> * Set up the development environment for the Azure IoT C SDK on both Linux VMs
+> * Simulate the devices to see that they are provisioned for the same tenant in the closest region.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
To delete the resource group by name:
## Next steps
+* To learn more about reprovisioning, see
+ > [!div class="nextstepaction"]
-> To learn more about reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
+> [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
+* To learn more about deprovisioning, see
> [!div class="nextstepaction"]
-> To learn more about deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md)
+> [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md)
iot-dps Tutorial Custom Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/tutorial-custom-allocation-policies.md
To delete the resource group by name:
## Next steps
-* For a more in-depth custom allocation policy example, see [How to use custom allocation policies](how-to-use-custom-allocation-policies.md).
-* To learn more Reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md).
-* To learn more Deprovisioning, see [How to deprovision devices that were previously autoprovisioned](how-to-unprovision-devices.md).
+For a more in-depth custom allocation policy example, see
+
+> [!div class="nextstepaction"]
+> [How to use custom allocation policies](how-to-use-custom-allocation-policies.md)
+
+* To learn more about reprovisioning, see
+
+> [!div class="nextstepaction"]
+> [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
+
+* To learn more about deprovisioning, see
+
+> [!div class="nextstepaction"]
+> [How to deprovision devices that were previously autoprovisioned](how-to-unprovision-devices.md)
iot-dps Tutorial Provision Device To Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/tutorial-provision-device-to-hub.md
In this tutorial, you learned how to:
> * Start the device
> * Verify the device is registered
-Advance to the next tutorial to learn how to provision multiple devices across load-balanced hubs.
+Advance to the next tutorial to learn how to provision multiple devices across load-balanced hubs
> [!div class="nextstepaction"] > [Provision devices across load-balanced IoT hubs](./tutorial-provision-multiple-hubs.md)
iot-dps Tutorial Set Up Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/tutorial-set-up-cloud.md
In this tutorial, you learned how to:
> * Link the IoT hub to the Device Provisioning Service
> * Set the allocation policy on the Device Provisioning Service
-Advance to the next tutorial to learn how to set up your device for provisioning.
+Advance to the next tutorial to learn how to set up your device for provisioning
> [!div class="nextstepaction"] > [Set up device for provisioning](tutorial-set-up-device.md)
lighthouse Publish Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/publish-managed-services-offers.md
In this article, you'll learn how to publish a public or private Managed Service
## Publishing requirements
-You need to have a valid [account in Partner Center](../../marketplace/partner-center-portal/create-account.md) to create and publish offers. If you don't have an account already, the [sign-up process](https://aka.ms/joinmarketplace) will lead you through the steps of creating an account in Partner Center and enrolling in the Commercial Marketplace program.
+You need to have a valid [account in Partner Center](../../marketplace/create-account.md) to create and publish offers. If you don't have an account already, the [sign-up process](https://aka.ms/joinmarketplace) will lead you through the steps of creating an account in Partner Center and enrolling in the Commercial Marketplace program.
Per the [Managed Service offer certification requirements](/legal/marketplace/certification-policies#700-managed-services), you must have a [Silver or Gold Cloud Platform competency level](/partner-center/learn-about-competencies) or be an [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp) in order to publish a Managed Service offer. You must also [enter a lead destination that will create a record in your CRM system](../../marketplace/plan-managed-service-offer.md#customer-leads) each time a customer deploys your offer.
The following table can help determine whether to onboard customers by publishin
|**Consideration** |**Managed Service offer** |**ARM templates** |
||||
-|Requires [Partner Center account](../../marketplace/partner-center-portal/create-account.md) |Yes |No |
+|Requires [Partner Center account](../../marketplace/create-account.md) |Yes |No |
|Requires [Silver or Gold Cloud Platform competency level](/partner-center/learn-about-competencies) or [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp) |Yes |No |
|Available to new customers through Azure Marketplace |Yes |No |
|Can limit offer to specific customers |Yes (only with private offers, which can't be used with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program) |Yes |
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-managed-service-identity.md
To run the [Snapshot Blob operation](/rest/api/storageservices/snapshot-blob), t
|-|-|-|-|
| **Method** | Yes | `PUT`| The HTTP method that the Snapshot Blob operation uses |
| **URI** | Yes | `https://{storage-account-name}.blob.core.windows.net/{blob-container-name}/{folder-name-if-any}/{blob-file-name-with-extension}` | The resource ID for an Azure Blob Storage file in the Azure Global (public) environment, which uses this syntax |
-| **Headers** | For Azure Storage | `x-ms-blob-type` = `BlockBlob` <p>`x-ms-version` = `2019-02-02` <p>`x-ms-date` = `@{formatDateTime(utcNow(),'r'}` | The `x-ms-blob-type`, `x-ms-version`, and `x-ms-date` header values are required for Azure Storage operations. <p><p>**Important**: In outgoing HTTP trigger and action requests for Azure Storage, the header requires the `x-ms-version` property and the API version for the operation that you want to run. The `x-ms-date` must be the current date. Otherwise, your logic app fails with a `403 FORBIDDEN` error. To get the current date in the required format, you can use the expression in the example value. <p>For more information, see these topics: <p><p>- [Request headers - Snapshot Blob](/rest/api/storageservices/snapshot-blob#request) <br>- [Versioning for Azure Storage services](/rest/api/storageservices/versioning-for-the-azure-storage-services#specifying-service-versions-in-requests) |
+| **Headers** | For Azure Storage | `x-ms-blob-type` = `BlockBlob` <p>`x-ms-version` = `2019-02-02` <p>`x-ms-date` = `@{formatDateTime(utcNow(),'r')}` | The `x-ms-blob-type`, `x-ms-version`, and `x-ms-date` header values are required for Azure Storage operations. <p><p>**Important**: In outgoing HTTP trigger and action requests for Azure Storage, the header requires the `x-ms-version` property and the API version for the operation that you want to run. The `x-ms-date` must be the current date. Otherwise, your logic app fails with a `403 FORBIDDEN` error. To get the current date in the required format, you can use the expression in the example value. <p>For more information, see these topics: <p><p>- [Request headers - Snapshot Blob](/rest/api/storageservices/snapshot-blob#request) <br>- [Versioning for Azure Storage services](/rest/api/storageservices/versioning-for-the-azure-storage-services#specifying-service-versions-in-requests) |
| **Queries** | Only for the Snapshot Blob operation | `comp` = `snapshot` | The query parameter name and value for the operation. |
|||||
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
In this article, learn about Azure Machine Learning releases. For the full SDK
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us` +
+## 2021-04-05
+
+### Azure Machine Learning SDK for Python v1.26.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixed an issue where Naive models would be recommended in AutoMLStep runs and fail with lag or rolling window features. These models will not be recommended when target lags or target rolling window size are set.
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-core**
+ + Added HDFS mode in documentation.
+ + Added support to understand File Dataset partitions based on glob structure.
+ + Added support for update container registry associated with AzureML Workspace.
+ + Deprecated Environment attributes under the DockerSection - "enabled", "shared_volume" and "arguments" are a part of DockerConfiguration in RunConfiguration now.
+ + Updated Pipeline CLI clone documentation
+ + Updated portal URIs to include tenant for authentication
+ + Removed experiment name from run URIs to avoid redirects
+ + Updated experiment URI to use experiment ID.
+ + Bug fixes for attaching remote compute with AzureML CLI.
+ + Updated portal URIs to include tenant for authentication.
+ + Updated experiment URI to use experiment Id.
+ + **azureml-interpret**
+ + azureml-interpret updated to use interpret-community 0.17.0
+ + **azureml-opendatasets**
+ + Input start date and end date type validation and error indication if it's not datetime type.
+ + **azureml-parallel-run**
+ + [Experimental feature] Added a `partition_keys` parameter to ParallelRunConfig. If specified, the input dataset(s) will be partitioned into mini-batches by the specified keys. This requires all input datasets to be partitioned datasets.
+ + **azureml-pipeline-steps**
+ + Bugfix - supporting path_on_compute while passing dataset configuration as download.
+ + Deprecate RScriptStep in favor of using CommandStep for running R scripts in pipelines.
+ + Deprecate EstimatorStep in favor of using CommandStep for running ML training (including distributed training) in pipelines.
+ + **azureml-sdk**
+ + Update python_requires to < 3.9 for azureml-sdk
+ + **azureml-train-automl-client**
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-train-core**
+ + Deprecated DockerSection's 'enabled', 'shared_volume', and 'arguments' attributes in favor of using DockerConfiguration with ScriptRunConfig.
+ + Use Azure Open Datasets for MNIST dataset
+ + Hyperdrive error messages have been updated.
++
+## 2021-03-22
+
+### Azure Machine Learning SDK for Python v1.25.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-core**
+ + Starts to support updating container registry for workspace in SDK and CLI
+ + Deprecated DockerSection's 'enabled', 'shared_volume', and 'arguments' attributes in favor of using DockerConfiguration with ScriptRunConfig.
+ + Updated Pipeline CLI clone documentation
+ + Updated portal URIs to include tenant for authentication
+ + Removed experiment name from run URIs to avoid redirects
+ + Updated experiment URI to use experiment ID.
+ + Bug fixes for attaching remote compute using az cli
+ + Updated portal URIs to include tenant for authentication.
+ + Added support to understand File Dataset partitions based on glob structure.
+ + **azureml-interpret**
+ + azureml-interpret updated to use interpret-community 0.17.0
+ + **azureml-opendatasets**
+ + Input start date and end date type validation and error indication if it's not datetime type.
+ + **azureml-pipeline-core**
+ + Bugfix - supporting path_on_compute while passing dataset configuration as download.
+ + **azureml-pipeline-steps**
+ + Bugfix - supporting path_on_compute while passing dataset configuration as download.
+ + Deprecate RScriptStep in favor of using CommandStep for running R scripts in pipelines.
+ + Deprecate EstimatorStep in favor of using CommandStep for running ML training (including distributed training) in pipelines.
+ + **azureml-train-automl-runtime**
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-train-core**
+ + Deprecated DockerSection's 'enabled', 'shared_volume', and 'arguments' attributes in favor of using DockerConfiguration with ScriptRunConfig.
+ + Use Azure Open Datasets for MNIST dataset
+ + Hyperdrive error messages have been updated.
++

## 2021-03-31

### Azure Machine Learning Studio Notebooks Experience (March Update)

+ **New features**
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ Links are now clickable in Terminal
+ Improved Intellisense performance

## 2021-03-08

### Azure Machine Learning SDK for Python v1.24.0
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ Added functionality to filter Tabular Datasets by column values and File Datasets by metadata.
+ **azureml-contrib-fairness**
  + Include JSON schema in wheel for `azureml-contrib-fairness`
- + **azureml-contrib-k8s**
- + Must now provide resource_id to attach instead of resource group and cluster name.
+ **azureml-contrib-mir**
  + With show_output set to True when deploying models, the inference configuration and deployment configuration will be replayed before sending the request to the server.
+ **azureml-core**
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-automated-ml.md
Automated machine learning, also referred to as automated ML or AutoML, is the p
Traditional machine learning model development is resource-intensive, requiring significant domain knowledge and time to produce and compare dozens of models. With automated machine learning, you'll greatly reduce the time it takes to get production-ready ML models.
+## AutoML in Azure Machine Learning
+
+Azure Machine Learning offers two experiences for working with automated ML:
+
+* For code experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Use automated machine learning to predict taxi fares](tutorial-auto-train-models.md).
+
+* For limited/no code experience customers, Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with these tutorials:
+ * [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
+ * [Tutorial: Forecast demand with automated machine learning](tutorial-automated-ml-forecast.md)
++

## When to use AutoML: classify, regression, & forecast

Apply automated ML when you want Azure Machine Learning to train and tune a model for you using the target metric you specify. Automated ML democratizes the machine learning model development process, and empowers its users, no matter their data science expertise, to identify an end-to-end machine learning pipeline for any problem.
For example, building a model __for each instance or individual__ in the followi
* Predictive maintenance for hundreds of oil wells * Tailoring an experience for individual users.
-## AutoML in Azure Machine Learning
-
-Azure Machine Learning offers two experiences for working with automated ML:
-
-* For code experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro)
-
-* For limited/no code experience customers, Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/)
- <a name="parity"></a> ### Experiment settings
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-kubernetes.md
Previously updated : 03/11/2021 Last updated : 04/08/2021 # Create and attach an Azure Kubernetes Service cluster
Azure Machine Learning can deploy trained machine learning models to Azure Kuber
- __Do not directly update the cluster by using a YAML configuration__. While Azure Kubernetes Services supports updates via YAML configuration, Azure Machine Learning deployments will override your changes. The only two YAML fields that will not be overwritten are __request limits__ and __cpu and memory__.
+- Creating an AKS cluster using the Azure Machine Learning studio UI, SDK, or CLI extension is __not__ idempotent. Attempting to create the resource again will result in an error that a cluster with the same name already exists.
+
+ - Using an Azure Resource Manager template and the [Microsoft.MachineLearningServices/workspaces/computes](/azure/templates/microsoft.machinelearningservices/2019-11-01/workspaces/computes) resource to create an AKS cluster is also __not__ idempotent. If you attempt to use the template again to update an already existing resource, you will receive the same error.
+
## Azure Kubernetes Service version

Azure Kubernetes Service allows you to create a cluster using a variety of Kubernetes versions. For more information on available versions, see [supported Kubernetes versions in Azure Kubernetes Service](../aks/supported-kubernetes-versions.md).
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
At model deployment time, for a successful model deployment AKS node should be a
After the model is deployed and service starts, azureml-fe will automatically discover it using AKS API and will be ready to route request to it. It must be able to communicate to model PODs. >[!Note]
->If the deployed model requires any connectivity (e.g. querying external database or other REST service, downloading a BLOG etc), then both DNS resolution and outbound communication for these services should be enabled.
+>If the deployed model requires any connectivity (e.g. querying external database or other REST service, downloading a BLOB etc), then both DNS resolution and outbound communication for these services should be enabled.
## Deploy to AKS
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-identity-based-data-access.md
Title: Identity-based data access to storage services on Azure-
-description: Learn how to use identity-based data access to connect to storage services on Azure.
+
+description: Learn how to use identity-based data access to connect to storage services on Azure with Azure Machine Learning datastores and the Machine Learning Python SDK.
Last updated 02/22/2021
-# Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my compute to train my machine learning models.
+# Customer intent: As an experienced Python developer, I need to make my data in Azure Storage available to my compute to train my machine learning models.
-# Connect to storage with identity-based data access (preview)
+# Connect to storage by using identity-based data access (preview)
>[!IMPORTANT]
-> The functionalities presented in this article are in preview, and should be considered [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features that may change at any time.
+> The features presented in this article are in preview. They should be considered [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features that might change at any time.
-In this article, you learn how to connect to storage services on Azure with identity-based data access and Azure Machine Learning datastores via the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
+In this article, you learn how to connect to storage services on Azure by using identity-based data access and Azure Machine Learning datastores via the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
-Typically, datastores use credential-based data access to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in your [Key Vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses identity-based data access, your Azure login ([Azure Active Directory token](../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In this scenario, no authentication credentials are saved, and only the storage account information is stored in the datastore.
+Typically, datastores use credential-based data access to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses identity-based data access, your Azure account ([Azure Active Directory token](../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In this scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
-To create datastores that use credential-based authentication, like with access keys or service principals, see [Connect to storage services on Azure](how-to-access-data.md).
+To create datastores that use credential-based authentication, like access keys or service principals, see [Connect to storage services on Azure](how-to-access-data.md).
## Identity-based data access in Azure Machine Learning
-There are two areas to apply identity-based data access in Azure Machine Learning. Especially, when working with confidential data and need more granular data access management.
+There are two scenarios in which you can apply identity-based data access in Azure Machine Learning. These scenarios are a good fit for identity-based access when you're working with confidential data and need more granular data access management:
-1. Accessing storage services.
-1. Training machine learning models with private data.
+- Accessing storage services
+- Training machine learning models with private data
### Accessing storage services You can connect to storage services via identity-based data access with Azure Machine Learning datastores or [Azure Machine Learning datasets](how-to-create-register-datasets.md).
-Usually, your authentication credentials are kept in a datastore, which is used to ensure you have permission to access the storage service. When these credentials are registered with datastores, any user with the workspace *Reader* role is able to retrieve them--which can be a security concern for some organizations. [Learn more about the workspace *Reader* role](how-to-assign-roles.md#default-roles).
+Your authentication credentials are usually kept in a datastore, which is used to ensure you have permission to access the storage service. When these credentials are registered via datastores, any user with the workspace Reader role can retrieve them. That scale of access can be a security concern for some organizations. [Learn more about the workspace Reader role.](how-to-assign-roles.md#default-roles)
-When you use identity-based data access, Azure Machine Learning prompts you for your Azure Active Directory token for data access authentication, instead of keeping your credentials in the datastore. Which allows for data access management at the storage level and keeps credentials confidential.
+When you use identity-based data access, Azure Machine Learning prompts you for your Azure Active Directory token for data access authentication instead of keeping your credentials in the datastore. That approach allows for data access management at the storage level and keeps credentials confidential.
-The same behavior applies when you,
+The same behavior applies when you:
-* [Create a dataset directly from storage urls](#use-data-in-storage).
-* Work with data interactively via a Jupyter notebook on your local machine or [compute instance](concept-compute-instance.md).
+* [Create a dataset directly from storage URLs](#use-data-in-storage).
+* Work with data interactively via a Jupyter Notebook on your local computer or [compute instance](concept-compute-instance.md).
> [!NOTE]
-> Credentials stored using credential-based authentication include: subscription ID, shared access signature (SAS) tokens, storage access keys and service principal information like, client ID and tenant ID.
+> Credentials stored via credential-based authentication include subscription IDs, shared access signature (SAS) tokens, and storage access key and service principal information, like client IDs and tenant IDs.
### Model training on private data
-Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without exposure to the confidential input data. In this scenario, a managed identity of the training compute is used for data access authentication. This way, storage admins can grant **Storage Blob Data Reader** access to the managed identity that the training compute uses to run the training job, instead of the individual data scientists. Learn how to [set up managed identity on a compute](how-to-create-attach-compute-cluster.md#managed-identity).
+Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without being exposed to the confidential input data. In this scenario, a managed identity of the training compute is used for data access authentication. This approach allows storage admins to grant Storage Blob Data Reader access to the managed identity that the training compute uses to run the training job. The individual data scientists don't need to be granted access. For more information, see [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#managed-identity).
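As a rough sketch of that setup, the snippet below creates a compute cluster with a system-assigned managed identity that a storage admin can then grant the Storage Blob Data Reader role; the workspace config, cluster name, and VM size are placeholder assumptions.

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

# System-assigned managed identity; grant it Storage Blob Data Reader on the storage account.
config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS3_V2',
                                               max_nodes=4,
                                               identity_type='SystemAssigned')
compute_target = ComputeTarget.create(ws, 'train-cluster', config)
compute_target.wait_for_completion(show_output=True)
```

With `identity_type='SystemAssigned'`, the identity is tied to the cluster's lifecycle, so access is granted to the compute rather than to individual data scientists.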
## Prerequisites - An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://aka.ms/AMLFree). -- An Azure storage account with a supported storage type. The following storage types are supported in preview.
+- An Azure storage account with a supported storage type. These storage types are supported in preview:
- [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md)
- - [Azure Data Lake Gen 1](../data-lake-store/index.yml)
- - [Azure Data Lake Gen 2](../storage/blobs/data-lake-storage-introduction.md)
- - [Azure SQL database](../azure-sql/database/sql-database-paas-overview.md)
+ - [Azure Data Lake Storage Gen1](../data-lake-store/index.yml)
+ - [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md)
+ - [Azure SQL Database](../azure-sql/database/sql-database-paas-overview.md)
- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install).
Certain machine learning scenarios involve training models with private data. In
## Storage access permissions
-To ensure you securely connect to your storage service on Azure, Azure Machine Learning requires that you have permission to access the corresponding data storage.
+To help ensure that you securely connect to your storage service on Azure, Azure Machine Learning requires that you have permission to access the corresponding data storage.
-Identity-based data access only supports connections to the following storage
+Identity-based data access supports connections to only the following storage services:
* Azure Blob Storage
-* Azure Data Lake Generation 1
-* Azure Data Lake Generation 2
-* Azure SQL database
+* Azure Data Lake Storage Gen1
+* Azure Data Lake Storage Gen2
+* Azure SQL Database
-To access these storage services, you must have at minimum **Storage Blob Data Reader** access. Learn more about [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader). Only storage account owners can [change your access level via the Azure portal](../storage/common/storage-auth-aad-rbac-portal.md).
+To access these storage services, you must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access. Only storage account owners can [change your access level via the Azure portal](../storage/common/storage-auth-aad-rbac-portal.md).
-If you are training a model on a remote compute target, the compute identity must be granted with at least **Storage Blob Data Reader** role from the storage service. Learn how to [set up managed identity on compute](how-to-create-attach-compute-cluster.md#managed-identity).
+If you're training a model on a remote compute target, the compute identity must be granted at least the Storage Blob Data Reader role from the storage service. Learn how to [set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#managed-identity).
## Work with virtual networks
-By default, Azure Machine Learning cannot communicate with a storage account that is behind a firewall or within a virtual network.
+By default, Azure Machine Learning can't communicate with a storage account that's behind a firewall or in a virtual network.
-Storage accounts can be configured to allow access only from within specific virtual networks, which requires additional configurations to ensure data is not leaked outside of the network. This behavior is the same for credential-based data access, see [what configurations are needed and how to apply them for virtual network scenarios](how-to-access-data.md#virtual-network).
+You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires additional steps to ensure data isn't leaked outside of the network. This behavior is the same for credential-based data access. For more information, see [How to configure virtual network scenarios](how-to-access-data.md#virtual-network).
## Create and register datastores
-When you register a storage service on Azure as a datastore, you automatically create and register that datastore to a specific workspace. Review these sections: [storage access permissions](#storage-access-permissions) for guidance on required permission types, and [work with virtual network](#work-with-virtual-networks) for details on how to connect to data storage behind virtual networks.
+When you register a storage service on Azure as a datastore, you automatically create and register that datastore to a specific workspace. See [Storage access permissions](#storage-access-permissions) for guidance on required permission types. See [Work with virtual networks](#work-with-virtual-networks) for details on how to connect to data storage behind virtual networks.
-In the following code, notice the absence of authentication parameters, like `sas_token`, `account_key`, `subscription_id`, or service principal `client_id`. This omission, indicates that Azure Machine Learning is to use identity-based data access to for authentication. Because creation of datastores typically happens interactively in a notebook or via the studio, your Azure Active Directory token is used for data access authentication.
+In the following code, notice the absence of authentication parameters like `sas_token`, `account_key`, `subscription_id`, and the service principal `client_id`. This omission indicates that Azure Machine Learning will use identity-based data access for authentication. Creation of datastores typically happens interactively in a notebook or via the studio. So your Azure Active Directory token is used for data access authentication.
> [!NOTE]
-> Datastore names should only consist of lowercase letters, digits and underscores.
+> Datastore names should consist only of lowercase letters, numbers, and underscores.
### Azure blob container To register an Azure blob container as a datastore, use [`register_azure_blob_container()`](/python/api/azureml-core/azureml.core.datastore%28class%29#register-azure-blob-container-workspace--datastore-name--container-name--account-name--sas-token-none--account-key-none--protocol-none--endpoint-none--overwrite-false--create-if-not-exists-false--skip-validation-false--blob-cache-timeout-none--grant-workspace-access-false--subscription-id-none--resource-group-none-).
-The following code creates and registers the `credentialless_blob` datastore to the `ws` workspace and assigns it to the variable, `blob_datastore`. This datastore accesses the `my_container_name` blob container on the `my-account-name` storage account.
+The following code creates the `credentialless_blob` datastore, registers it to the `ws` workspace, and assigns it to the `blob_datastore` variable. This datastore accesses the `my_container_name` blob container on the `my-account-name` storage account.
```Python
-# create blob datastore without credentials
+# Create blob datastore without credentials.
blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
                                                         datastore_name='credentialless_blob',
                                                         container_name='my_container_name',
                                                         account_name='my_account_name')
```
-### Azure Data Lake Storage Generation 1
+### Azure Data Lake Storage Gen1
-For an Azure Data Lake Storage Generation 1 (ADLS Gen 1) datastore, use [register_azure_data_lake()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-workspace--datastore-name--store-name--tenant-id-none--client-id-none--client-secret-none--resource-url-none--authority-url-none--subscription-id-none--resource-group-none--overwrite-false--grant-workspace-access-false-) to register a datastore that connects to an Azure DataLake Generation 1 storage.
+Use [register_azure_data_lake()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-workspace--datastore-name--store-name--tenant-id-none--client-id-none--client-secret-none--resource-url-none--authority-url-none--subscription-id-none--resource-group-none--overwrite-false--grant-workspace-access-false-) to register a datastore that connects to Azure Data Lake Storage Gen1.
-The following code creates and registers the `credentialless_adls1` datastore to the `workspace` workspace and assigns it to the variable, `adls_dstore`. This datastore accesses the `adls_storage` Azure Data Lake Store storage account.
+The following code creates the `credentialless_adls1` datastore, registers it to the `workspace` workspace, and assigns it to the `adls_dstore` variable. This datastore accesses the `adls_storage` Azure Data Lake Storage account.
```Python
-# create adls gen1 without credentials
+# Create Azure Data Lake Storage Gen1 datastore without credentials.
adls_dstore = Datastore.register_azure_data_lake(workspace=workspace,
                                                 datastore_name='credentialless_adls1',
                                                 store_name='adls_storage')
```
-### Azure Data Lake Storage Generation 2
+### Azure Data Lake Storage Gen2
-For an Azure Data Lake Storage Generation 2 (ADLS Gen 2) datastore, use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-gen2-workspace--datastore-name--filesystem--account-name--tenant-id--client-id--client-secret--resource-url-none--authority-url-none--protocol-none--endpoint-none--overwrite-false-) to register a datastore that connects to an Azure DataLake Gen 2 storage.
+Use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-gen2-workspace--datastore-name--filesystem--account-name--tenant-id--client-id--client-secret--resource-url-none--authority-url-none--protocol-none--endpoint-none--overwrite-false-) to register a datastore that connects to Azure Data Lake Storage Gen2.
-The following code creates and registers the `credentialless_adls2` datastore to the `ws` workspace and assigns it to the variable, `adls2_dstore`. This datastore accesses the file system `tabular` in the `myadls2` storage account.
+The following code creates the `credentialless_adls2` datastore, registers it to the `ws` workspace, and assigns it to the `adls2_dstore` variable. This datastore accesses the file system `tabular` in the `myadls2` storage account.
```python
-# createn adls2 datastore without credentials
+# Create Azure Data Lake Storage Gen2 datastore without credentials.
adls2_dstore = Datastore.register_azure_data_lake_gen2(workspace=ws,
                                                       datastore_name='credentialless_adls2',
                                                       filesystem='tabular',
                                                       account_name='myadls2')
```
## Use data in storage
-[Azure Machine Learning datasets](how-to-create-register-datasets.md) are the recommended way to interact with your data in storage with Azure Machine Learning.
+We recommend that you use [Azure Machine Learning datasets](how-to-create-register-datasets.md) when you interact with your data in storage with Azure Machine Learning.
-Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training. Also, with datasets you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like, Azure Blob storage and Azure Data Lakes, to a compute target.
+Datasets package your data into a lazily evaluated consumable object for machine learning tasks like training. Also, with datasets you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like Azure Blob Storage and Azure Data Lake Storage to a compute target.
-**To create datasets with identity-based data access**, you have the following options. This type of dataset creation employs your Azure Active Directory token for data access authentication.
+To create datasets with identity-based data access, you have the following options. This type of dataset creation uses your Azure Active Directory token for data access authentication.
* Reference paths from datastores that also use identity-based data access.
-<br>In the following example, `blob_datastore` was previously created using identity-based data access.
+<br>In the following example, `blob_datastore` already exists and uses identity-based data access.
```python
blob_dataset = Dataset.Tabular.from_delimited_files(blob_datastore, 'test.csv')
```
-* Skip datastore creation and create datasets directly from storage urls. Currently this functionality only supports Azure Blobs and Azure Data Lake Storage Generations 1 and 2.
+* Skip datastore creation and create datasets directly from storage URLs. This functionality currently supports only Azure blobs and Azure Data Lake Storage Gen1 and Gen2.
```python
blob_dset = Dataset.File.from_files('https://myblob.blob.core.windows.net/may/keras-mnist-fashion/')
```
-**However, when you submit a training job that consumes a dataset created with identity-based data access**, the managed identity of the training compute is used for data access authentication, instead of your Azure Active Directory token. For this scenario, ensure that the managed identity of the compute is granted with at least **Storage Blob Data Reader** role from the storage service. Learn how to [set up managed identity on compute](how-to-create-attach-compute-cluster.md#managed-identity).
+When you submit a training job that consumes a dataset created with identity-based data access, the managed identity of the training compute is used for data access authentication. Your Azure Active Directory token isn't used. For this scenario, ensure that the managed identity of the compute is granted at least the Storage Blob Data Reader role from the storage service. For more information, see [Set up managed identity on compute clusters](how-to-create-attach-compute-cluster.md#managed-identity).
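A minimal sketch of such a submission, assuming the `blob_dset` dataset from the earlier example, a hypothetical `train.py` script, and a compute target named `train-cluster` whose managed identity already has Storage Blob Data Reader access:

```python
from azureml.core import Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()

# The compute target's managed identity, not your Azure AD token, authenticates to storage.
src = ScriptRunConfig(source_directory='.',
                      script='train.py',  # hypothetical training script
                      arguments=['--data', blob_dset.as_named_input('data').as_mount()],
                      compute_target='train-cluster')
run = Experiment(ws, 'identity-based-access').submit(src)
run.wait_for_completion(show_output=True)
```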
## Next steps
-* [Create an Azure machine learning dataset](how-to-create-register-datasets.md).
-* [Train with datasets](how-to-train-with-datasets.md).
-* [Create a datastore with key-based data access](how-to-access-data.md).
+* [Create an Azure Machine Learning dataset](how-to-create-register-datasets.md)
+* [Train with datasets](how-to-train-with-datasets.md)
+* [Create a datastore with key-based data access](how-to-access-data.md)
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
# Link Azure Synapse Analytics and Azure Machine Learning workspaces (preview)
-In this article, you learn how to create a linked service that links your [Azure Synapse Analytics](/synapse-analytics/overview-what-is.md) workspace and [Azure Machine Learning workspace](concept-workspace.md).
+In this article, you learn how to create a linked service that links your [Azure Synapse Analytics](/azure/synapse-analytics/overview-what-is) workspace and [Azure Machine Learning workspace](concept-workspace.md).
With your Azure Machine Learning workspace linked with your Azure Synapse workspace, you can attach an Apache Spark pool as a dedicated compute for data wrangling at scale and conduct model training from the same notebook.
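A minimal sketch of registering such a linked service with the SDK; the resource group and Synapse workspace names are placeholders:

```python
from azureml.core import Workspace, LinkedService, SynapseWorkspaceLinkedServiceConfiguration

ws = Workspace.from_config()

# Placeholder names; point these at your own Synapse workspace.
synapse_link_config = SynapseWorkspaceLinkedServiceConfiguration(
    subscription_id=ws.subscription_id,
    resource_group='my-resource-group',
    name='mySynapseWorkspace')

linked_service = LinkedService.register(workspace=ws,
                                        name='synapselink',
                                        linked_service_config=synapse_link_config)
```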
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-auto-ml.md
Previously updated : 03/08/2020 Last updated : 03/08/2021
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-labeled-dataset.md
imgplot = plt.imshow(gray_image)
## Next steps
-* See the [dataset with labels notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/labeled-datasets/labeled-datasets.ipynb) for complete training sample.
+* See the [dataset with labels notebook](/azure/machine-learning/how-to-use-labeled-dataset) for a complete training sample.
machine-learning How To Use Pipeline Parameter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-pipeline-parameter.md
Previously updated : 03/19/2021 Last updated : 04/09/2021
If you want to submit your pipeline with variable datasets, you must promote your dataset as a pipeline parameter.
You can now specify a different dataset by using the pipeline parameter the next time you run the pipeline.
-## Attach module parameter to pipeline parameter
+## Attach and detach module parameters
-In this section, you will learn how to attach module parameter to pipeline parameter.
+In this section, you will learn how to attach a module parameter to a pipeline parameter and detach it.
+
+### Attach a module parameter to a pipeline parameter
You can attach the same module parameter of duplicated modules to one pipeline parameter if you want to alter all of their values at once when you trigger a pipeline run.
The following example has duplicated **Clean Missing Data** modules. For each **Clean Missing Data** module, attach the **Replacement value** field to the same pipeline parameter:
![Screenshot that shows how to attach a pipeline parameter](media/how-to-use-pipeline-parameter/attach-replace-value-to-pipeline-parameter.png)
-You have successfully attached the **Replacement value** field to your pipeline parameter. The **Replacement value** in the modules are non-actionable.
+You have successfully attached the **Replacement value** field to your pipeline parameter.
- ![Screenshot that shows non-actionable after attaching to pipeline parameter](media/how-to-use-pipeline-parameter/non-actionable-module-parameter.png)
+### Detach a module parameter from a pipeline parameter
+
+After you attach the **Replacement value** field to a pipeline parameter, it becomes non-actionable.
+
+You can detach a module parameter from a pipeline parameter by selecting the ellipses (**...**) next to the module parameter and then selecting **Detach from pipeline parameter**.
+
+ ![Screenshot that shows non-actionable after attaching to pipeline parameter](media/how-to-use-pipeline-parameter/non-actionable-module-parameter.png)
## Update and delete pipeline parameters
Use the following steps to update a module pipeline parameter:
### Delete a dataset pipeline parameter
-Use the following steps to detach a dataset pipeline parameter:
+Use the following steps to delete a dataset pipeline parameter:
1. Select the dataset module. 1. Uncheck the option **Set as pipeline parameter**.
Use the following steps to delete a module pipeline parameter:
1. Select the ellipses (**...**) next to the pipeline parameter.
- This view shows you which modules the pipeline parameter is attached to. To delete a pipeline parameter, you must first detach it from any module parameters.
-
- ![Screenshot that shows the current pipeline parameter applied to a module](media/how-to-use-pipeline-parameter/current-pipeline-parameter.png)
+ This view shows you which modules the pipeline parameter is attached to.
-1. In the canvas, select a module that the pipeline parameter is still attached to.
-1. In the module properties pane to the right, find the field that the pipeline parameter is attached to.
-1. Mouseover the attached field. Then, select the ellipses (**...**) that appear.
-1. Select **Detach from pipeline parameter**
+ ![Screenshot that shows the current pipeline parameter applied to a module](media/how-to-use-pipeline-parameter/delete-pipeline-parameter2.png)
- ![Screenshot that shows detaching from pipeline parameters](media/how-to-use-pipeline-parameter/detach-from-pipeline-parameter.png)
-
-1. Repeat the previous steps until you detached the pipeline parameter from all fields.
-1. Select the ellipses (**...**) next to the pipeline parameter.
1. Select **Delete parameter** to delete the pipeline parameter.
- ![Screenshot that shows deleting pipeline parameters](media/how-to-use-pipeline-parameter/delete-pipeline-parameter.png)
+ > [!NOTE]
+ > Deleting a pipeline parameter detaches all of its attached module parameters. Each detached module parameter keeps the current pipeline parameter value.
## Trigger a pipeline run with pipeline parameters
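Although this article configures pipeline parameters in the designer, a published pipeline can also be triggered with parameter values from the SDK. The following is a minimal sketch; the pipeline ID, experiment name, and parameter name are hypothetical:

```python
from azureml.core import Workspace
from azureml.pipeline.core import PublishedPipeline

ws = Workspace.from_config()

# Hypothetical pipeline ID; pipeline_parameters keys must match the parameter names you created.
published_pipeline = PublishedPipeline.get(workspace=ws, id='<published-pipeline-id>')
run = published_pipeline.submit(workspace=ws,
                                experiment_name='designer-pipeline-run',
                                pipeline_parameters={'replace-missing-value': '0'})
```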
machine-learning Tutorial 1St Experiment Sdk Setup Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-setup-local.md
Title: "Tutorial: Get started with machine learning - Python"
-description: In this tutorial, you'll get started with the Azure Machine Learning SDK for Python running in your personal development environment.
+description: Get started with the Azure Machine Learning SDK for Python running in your personal development environment.
In part 1 of this tutorial series, you will:
> * Set up the directory structure for code. > * Create an Azure Machine Learning workspace. > * Configure your local development environment.
-> * Set up a compute cluster.
+> * Set up a compute cluster, a cloud-based resource for training your models.
-> [!NOTE]
-> This tutorial series focuses on the Azure Machine Learning concepts required to submit **batch jobs** - this is where the code is submitted to the cloud to run in the background without any user interaction. This is useful for finished scripts or code you wish to run repeatedly, or for compute-intensive machine learning tasks. If you are more interested in an exploratory workflow, you could instead use [Jupyter or RStudio on an Azure Machine Learning compute instance](tutorial-1st-experiment-sdk-setup.md).
+This tutorial series focuses on the Azure Machine Learning concepts required to submit **batch jobs**, where code is submitted to the cloud to run in the background without any user interaction. Batch jobs are useful for finished scripts or code you want to run repeatedly, and for compute-intensive machine learning tasks.
+
+Another great way to start using Azure Machine Learning is with Jupyter notebooks. See [Tutorial: Get started with Azure Machine Learning in Jupyter Notebooks](tutorial-1st-experiment-sdk-setup.md).
## Prerequisites - An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try [Azure Machine Learning](https://aka.ms/AMLFree). - [Anaconda](https://www.anaconda.com/download/) or [Miniconda](https://www.anaconda.com/download/) to manage Python virtual environments and install packages. - If you're not familiar with using conda, see [Getting started with conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html).
+- Any IDE or text editor to create your Python scripts.
## Install the Azure Machine Learning SDK
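After installation, a quick sanity check confirms the SDK imports in your active environment (a minimal sketch; your printed version will differ):

```python
# Confirm the Azure Machine Learning SDK is installed in the active environment.
import azureml.core

print(azureml.core.VERSION)
```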
machine-learning Tutorial 1St Experiment Sdk Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-setup.md
In this tutorial, you:
> [!div class="checklist"] > * Create an [Azure Machine Learning workspace](concept-workspace.md) to use in other Jupyter Notebook tutorials. > * Clone the tutorials notebook to your folder in the workspace.
-> * Create a cloud-based compute instance with the Azure Machine Learning Python SDK installed and preconfigured.
+> * Create a cloud-based compute instance, which gives you an environment with the Azure Machine Learning Python SDK already installed and configured for you.
+
+This tutorial prepares you to run Jupyter notebooks on a compute resource in your workspace.
+
+Another great way to start with Azure Machine Learning is by submitting batch jobs. See [Tutorial: Get started with Azure Machine Learning in your development environment](tutorial-1st-experiment-sdk-setup-local.md).
+
+## Prerequisites
If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://aka.ms/AMLFree) today.
machine-learning Tutorial Auto Train Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-auto-train-models.md
Title: 'Regression tutorial: Automated ML'
+ Title: 'Tutorial: Regression with automated machine learning'
-description: Create an automated machine learning experiment that generates a regression model for you based on the training data and configuration settings you provide.
+description: Write code with the Python SDK to create an automated machine learning experiment that generates a regression model for you.
# Tutorial: Use automated machine learning to predict taxi fares -
-In this tutorial, you use automated machine learning in Azure Machine Learning to create a regression model to predict NYC taxi fare prices. This process accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
+In this tutorial, you use automated machine learning in the Azure Machine Learning SDK to create a [regression model](concept-automated-ml.md#regression) to predict NYC taxi fare prices. This process accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
![Flow diagram](./media/tutorial-auto-train-models/flow2.png)
-In this tutorial you learn the following tasks:
+You'll write code using the Python SDK in this tutorial. You'll learn the following tasks:
> [!div class="checklist"] > * Download, transform, and clean data using Azure Open Datasets > * Train an automated machine learning regression model > * Calculate model accuracy
-If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version](https://aka.ms/AMLFree) of Azure Machine Learning today.
+Also try automated machine learning for these other model types:
+
+* [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md) - a no-code example.
+* [Tutorial: Forecast demand with automated machine learning](tutorial-automated-ml-forecast.md) - a no-code example.
## Prerequisites
+If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version](https://aka.ms/AMLFree) of Azure Machine Learning today.
+ * Complete the [setup tutorial](tutorial-1st-experiment-sdk-setup.md) if you don't already have an Azure Machine Learning workspace or notebook virtual machine. * After you complete the setup tutorial, open the *tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb* notebook using the same notebook server.
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-automated-ml-forecast.md
Title: 'Tutorial: Demand forecasting & AutoML'
-description: Learn how to train and deploy a demand forecasting model with automated machine learning in Azure Machine Learning studio.
+description: Train and deploy a demand forecasting model with automated machine learning in Azure Machine Learning studio.