Updates from: 03/12/2022 02:09:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Previously updated : 02/15/2022 Last updated : 03/11/2022 -+
For the first test scenario, configure the authentication policy where the Issue
### Test multifactor authentication
-For the next test scenario, configure the authentication policy where the Issuer subject rule satisfies multifactor authentication.
+For the next test scenario, configure the authentication policy where the **policyOID** rule satisfies multifactor authentication.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/multifactor.png" alt-text="Screenshot of the Authentication policy configuration showing multifactor authentication required." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/multifactor.png":::
active-directory Cloudknox Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-aws.md
Previously updated : 03/09/2022 Last updated : 03/10/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW. > Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-> [!Note]
-> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
This article describes how to onboard an Amazon Web Services (AWS) account on CloudKnox Permissions Management (CloudKnox). > [!NOTE] > A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable CloudKnox on your Azure Active Directory tenant](cloudknox-onboard-enable-tenant.md).
-## Prerequisites
--- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).-- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms). ## View a training video on configuring and onboarding an AWS account
active-directory Cloudknox Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-azure.md
Previously updated : 03/09/2022 Last updated : 03/10/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW. > Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-> [!Note]
-> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
- This article describes how to onboard a Microsoft Azure subscription or subscriptions on CloudKnox Permissions Management (CloudKnox). Onboarding a subscription creates a new authorization system to represent the Azure subscription in CloudKnox. > [!NOTE]
This article describes how to onboard a Microsoft Azure subscription or subscrip
To add CloudKnox to your Azure AD tenant: - You must have an Azure AD user account and an Azure command-line interface (Azure CLI) on your system, or an Azure subscription. If you don't already have one, [create a free account](https://azure.microsoft.com/free/). - You must have **Microsoft.Authorization/roleAssignments/write** permission at the subscription or management group scope to perform these tasks. If you don't have this permission, you can ask someone who has this permission to perform these tasks for you.-- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).-- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).+ ## View a training video on enabling CloudKnox in your Azure AD tenant
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
Previously updated : 03/09/2022 Last updated : 03/10/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW. > Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-> [!Note]
-> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
This article describes how to enable CloudKnox Permissions Management (CloudKnox) in your organization. Once you've enabled CloudKnox, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms.
To enable CloudKnox in your organization:
- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/). - You must be eligible for or have an active assignment to the global administrator role as a user in that tenant.-- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).-- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).+ > [!NOTE] > During public preview, CloudKnox doesn't perform a license check.
active-directory Cloudknox Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-gcp.md
Previously updated : 02/24/2022 Last updated : 03/10/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW. > Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-> [!Note]
-> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
This article describes how to onboard a Google Cloud Platform (GCP) project on CloudKnox Permissions Management (CloudKnox). > [!NOTE] > A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable CloudKnox on your Azure Active Directory tenant](cloudknox-onboard-enable-tenant.md).
-## Prerequisites
--- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).-- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms). ## Onboard a GCP project
active-directory Cloudknox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-overview.md
Previously updated : 02/23/2022 Last updated : 03/10/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW. > Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-> [!Note]
-> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
- ## Overview CloudKnox Permissions Management (CloudKnox) is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
active-directory Migrate Spa Implicit To Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-spa-implicit-to-auth-code.md
For additional changes you might need to make to your code, see the [migration g
## Disable implicit grant settings
-Once you've updated all your production applications that use this app registration and its client ID to MSAL 2.x and the authorization code flow, you should uncheck the implicit grant settings in the app registration.
+Once you've updated all your production applications that use this app registration and its client ID to MSAL 2.x and the authorization code flow, you should uncheck the implicit grant settings under the **Authentication** menu of the app registration.
When you uncheck the implicit grant settings in the app registration, the implicit flow is disabled for all applications using this registration and its client ID.
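For reference, the following is a minimal sketch of an MSAL.js 2.x (`msal-browser`) setup that uses the authorization code flow with PKCE instead of the implicit flow. The client ID, redirect URI, and scopes shown are placeholders, not values from this article.

```javascript
// Minimal msal-browser (MSAL.js 2.x) sketch; all values below are placeholders.
const msalConfig = {
  auth: {
    clientId: "00000000-0000-0000-0000-000000000000", // placeholder application (client) ID
    redirectUri: "https://localhost:3000",            // placeholder redirect URI registered as type "SPA"
  },
};

const msalInstance = new msal.PublicClientApplication(msalConfig);

// Signs the user in with the authorization code flow (with PKCE); no implicit grant is used.
msalInstance.loginRedirect({ scopes: ["User.Read"] });
```

Once all production traffic uses this flow, the implicit grant settings on the registration can be safely unchecked.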
active-directory Msal Js Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-sso.md
When your application is open in multiple tabs and you first sign in the user on
By default, MSAL.js uses `sessionStorage`, which doesn't allow the session to be shared between tabs. To get SSO between tabs, make sure to set the `cacheLocation` in MSAL.js to `localStorage` as shown below. ```javascript+ const config = { auth: { clientId: "abcd-ef12-gh34-ikkl-ashdjhlhsdg",
const config = {
}, };
-const myMSALObj = new UserAgentApplication(config);
+const msalInstance = new msal.PublicClientApplication(config);
``` ## SSO between apps
var request = {
sid: sid, };
-userAgentApplication
- .acquireTokenSilent(request)
+ msalInstance.acquireTokenSilent(request)
.then(function (response) { const token = response.accessToken; })
var request = {
extraQueryParameters: { domain_hint: "organizations" }, };
-userAgentApplication.loginRedirect(request);
+ msalInstance.loginRedirect(request);
``` You can get the values for **login_hint** and **domain_hint** by reading the claims returned in the ID token for the user:
To get the values for login_hint and domain_hint by reading the claims returned
- **domain_hint** is only required to be passed when using the /common authority. The domain hint is determined by the tenant ID (`tid`). If the `tid` claim in the ID token is `9188040d-6c67-4c5b-b112-36a304b66dad`, it's `consumers`. Otherwise, it's `organizations` (see the sketch that follows this list).
-For more information about **login_hint** and **domain_hint**, see [Implicit grant flow](v2-oauth2-implicit-grant-flow.md).
+For more information about **login_hint** and **domain_hint**, see [auth code grant](v2-oauth2-auth-code-flow.md).
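As a hedged illustration of the bullet above, the following sketch reads those claims from an MSAL.js account object and builds a request. The single-account lookup and the use of `preferred_username` as the login hint are assumptions about how your app tracks the signed-in user; the consumer tenant ID is the one called out above.

```javascript
// Sketch: derive login_hint and domain_hint from the signed-in user's ID token claims.
const account = msalInstance.getAllAccounts()[0]; // assumes a single signed-in account
const claims = account.idTokenClaims;

// The consumer tenant ID mentioned above; any other tid maps to "organizations".
const consumersTenantId = "9188040d-6c67-4c5b-b112-36a304b66dad";

const request = {
  loginHint: claims.preferred_username, // sent as login_hint
  extraQueryParameters: {
    domain_hint: claims.tid === consumersTenantId ? "consumers" : "organizations",
  },
};
```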
## SSO without MSAL.js login
var request = {
extraQueryParameters: { domain_hint: "organizations" }, };
-userAgentApplication
- .acquireTokenSilent(request)
+msalInstance.acquireTokenSilent(request)
.then(function (response) { const token = response.accessToken; })
MSAL.js brings feature parity with ADAL.js for Azure AD authentication scenarios
To take advantage of the SSO behavior when updating from ADAL.js, you'll need to ensure the libraries are using `localStorage` for caching tokens. Set the `cacheLocation` to `localStorage` in both the MSAL.js and ADAL.js configuration at initialization as follows: ```javascript+ // In ADAL.js window.config = { clientId: "g075edef-0efa-453b-997b-de1337c29185",
const config = {
}, };
-const myMSALObj = new UserAgentApplication(config);
+const msalInstance = new msal.PublicClientApplication(config);
``` Once the `cacheLocation` is configured, MSAL.js can read the cached state of the authenticated user in ADAL.js and use that to provide SSO in MSAL.js.
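To make the shared cache configuration concrete, here's a minimal sketch with both libraries pointing at `localStorage`. The client ID is a placeholder, and the sketch assumes both `adal.js` and `msal-browser` are loaded on the page.

```javascript
// ADAL.js configuration (assumes adal.js is loaded on the page).
window.config = {
  clientId: "00000000-0000-0000-0000-000000000000", // placeholder client ID
  cacheLocation: "localStorage",                    // share the cache with MSAL.js
};
const authContext = new AuthenticationContext(window.config);

// MSAL.js configuration using the same client ID and cache location.
const msalConfig = {
  auth: {
    clientId: "00000000-0000-0000-0000-000000000000", // same placeholder client ID
  },
  cache: {
    cacheLocation: "localStorage",
  },
};
const msalInstance = new msal.PublicClientApplication(msalConfig);
```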
active-directory Secure Group Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/secure-group-access-control.md
+
+ Title: Secure access control using groups in Azure AD - Microsoft identity platform
+description: Learn about how groups are used to securely control access to resources in Azure AD.
++++++++ Last updated : 2/21/2022++++
+# Customer intent: As a developer, I want to learn how to most securely use Azure AD groups to control access to resources.
++
+# Secure access control using groups in Azure AD
+
+Azure Active Directory (Azure AD) allows the use of groups to manage access to resources in an organization. You should use groups for access control when you want to manage and minimize access to applications. When groups are used, only members of those groups can access the resource. Using groups also allows you to benefit from several Azure AD group management features, such as attribute-based dynamic groups, external groups synced from on-premises Active Directory, and Administrator managed or self-service managed groups. To learn more about the benefits of groups for access control, see [manage access to an application](../manage-apps/what-is-access-management.md).
+
+While developing an application, you can authorize access with the [groups claim](/graph/api/resources/application?view=graph-rest-1.0#properties&preserve-view=true). To learn more, see how to [configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
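As an illustrative sketch, a client application could authorize based on the groups claim in the ID token as shown below. This assumes group claims are configured for the app registration, and the group ID is a placeholder.

```javascript
// Sketch: allow access only to members of a specific group, using the groups claim.
// Assumes the app registration is configured to emit group claims in the ID token.
const requiredGroupId = "11111111-1111-1111-1111-111111111111"; // placeholder group object ID

function isAuthorized(idTokenClaims) {
  const groups = idTokenClaims.groups || []; // array of group object IDs, if emitted
  return groups.includes(requiredGroupId);
}
```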
+
+Today, many applications select a subset of groups with the *securityEnabled* flag set to *true* to avoid scale challenges, that is, to reduce the number of groups returned in the token. Setting the *securityEnabled* flag to *true* for a group doesn't guarantee that the group is securely managed. Therefore, we suggest following the best practices described below:
++
+## Best practices to mitigate risk
+
+This table presents several security best practices for security groups and the potential security risks each practice mitigates.
+
+|Security best practice |Security risk mitigated |
+|--|--|
+|**Ensure resource owner and group owner are the same principal**. Applications should build their own group management experience and create new groups to manage access. For example, an application can create groups with the *Group.Create* permission and add itself as the owner of the group (see the sketch after this table). This way the application has control over its groups without being over-privileged to modify other groups in the tenant.|When group owners and resource owners are different users or entities, group owners can add users to the group who aren't supposed to get access to the resource and thus give access to the resource unintentionally.|
+|**Build an implicit contract between resource owner(s) and group owner(s)**. The resource owner and the group owner should align on the group purpose, policies, and members that can be added to the group to get access to the resource. This level of trust is non-technical and relies on a human or business contract.|When group owners and resource owners have different intentions, the group owner may add users to the group that the resource owner didn't intend to give access to. This can result in unnecessary and potentially risky access.|
+|**Use private groups for access control**. Microsoft 365 groups are managed by the [visibility concept](/graph/api/resources/group?view=graph-rest-1.0#group-visibility-options&preserve-view=true). This property controls the join policy of the group and visibility of group resources. Security groups have join policies that either allow anyone to join or require owner approval. On-premises-synced groups can also be public or private. When they're used to give access to a resource in the cloud, users joining this group on-premises can get access to the cloud resource as well.|When you use a *Public* group for access control, any member can join the group and get access to the resource. When a *Public* group is used to give access to an external resource, the risk of elevation of privilege exists.|
+|**Group nesting**. When you use a group for access control and it has other groups as its members, members of the subgroups can get access to the resource. In this case, there are multiple group owners - owners of the parent group and the subgroups.|Aligning with multiple group owners on the purpose of each group and how to add the right members to these groups is more complex and more prone to accidental grant of access. Therefore, you should limit the number of nested groups or don't use them at all if possible.|
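To illustrate the first practice in the table, the following is a hedged sketch of creating a security group through Microsoft Graph and adding the calling application's service principal as its owner. The access token acquisition is out of scope here, and the service principal object ID is a placeholder; the app would need the *Group.Create* permission granted.

```javascript
// Sketch: create a security group owned by the calling app's service principal.
// `accessToken` is assumed to carry the Group.Create application permission.
const appServicePrincipalId = "22222222-2222-2222-2222-222222222222"; // placeholder object ID

async function createOwnedGroup(accessToken) {
  const response = await fetch("https://graph.microsoft.com/v1.0/groups", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      displayName: "Contoso app access group",
      mailEnabled: false,
      mailNickname: "contosoappaccess",
      securityEnabled: true,
      // Bind the app's service principal as the group owner at creation time.
      "owners@odata.bind": [
        `https://graph.microsoft.com/v1.0/servicePrincipals/${appServicePrincipalId}`,
      ],
    }),
  });
  return response.json();
}
```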
+
+## Next steps
+
+For more information about groups in Azure AD, see the following:
+
+- [Manage app and resource access using Azure Active Directory groups](../fundamentals/active-directory-manage-groups.md)
+- [Access with Azure Active Directory groups](/azure/devops/organizations/accounts/manage-azure-active-directory-groups)
+- [Restrict your Azure AD app to a set of users in an Azure AD tenant](./howto-restrict-your-app-to-a-set-of-users.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For on-premises environments, users with this role can configure domain names fo
In February 2021 we have added following 37 new applications in our App gallery with Federation support:
-[Loop Messenger Extension](https://loopworks.com/loop-flow-messenger/), [Silverfort Azure AD Adapter](http://www.silverfort.com/), [Interplay Learning](https://skilledtrades.interplaylearning.com/#login), [Nura Space](https://dashboard.nuraspace.com/login), [Yooz EU](https://eu1.getyooz.com/?kc_idp_hint=microsoft), [UXPressia](https://uxpressia.com/users/sign-in), [introDus Pre- and Onboarding Platform](http://app.introdus.dk/login), [Happybot](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=34353e1e-dfe5-4d2f-bb09-2a5e376270c8&response_type=code&redirect_uri=https://api.happyteams.io/microsoft/integrate&response_mode=query&scope=offline_access%20User.Read%20User.Read.All), [LeaksID](https://app.leaksid.com/), [ShiftWizard](http://www.shiftwizard.com/), [PingFlow SSO](https://app.pingview.io/), [Swiftlane](https://admin.swiftlane.com/login), [Quasydoc SSO](https://www.quasydoc.eu/login), [Fenwick Gold Account](https://businesscentral.dynamics.com/), [SeamlessDesk](https://www.seamlessdesk.com/login), [Learnsoft LMS & TMS](http://www.learnsoft.com/), [P-TH+](https://p-th.jp/), [myViewBoard](https://api.myviewboard.com/auth/microsoft/), [Tartabit IoT Bridge](https://bridge-us.tartabit.com/), [AKASHI](../saas-apps/akashi-tutorial.md), [Rewatch](../saas-apps/rewatch-tutorial.md), [Zuddl](../saas-apps/zuddl-tutorial.md), [Parkalot - Car park management](../saas-apps/parkalot-car-park-management-tutorial.md), [HSB ThoughtSpot](../saas-apps/hsb-thoughtspot-tutorial.md), [IBMid](../saas-apps/ibmid-tutorial.md), [SharingCloud](../saas-apps/sharingcloud-tutorial.md), [PoolParty Semantic Suite](../saas-apps/poolparty-semantic-suite-tutorial.md), [GlobeSmart](../saas-apps/globesmart-tutorial.md), [Samsung Knox and Business Services](../saas-apps/samsung-knox-and-business-services-tutorial.md), [Penji](../saas-apps/penji-tutorial.md), [Kendis- Scaling Agile Platform](../saas-apps/kendis-scaling-agile-platform-tutorial.md), [Maptician](../saas-apps/maptician-tutorial.md), [Olfeo SAAS](../saas-apps/olfeo-saas-tutorial.md), [Sigma Computing](../saas-apps/sigma-computing-tutorial.md), [CloudKnox Permissions Management Platform](../saas-apps/cloudknox-permissions-management-platform-tutorial.md), [Klaxoon SAML](../saas-apps/klaxoon-saml-tutorial.md), [Enablon](../saas-apps/enablon-tutorial.md)
+[Loop Messenger Extension](https://loopworks.com/loop-flow-messenger/), [Silverfort Azure AD Adapter](http://www.silverfort.com/), [Interplay Learning](https://skilledtrades.interplaylearning.com/#login), [Nura Space](https://dashboard.nuraspace.com/login), [Yooz EU](https://eu1.getyooz.com/?kc_idp_hint=microsoft), [UXPressia](https://uxpressia.com/users/sign-in), [introDus Pre- and Onboarding Platform](http://app.introdus.dk/login), [Happybot](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=34353e1e-dfe5-4d2f-bb09-2a5e376270c8&response_type=code&redirect_uri=https://api.happyteams.io/microsoft/integrate&response_mode=query&scope=offline_access%20User.Read%20User.Read.All), [LeaksID](https://leaksid.com/), [ShiftWizard](http://www.shiftwizard.com/), [PingFlow SSO](https://app.pingview.io/), [Swiftlane](https://admin.swiftlane.com/login), [Quasydoc SSO](https://www.quasydoc.eu/login), [Fenwick Gold Account](https://businesscentral.dynamics.com/), [SeamlessDesk](https://www.seamlessdesk.com/login), [Learnsoft LMS & TMS](http://www.learnsoft.com/), [P-TH+](https://p-th.jp/), [myViewBoard](https://api.myviewboard.com/auth/microsoft/), [Tartabit IoT Bridge](https://bridge-us.tartabit.com/), [AKASHI](../saas-apps/akashi-tutorial.md), [Rewatch](../saas-apps/rewatch-tutorial.md), [Zuddl](../saas-apps/zuddl-tutorial.md), [Parkalot - Car park management](../saas-apps/parkalot-car-park-management-tutorial.md), [HSB ThoughtSpot](../saas-apps/hsb-thoughtspot-tutorial.md), [IBMid](../saas-apps/ibmid-tutorial.md), [SharingCloud](../saas-apps/sharingcloud-tutorial.md), [PoolParty Semantic Suite](../saas-apps/poolparty-semantic-suite-tutorial.md), [GlobeSmart](../saas-apps/globesmart-tutorial.md), [Samsung Knox and Business Services](../saas-apps/samsung-knox-and-business-services-tutorial.md), [Penji](../saas-apps/penji-tutorial.md), [Kendis- Scaling Agile Platform](../saas-apps/kendis-scaling-agile-platform-tutorial.md), [Maptician](../saas-apps/maptician-tutorial.md), [Olfeo SAAS](../saas-apps/olfeo-saas-tutorial.md), [Sigma Computing](../saas-apps/sigma-computing-tutorial.md), [CloudKnox Permissions Management Platform](../saas-apps/cloudknox-permissions-management-platform-tutorial.md), [Klaxoon SAML](../saas-apps/klaxoon-saml-tutorial.md), [Enablon](../saas-apps/enablon-tutorial.md)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For more information about how to better secure your organization by using autom
In January 2022, we've added the following 47 new applications in our App gallery with Federation support
-[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://auth.healthnote.works/oauth), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), [DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
+[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://www.healthnote.com/), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), [DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
active-directory Concept Azure Ad Connect Sync Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-architecture.md
When sync engine finds a staging object that matches by distinguished name but n
* If the object located in the connector space has no anchor, then sync engine removes this object from the connector space and marks the metaverse object it is linked to as **retry provisioning on next synchronization run**. Then it creates the new import object. * If the object located in the connector space has an anchor, then sync engine assumes that this object has either been renamed or deleted in the connected directory. It assigns a temporary, new distinguished name for the connector space object so that it can stage the incoming object. The old object then becomes **transient**, waiting for the Connector to import the rename or deletion to resolve the situation.
+Transient objects are not always a problem, and you might see them even in a healthy environment. With the [Azure AD Connect sync V2 endpoint API](how-to-connect-sync-endpoint-api-v2.md), transient objects should auto-resolve in subsequent delta synchronization cycles. A common example of transient objects being generated occurs on Azure AD Connect servers installed in staging mode, when an admin permanently deletes an object directly in Azure AD using PowerShell and later synchronizes the object again.
+ If sync engine locates a staging object that corresponds to the object specified in the Connector, it determines what kind of changes to apply. For example, sync engine might rename or delete the object in the connected data source, or it might only update the object's attribute values. Staging objects with updated data are marked as pending import. Different types of pending imports are available. Depending on the result of the import process, a staging object in the connector space has one of the following pending import types:
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-custom.md
On the next page, you can select optional features for your scenario.
| Optional features | Description | | | | | Exchange hybrid deployment |The Exchange hybrid deployment feature allows for the coexistence of Exchange mailboxes both on-premises and in Microsoft 365. Azure AD Connect synchronizes a specific set of [attributes](reference-connect-sync-attributes-synchronized.md#exchange-hybrid-writeback) from Azure AD back into your on-premises directory. |
-| Exchange mail public folders | The Exchange mail public folders feature allows you to synchronize mail-enabled public-folder objects from your on-premises instance of Active Directory to Azure AD. |
+| Exchange mail public folders | The Exchange mail public folders feature allows you to synchronize mail-enabled public-folder objects from your on-premises instance of Active Directory to Azure AD. Syncing groups that contain public folders as members isn't supported, and attempting to do so results in a synchronization error. |
| Azure AD app and attribute filtering |By enabling Azure AD app and attribute filtering, you can tailor the set of synchronized attributes. This option adds two more configuration pages to the wizard. For more information, see [Azure AD app and attribute filtering](#azure-ad-app-and-attribute-filtering). | | Password hash synchronization |If you selected federation as the sign-in solution, you can enable password hash synchronization. Then you can use it as a backup option. </br></br>If you selected pass-through authentication, you can enable this option to ensure support for legacy clients and to provide a backup.</br></br> For more information, see [Password hash synchronization](how-to-connect-password-hash-synchronization.md).| | Password writeback |Use this option to ensure that password changes that originate in Azure AD are written back to your on-premises directory. For more information, see [Getting started with password management](../authentication/tutorial-enable-sspr.md). |
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
Ensure that the following prerequisites are in place.
### In the Azure Active Directory admin center
-1. Create a cloud-only global administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
+1. Create a cloud-only global administrator account or a Hybrid Identity administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
2. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names. ### In your on-premises environment
active-directory How To Connect Pta Security Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-security-deep-dive.md
The following sections discuss these phases in detail.
### Authentication Agent installation
-Only global administrators can install an Authentication Agent (by using Azure AD Connect or standalone) on an on-premises server. Installation adds two new entries to the **Control Panel** > **Programs** > **Programs and Features** list:
+Only global administrators or Hybrid Identity administrators can install an Authentication Agent (by using Azure AD Connect or standalone) on an on-premises server. Installation adds two new entries to the **Control Panel** > **Programs** > **Programs and Features** list:
- The Authentication Agent application itself. This application runs with [NetworkService](/windows/win32/services/networkservice-account) privileges. - The Updater application that's used to auto-update the Authentication Agent. This application runs with [LocalSystem](/windows/win32/services/localsystem-account) privileges.
The Authentication Agents use the following steps to register themselves with Az
![Agent registration](./media/how-to-connect-pta-security-deep-dive/pta1.png)
-1. Azure AD first requests that a global administrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of the global administrator.
+1. Azure AD first requests that a global administrator or hybrid identity administrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of the global administrator or hybrid identity administrator.
2. The Authentication Agent then generates a key pair: a public key and a private key. - The key pair is generated through standard RSA 2048-bit encryption. - The private key stays on the on-premises server where the Authentication Agent resides.
The Authentication Agents use the following steps to register themselves with Az
- The access token acquired in step 1. - The public key generated in step 2. - A Certificate Signing Request (CSR or Certificate Request). This request applies for a digital identity certificate, with Azure AD as its certificate authority (CA).
-4. Azure AD validates the access token in the registration request and verifies that the request came from a global administrator.
+4. Azure AD validates the access token in the registration request and verifies that the request came from a global administrator or hybrid identity administrator.
5. Azure AD then signs and sends a digital identity certificate back to the Authentication Agent. - The root CA in Azure AD is used to sign the certificate.
To renew an Authentication Agent's trust with Azure AD:
- A Certificate Signing Request (CSR or Certificate Request). This request applies for a new digital identity certificate, with Azure AD as its certificate authority. 4. Azure AD validates the existing certificate in the certificate renewal request. Then it verifies that the request came from an Authentication Agent registered on your tenant. 5. If the existing certificate is still valid, Azure AD then signs a new digital identity certificate, and issues the new certificate back to the Authentication Agent.
-6. If the existing certificate has expired, Azure AD deletes the Authentication Agent from your tenant's list of registered Authentication Agents. Then a global administrator needs to manually install and register a new Authentication Agent.
+6. If the existing certificate has expired, Azure AD deletes the Authentication Agent from your tenant's list of registered Authentication Agents. Then a global administrator or hybrid identity administrator needs to manually install and register a new Authentication Agent.
- Use the Azure AD root CA to sign the certificate. - Set the certificate's subject (Distinguished Name or DN) to your tenant ID, a GUID that uniquely identifies your tenant. The DN scopes the certificate to your tenant only. 6. Azure AD stores the new public key of the Authentication Agent in a database in Azure SQL Database that only it has access to. It also invalidates the old public key associated with the Authentication Agent. 7. The new certificate (issued in step 5) is then stored on the server in the Windows certificate store (specifically in the [CERT_SYSTEM_STORE_CURRENT_USER](/windows/win32/seccrypto/system-store-locations#CERT_SYSTEM_STORE_CURRENT_USER) location).
- - Because the trust renewal procedure happens non-interactively (without the presence of the global administrator), the Authentication Agent no longer has access to update the existing certificate in the CERT_SYSTEM_STORE_LOCAL_MACHINE location.
+ - Because the trust renewal procedure happens non-interactively (without the presence of the global administrator or hybrid identity administrator), the Authentication Agent no longer has access to update the existing certificate in the CERT_SYSTEM_STORE_LOCAL_MACHINE location.
> [!NOTE] > This procedure does not remove the certificate itself from the CERT_SYSTEM_STORE_LOCAL_MACHINE location.
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
After completion of the wizard, Seamless SSO is enabled on your tenant.
Follow these instructions to verify that you have enabled Seamless SSO correctly:
-1. Sign in to the [Azure Active Directory administrative center](https://aad.portal.azure.com) with the global administrator credentials for your tenant.
+1. Sign in to the [Azure Active Directory administrative center](https://aad.portal.azure.com) with the global administrator or hybrid identity administrator credentials for your tenant.
2. Select **Azure Active Directory** in the left pane. 3. Select **Azure AD Connect**. 4. Verify that the **Seamless single sign-on** feature appears as **Enabled**.
active-directory Tshoot Connect Pass Through Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-pass-through-authentication.md
This article helps you find troubleshooting information about common issues regarding Azure AD Pass-through Authentication. > [!IMPORTANT]
-> If you are facing user sign-in issues with Pass-through Authentication, don't disable the feature or uninstall Pass-through Authentication Agents without having a cloud-only Global Administrator account to fall back on. Learn about [adding a cloud-only Global Administrator account](../fundamentals/add-users-azure-active-directory.md). Doing this step is critical and ensures that you don't get locked out of your tenant.
+> If you are facing user sign-in issues with Pass-through Authentication, don't disable the feature or uninstall Pass-through Authentication Agents without having a cloud-only Global Administrator account or a Hybrid Identity Administrator account to fall back on. Learn about [adding a cloud-only Global Administrator account](../fundamentals/add-users-azure-active-directory.md). Doing this step is critical and ensures that you don't get locked out of your tenant.
## General issues
Ensure that the server on which the Authentication Agent has been installed can
### Registration of the Authentication Agent failed due to token or account authorization errors
-Ensure that you use a cloud-only Global Administrator account for all Azure AD Connect or standalone Authentication Agent installation and registration operations. There is a known issue with MFA-enabled Global Administrator accounts; turn off MFA temporarily (only to complete the operations) as a workaround.
+Ensure that you use a cloud-only Global Administrator account or a Hybrid Identity Administrator account for all Azure AD Connect or standalone Authentication Agent installation and registration operations. There is a known issue with MFA-enabled Global Administrator accounts; turn off MFA temporarily (only to complete the operations) as a workaround.
### An unexpected error occurred
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-sso.md
If troubleshooting didn't help, you can manually reset the feature on your tenan
### Step 2: Get the list of Active Directory forests on which Seamless SSO has been enabled
-1. Run PowerShell as an administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. When prompted, enter your tenant's global administrator credentials.
+1. Run PowerShell as an administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. When prompted, enter your tenant's global administrator or hybrid identity administrator credentials.
2. Call `Get-AzureADSSOStatus`. This command provides you with the list of Active Directory forests (look at the "Domains" list) on which this feature has been enabled. ### Step 3: Disable Seamless SSO for each Active Directory forest where you've set up the feature
active-directory Tutorial Phs Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-phs-backup.md
Do the following:
1. Double-click the Azure AD Connect icon that was created on the desktop 2. Click **Configure**. 3. On the Additional tasks page, select **Customize synchronization options** and click **Next**.
-4. Enter the username and password for your global administrator. This account was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
+4. Enter the username and password for your global administrator or your hybrid identity administrator. This account was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
5. On the **Connect your directories** screen, click **Next**. 6. On the **Domain and OU filtering** screen, click **Next**. 7. On the **Optional features** screen, check **Password hash synchronization** and click **Next**.
Now, we will show you how to switch over to password hash synchronization. Befor
2. Click **Configure**. 3. Select **Change user sign-in** and click **Next**. ![Change](media/tutorial-phs-backup/backup2.png)</br>
-4. Enter the username and password for your global administrator. This account was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
+4. Enter the username and password for your global administrator or your hybrid identity administrator. This account was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
5. On the **User sign-in** screen, select **Password Hash Synchronization** and place a check in the **Do not convert user accounts** box. 6. Leave the default **Enable single sign-on** selected and click **Next**. 7. On the **Enable single sign-on** screen click **Next**.
Now, we will show you how to switch back to federation. To do this, do the foll
1. Double-click the Azure AD Connect icon that was created on the desktop 2. Click **Configure**. 3. Select **Change user sign-in** and click **Next**.
-4. Enter the username and password for your global administrator. This is the account that was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
+4. Enter the username and password for your global administrator or your hybrid identity administrator. This is the account that was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
5. On the **User sign-in** screen, select **Federation with AD FS** and click **Next**. 6. On the Domain Administrator credentials page, enter the contoso\Administrator username and password and click **Next.** 7. On the AD FS farm screen, click **Next**.
Now we need to reset the trust between AD FS and Azure.
3. Select **Manage Federation** and click **Next**. 4. Select **Reset Azure AD trust** and click **Next**. ![Reset](media/tutorial-phs-backup/backup6.png)</br>
-5. On the **Connect to Azure AD** screen enter the username and password for your global administrator.
+5. On the **Connect to Azure AD** screen enter the username and password for your global administrator or your hybrid identity administrator.
6. On the **Connect to AD FS** screen, enter the contoso\Administrator username and password and click **Next.** 7. On the **Certificates** screen, click **Next**.
You have now successfully setup a hybrid identity environment that you can use t
- [Hardware and prerequisites](how-to-connect-install-prerequisites.md) - [Express settings](how-to-connect-install-express.md)-- [Password hash synchronization](how-to-connect-password-hash-synchronization.md)
+- [Password hash synchronization](how-to-connect-password-hash-synchronization.md)
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Automation | [Azure Automation account authentication overview](../../automation/automation-security-overview.md#managed-identities) | | Azure Batch | [Configure customer-managed keys for your Azure Batch account with Azure Key Vault and Managed Identity](../../batch/batch-customer-managed-key.md) </BR> [Configure managed identities in Batch pools](../../batch/managed-identity-pools.md) | | Azure Blueprints | [Stages of a blueprint deployment](../../governance/blueprints/concepts/deployment-stages.md) |
+| Azure Cache for Redis | [Managed identity for storage accounts with Azure Cache for Redis](../../azure-cache-for-redis/cache-managed-identity.md) |
| Azure Container Instance | [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) | | Azure Container Registry | [Use an Azure-managed identity in ACR Tasks](../../container-registry/container-registry-tasks-authentication-managed-identity.md) | | Azure Cognitive Services | [Configure customer-managed keys with Azure Key Vault for Cognitive Services](../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md) |
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
na Previously updated : 04/09/2020 Last updated : 03/11/2022 -+
You can route Azure AD audit logs and sign-in logs to your Azure Storage account
* **Audit logs**: The [audit logs activity report](concept-audit-logs.md) gives you access to information about changes applied to your tenant, such as users and group management, or updates applied to your tenant's resources. * **Sign-in logs**: With the [sign-in activity report](concept-sign-ins.md), you can determine who performed the tasks that are reported in the audit logs.
-> [!NOTE]
-> B2C-related audit and sign-in activity logs are not supported at this time.
->
+ ## Prerequisites
active-directory Betterworks Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/betterworks-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with BetterWorks'
-description: Learn how to configure single sign-on between Azure Active Directory and BetterWorks.
+ Title: 'Tutorial: Azure AD SSO integration with Betterworks'
+description: Learn how to configure single sign-on between Azure Active Directory and Betterworks.
Last updated 10/07/2021
-# Tutorial: Azure AD SSO integration with BetterWorks
+# Tutorial: Azure AD SSO integration with Betterworks
-In this tutorial, you'll learn how to integrate BetterWorks with Azure Active Directory (Azure AD). When you integrate BetterWorks with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Betterworks with Azure Active Directory (Azure AD). When you integrate Betterworks with Azure AD, you can:
-* Control in Azure AD who has access to BetterWorks.
-* Enable your users to be automatically signed-in to BetterWorks with their Azure AD accounts.
+* Control in Azure AD who has access to Betterworks.
+* Enable your users to be automatically signed-in to Betterworks with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate BetterWorks with Azure Active Di
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* BetterWorks single sign-on (SSO) enabled subscription.
+* Betterworks single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* BetterWorks supports **SP and IDP** initiated SSO.
+* Betterworks supports **SP and IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Add BetterWorks from the gallery
+## Add Betterworks from the gallery
-To configure the integration of BetterWorks into Azure AD, you need to add BetterWorks from the gallery to your list of managed SaaS apps.
+To configure the integration of Betterworks into Azure AD, you need to add Betterworks from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **BetterWorks** in the search box.
-1. Select **BetterWorks** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Betterworks** in the search box.
+1. Select **Betterworks** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for BetterWorks
+## Configure and test Azure AD SSO for Betterworks
-Configure and test Azure AD SSO with BetterWorks using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in BetterWorks.
+Configure and test Azure AD SSO with Betterworks using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Betterworks.
-To configure and test Azure AD SSO with BetterWorks, perform the following steps:
+To configure and test Azure AD SSO with Betterworks, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure BetterWorks SSO](#configure-betterworks-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create BetterWorks test user](#create-betterworks-test-user)** - to have a counterpart of B.Simon in BetterWorks that is linked to the Azure AD representation of user.
+1. **[Configure Betterworks SSO](#configure-betterworks-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Betterworks test user](#create-betterworks-test-user)** - to have a counterpart of B.Simon in Betterworks that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **BetterWorks** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Betterworks** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://app.betterworks.com` > [!NOTE]
- > If you're a European Union customer of BetterWorks, please use `eu.betterworks.com` as the domain name instead of `app.betterworks.com` in these URLs.
+ > If you're a European Union customer of Betterworks, please use `eu.betterworks.com` as the domain name instead of `app.betterworks.com` in these URLs.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/metadataxml.png)
-1. On the **Set up BetterWorks** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up Betterworks** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to BetterWorks.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Betterworks.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **BetterWorks**.
+1. In the applications list, select **Betterworks**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure BetterWorks SSO
+## Configure Betterworks SSO
-To configure single sign-on on **BetterWorks** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [BetterWorks support team](mailto:support@betterworks.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Betterworks** side, send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Betterworks support team](mailto:support@betterworks.com). They use these values to configure the SAML SSO connection properly on both sides.
-### Create BetterWorks test user
+### Create Betterworks test user
-In this section, you create a user called Britta Simon in BetterWorks. Work with [BetterWorks support team](mailto:support@betterworks.com) to add the users in the BetterWorks platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Betterworks. Work with [Betterworks support team](mailto:support@betterworks.com) to add the users in the Betterworks platform. Users must be created and activated before you use single sign-on.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to BetterWorks Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to the Betterworks Sign-on URL where you can initiate the login flow.
-* Go to BetterWorks Sign-on URL directly and initiate the login flow from there.
+* Go to Betterworks Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the BetterWorks for which you set up the SSO.
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Betterworks for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the BetterWorks tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the BetterWorks for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Betterworks tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Betterworks for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure BetterWorks you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Betterworks, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
advisor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Advisor description: Sample Azure Resource Graph queries for Azure Advisor showing use of resource types and tables to access Azure Advisor related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
api-management Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-bicep.md
+
+ Title: Quickstart - Create Azure API Management instance by using Bicep
+description: Learn how to create an Azure API Management instance in the Developer tier by using Bicep.
+++
+tags: azure-resource-manager, bicep
++ Last updated : 03/10/2022++
+# Quickstart: Create a new Azure API Management service instance using Bicep
+
+This quickstart describes how to use a Bicep file to create an Azure API Management (APIM) service instance. APIM helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. APIM enables you to create and manage modern API gateways for existing backend services hosted anywhere. For more information, see the [Overview](api-management-key-concepts.md).
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-api-management-create/).
++
+The following resource is defined in the Bicep file:
+
+- **[Microsoft.ApiManagement/service](/azure/templates/microsoft.apimanagement/service)**
+
+In this example, the Bicep file configures the API Management instance in the Developer tier, an economical option to evaluate Azure API Management. This tier isn't for production use.
+
+More Azure API Management Bicep samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Apimanagement&pageNumber=1&sort=Popular).
+
+## Deploy the Bicep file
+
+You can use Azure CLI or Azure PowerShell to deploy the Bicep file. For more information about deploying Bicep files, see [Deploy](../azure-resource-manager/bicep/deploy-cli.md).
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+
+   az deployment group create --resource-group exampleRG --template-file main.bicep --parameters publisherEmail=<publisher-email> publisherName=<publisher-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -publisherEmail "<publisher-email>" -publisherName "<publisher-name>"
+ ```
+
+
+
+ Replace **\<publisher-name\>** and **\<publisher-email\>** with the name of the API publisher's organization and the email address to receive notifications.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
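+Before listing individual resources, you can optionally confirm the provisioning state of the deployment itself. This is a sketch; it assumes the default deployment name `main`, which comes from the Bicep file name used earlier.
+
+```azurecli-interactive
+# Check whether the deployment from the previous step succeeded.
+az deployment group show --resource-group exampleRG --name main --query properties.provisioningState --output tsv
+```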
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed API Management resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+When your API Management service instance is online, you're ready to use it. Start with the tutorial to [import and publish](import-and-publish.md) your first API.
+
+## Clean up resources
+
+If you plan to continue working with subsequent tutorials, you might want to leave the API Management instance in place. When no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Import and publish your first API](import-and-publish.md)
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-container-github-action.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
az ad sp create --id $appId ```
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
+   az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/
``` 1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
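Before you create the federated identity credential, you can optionally confirm that the role assignment above exists. The following check is a sketch; it assumes the same shell variables are still set.

```azurecli-interactive
# List every role assignment for the service principal in the current subscription, including sub-scopes.
az role assignment list --assignee $assigneeObjectId --all --output table
```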
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
az ad sp create --id $appId ```
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
+   az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/ --assignee-principal-type ServicePrincipal
``` 1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
app-service App Service App Service Environment Control Inbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-control-inbound-traffic.md
The following list contains the ports used by an App Service Environment. All po
* 4016: Used for remote debugging with Visual Studio 2012. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE. * 4018: Used for remote debugging with Visual Studio 2013. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE. * 4020: Used for remote debugging with Visual Studio 2015. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE.
+* 4022: Used for remote debugging with Visual Studio 2017. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE.
+* 4024: Used for remote debugging with Visual Studio 2019. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE.
+* 4026: Used for remote debugging with Visual Studio 2022. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE.
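If you don't use Visual Studio remote debugging at all, you can block the remote debugging ports listed above with a single NSG rule. The following Azure CLI sketch uses placeholder resource names and an arbitrary priority; adjust both for your environment.

```azurecli
# Deny inbound traffic to the Visual Studio remote debugging ports on the ASE subnet's NSG (placeholder names).
az network nsg rule create --resource-group <resource-group> --nsg-name <nsg-name> --name DenyVsRemoteDebug --priority 200 --direction Inbound --access Deny --protocol Tcp --destination-port-ranges 4016 4018 4020 4022 4024 4026
```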
## Outbound Connectivity and DNS Requirements For an App Service Environment to function properly, it also requires outbound access to various endpoints. A full list of the external endpoints used by an ASE is in the "Required Network Connectivity" section of the [Network Configuration for ExpressRoute](app-service-app-service-environment-network-configuration-expressroute.md#required-network-connectivity) article.
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/network-info.md
If you put a *deny everything else* rule before the default rules, you prevent t
If you assigned an IP address to your app, make sure you keep the ports open. To see the ports, select **App Service Environment** > **IP addresses**.  
-All the items shown in the following outbound rules are needed, except for the last item. They enable network access to the App Service Environment dependencies that were noted earlier in this article. If you block any of them, your App Service Environment stops working. The last item in the list enables your App Service Environment to communicate with other resources in your virtual network.
+All the items shown in the following outbound rules are needed, except for the rule named **ASE-internal-outbound**. They enable network access to the App Service Environment dependencies that were noted earlier in this article. If you block any of them, your App Service Environment stops working. The rule named **ASE-internal-outbound** in the list enables your App Service Environment to communicate with other resources in your virtual network.
![Screenshot that shows outbound security rules.][5]
+> [!NOTE]
+> The IP range in the ASE-internal-outbound rule is only an example and should be changed to match the subnet range for the App Service Environment subnet.
+ After your NSGs are defined, assign them to the subnet. If you don't remember the virtual network or subnet, you can see it from the App Service Environment portal. To assign the NSG to your subnet, go to the subnet UI and select the NSG. ## Routes
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
In addition to configuring the Health check options, you can also configure the
Health check integrates with App Service's [authentication and authorization features](overview-authentication-authorization.md). No additional settings are required if these security features are enabled.
-If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. You can secure the Health check endpoint by requiring the `User-Agent` of the incoming request matches `HealthCheck/1.0`. The User-Agent can't be spoofed since the request would already secured by prior security features.
+If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. You can secure the Health check endpoint by requiring that the `User-Agent` of the incoming request matches `HealthCheck/1.0`. The `User-Agent` can't be spoofed because the request would already be secured by the prior security features.
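For example, you could pair the `User-Agent` check with an IP-based access restriction. The following Azure CLI sketch uses placeholder app and resource group names and an example address range:

```azurecli
# Allow traffic only from a trusted address range (placeholder names and example range).
az webapp config access-restriction add --resource-group <resource-group> --name <app-name> --rule-name AllowTrustedRange --action Allow --ip-address 203.0.113.0/24 --priority 100
```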
## Monitoring
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
For more information on deployment slots, see [Set up staging environments in Az
| Setting name| Description | Example | |-|-|-| |`WEBSITE_SLOT_NAME`| Read-only. Name of the current deployment slot. The name of the production slot is `Production`. ||
-|`WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS`| By default, the versions for site extensions are specific to each slot. This prevents unanticipated application behavior due to changing extension versions after a swap. If you want the extension versions to swap as well, set to `1` on *all slots*. ||
+|`WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS`| By default, the versions for site extensions are specific to each slot. This prevents unanticipated application behavior due to changing extension versions after a swap. If you want the extension versions to swap as well, set to `0` on *all slots*. ||
|`WEBSITE_OVERRIDE_PRESERVE_DEFAULT_STICKY_SLOT_SETTINGS`| Designates certain settings as [sticky or not swappable by default](deploy-staging-slots.md#which-settings-are-swapped). Default is `true`. Set this setting to `false` or `0` for *all deployment slots* to make them swappable instead. There's no fine-grain control for specific setting types. || |`WEBSITE_SWAP_WARMUP_PING_PATH`| Path to ping to warm up the target slot in a swap, beginning with a slash. The default is `/`, which pings the root path over HTTP. | `/statuscheck` | |`WEBSITE_SWAP_WARMUP_PING_STATUSES`| Valid HTTP response codes for the warm-up operation during a swap. If the returned status code isn't in the list, the warmup and swap operations are stopped. By default, all response codes are valid. | `200,202` |
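Because these are ordinary app settings, you can also apply them from the command line. The following sketch uses placeholder names and sets the extension-version override on one slot; repeat it for every slot as noted above.

```azurecli
# Make site extension versions swappable by setting the override on a specific slot (placeholder names).
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --slot <slot-name> --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0
```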
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
This behavior can occur for one or more of the following reasons:
d. If an NSG is configured, search for that NSG resource on the **Search** tab or under **All resources**.
- e. In the **Inbound Rules** section, add an inbound rule to allow destination port range 65503-65534 for v1 SKU or 65200-65535 v2 SKU with the **Source** set as **Any** or **Internet**.
+   e. In the **Inbound Rules** section, add an inbound rule to allow destination port range 65503-65534 for the v1 SKU or 65200-65535 for the v2 SKU, with the **Source** set to the **GatewayManager** service tag.
f. Select **Save** and verify that you can view the backend as Healthy. Alternatively, you can do that through [PowerShell/CLI](../virtual-network/manage-network-security-group.md).
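If you prefer to script the NSG change, the following Azure CLI sketch shows the v2 SKU variant with placeholder resource names and an arbitrary priority:

```azurecli
# Allow the Application Gateway v2 management ports from the GatewayManager service tag (placeholder names).
az network nsg rule create --resource-group <resource-group> --nsg-name <nsg-name> --name AllowGatewayManager --priority 110 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes GatewayManager --destination-port-ranges 65200-65535
```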
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
Previously updated : 11/02/2021 Last updated : 03/11/2022 recommendations: false
The business card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from business card images. The API analyzes printed business cards; extracts key information such as first name, last name, company name, email address, and phone number; and returns a structured JSON data representation.
-***Sample business card processed with Form Recognizer Studio***
+***Sample business card processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***
:::image type="content" source="./media/studio/overview-business-card-studio.png" alt-text="sample business card" lightbox="./media/overview-business-card.jpg":::
See how data, including name, job title, address, email, and company name, is ex
#### Sample Labeling tool
-You will need a business card document. You can use our [sample business card document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/businessCard.png).
+You'll need a business card document. You can use our [sample business card document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/businessCard.png).
1. On the Sample Labeling tool home page, select **Use prebuilt model to get data**.
You will need a business card document. You can use our [sample business card do
* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location. * For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed). * The file size must be less than 50 MB.
-* Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels.
+* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Previously updated : 11/02/2021 Last updated : 03/11/2022 recommendations: false
The ID document model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from US driver's licenses (all 50 states and the District of Columbia) and international passport biographical pages (visas and other travel documents are excluded). The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
-***Sample U.S. Driver's License processed with Form Recognizer Studio***
+***Sample U.S. Driver's License processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***
:::image type="content" source="media/studio/analyze-drivers-license.png" alt-text="Image of a sample driver's license." lightbox="media/overview-id.jpg":::
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Previously updated : 11/02/2021 Last updated : 03/11/2022 recommendations: false
The Form Recognizer Layout API extracts text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
-***Sample form processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/) layout feature***
+***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
**Data extraction features**
You'll need a form document. You can use our [sample form document](https://raw.
* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location. * For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed). * The file size must be less than 50 MB.
-* Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels.
+* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer Service. ## Supported languages and locales
- Form Recognizer preview version introduces additional language support for the layout model. *See* our [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
+ Form Recognizer preview version introduces additional language support for the layout model. *See* [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
## Features ### Tables and table headers
-Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Tables can be complex with merged cells or columns, with or without borders, and with odd angles. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding box is output along with information whether it's recognized as part of a header or not. The model predicted header cells can span multiple rows and are not necessarily the first rows in a table. They also work with rotated tables. Each table cell also includes the full text with references to the individual words in the `readResults` section.
+Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Tables can be complex with merged cells or columns, with or without borders, and with odd angles. Extracted table information includes the number of columns and rows, row span, and column span. Each cell is output with its bounding box, along with an indication of whether it's recognized as part of a header. The model-predicted header cells can span multiple rows and aren't necessarily the first rows in a table. They also work with rotated tables. Each table cell also includes the full text with references to the individual words in the `readResults` section.
:::image type="content" source="./media/layout-table-headers-example.png" alt-text="Layout table headers output":::
Layout API also extracts selection marks from documents. Extracted selection mar
### Text lines and words
-Layout API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted with information provided on lines, words, bounding boxes, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
+Layout API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted with information provided in lines, words, and bounding boxes. All the text information is included in the `readResults` section of the JSON output.
:::image type="content" source="./media/layout-text-extraction.png" alt-text="Layout text extraction output":::
Layout API extracts text from documents and images with multiple text angles and
In Form Recognizer v2.1, you can specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
-In Form Recognizer v3.0, the natural reading order output is used by the service in all cases. Therefore, there is no `readingOrder` parameter provided in this version.
+In Form Recognizer v3.0, the natural reading order output is used by the service in all cases. Therefore, there's no `readingOrder` parameter provided in this version.
### Handwritten classification for text lines (Latin only)
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Previously updated : 11/02/2021 Last updated : 03/11/2022 recommendations: false
The receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns a structured JSON data representation.
-***Sample receipt processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)***:
+***Sample receipt processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
:::image type="content" source="media/studio/overview-receipt.png" alt-text="sample receipt" lightbox="media/overview-receipt.jpg":::
applied-ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities.md
To get started, you'll need:
## Managed identity assignments
-There are two types of managed identity: **system-assigned** and **user-assigned**. Currently, Form Recognizer supports system-assigned managed identity:
+There are two types of managed identity: **system-assigned** and **user-assigned**. Currently, Form Recognizer only supports system-assigned managed identity:
* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
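You can also enable the system-assigned identity from the command line. The following Azure CLI sketch uses placeholder names for a Form Recognizer (Cognitive Services) resource:

```azurecli
# Turn on the system-assigned managed identity for the resource (placeholder names).
az cognitiveservices account identity assign --name <form-recognizer-resource-name> --resource-group <resource-group>
```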
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
sudo python onboarding.py --deregister --endpoint="<URL>" --key="<PrimaryAccessK
> [!NOTE] > - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role. </br> > - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
## Remove a Hybrid Worker group
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md
Modules that are installed must be in a location referenced by the `PSModulePath
Remove-HybridRunbookWorker -Url <URL> -Key <primaryAccessKey> -MachineName <computerName> ``` > [!NOTE]
-> After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
## Remove a Hybrid Worker group
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-app-configuration Use Key Vault References Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
To add a secret to the vault, you need to take just a few additional steps. In t
```azurecli az ad sp show --id <clientId-of-your-service-principal>
- az role assignment create --role "App Configuration Data Reader" --assignee-object-id <objectId-of-your-service-principal> --resource-group <your-resource-group>
+   az role assignment create --role "App Configuration Data Reader" --scope /subscriptions/<subscriptionId>/resourceGroups/<group-name> --assignee-principal-type ServicePrincipal --assignee-object-id <objectId-of-your-service-principal>
``` 1. Create the environment variables **AZURE_CLIENT_ID**, **AZURE_CLIENT_SECRET**, and **AZURE_TENANT_ID**. Use the values for the service principal that were displayed in the previous steps. At the command line, run the following commands and restart the command prompt to allow the change to take effect:
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
To create the Azure Arc data controller using Kubernetes tools you will need to
### Cleanup from past installations
-If you installed the Azure Arc data controller in the past on the same cluster and deleted the Azure Arc data controller, there may be some cluster level objects that would still need to be deleted. Run the following commands to delete the Azure Arc data controller cluster level objects:
+If you installed the Azure Arc data controller in the past on the same cluster and deleted the Azure Arc data controller, there may be some cluster level objects that would still need to be deleted.
+
+For some of the tasks, you'll need to replace `{namespace}` with the name of the namespace the data controller was deployed in. If you're not sure which namespace that is, get the name of the mutating webhook configuration by running `kubectl get mutatingwebhookconfiguration`.
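+A quick way to check (the exact configuration name varies by deployment, but it includes the namespace):
+
+```console
+# List mutating webhook configurations; the Azure Arc data services entry contains the namespace in its name.
+kubectl get mutatingwebhookconfiguration
+```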
+
+Run the following commands to delete the Azure Arc data controller cluster level objects:
```console # Cleanup azure arc data service artifacts
kubectl delete crd sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.co
kubectl delete crd dags.sql.arcdata.microsoft.com kubectl delete crd exporttasks.tasks.arcdata.microsoft.com kubectl delete crd monitors.arcdata.microsoft.com
+kubectl delete crd activedirectoryconnectors.arcdata.microsoft.com
+
+# Substitute the name of the namespace the data controller was deployed in into {namespace}.
# Cluster roles and role bindings kubectl delete clusterrole arcdataservices-extension
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022 #
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v
Information on how OSM issues and manages certificates to Envoy proxies running on application pods can be found on the [OSM docs site](https://docs.openservicemesh.io/docs/guides/certificates/). ### 14. Upgrade Envoy
-When a new pod is created in a namespace monitored by the add-on, OSM will inject an [envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the envoy version needs to be updated, steps to do so can be found in the [Upgrade Guide](https://docs.openservicemesh.io/docs/getting_started/upgrade/#envoy) on the OSM docs site.
+When a new pod is created in a namespace monitored by the add-on, OSM will inject an [envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the envoy version needs to be updated, steps to do so can be found in the [Upgrade Guide](https://release-v0-11.docs.openservicemesh.io/docs/getting_started/upgrade/#envoy) on the OSM docs site.
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
For usage details, see the following documents:
* [Flux Kustomize controller](https://fluxcd.io/docs/components/kustomize/) * [Kustomize reference documents](https://kubectl.docs.kubernetes.io/references/kustomize/) * [The kustomization file](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/)
-* [Kustomize project](https://kubernetes-sigs.github.io/kustomize/)
+* [Kustomize project](https://kubectl.docs.kubernetes.io/references/kustomize/)
* [Kustomize guides](https://kubectl.docs.kubernetes.io/guides/config_management/) ## Manage Helm chart releases by using the Flux Helm controller
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc description: Sample Azure Resource Graph queries for Azure Arc showing use of resource types and tables to access Azure Arc related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled servers description: Sample Azure Resource Graph queries for Azure Arc-enabled servers showing use of resource types and tables to access Azure Arc-enabled servers related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-cache-for-redis Cache Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-managed-identity.md
Title: Managed Identity
+ Title: Managed identity for storage accounts
description: Learn to Azure Cache for Redis Previously updated : 01/21/2022 Last updated : 03/10/2022 +
-# Managed identity with Azure Cache for Redis (Preview)
+# Managed identity for storage (Preview)
[Managed identities](../active-directory/managed-identities-azure-resources/overview.md) are a common tool used in Azure to help developers minimize the burden of managing secrets and login information. Managed identities are useful when Azure services connect to each other. Instead of managing authorization between each service, [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) can be used to provide a managed identity that makes the authentication process more streamlined and secure.
-## Managed identity with storage accounts
+## Use managed identity with storage accounts
-Azure Cache for Redis can use a managed identity to connect with a storage account, useful in two scenarios:
+Presently, Azure Cache for Redis can use a managed identity to connect to a storage account, which is useful in two scenarios:
- [Data Persistence](cache-how-to-premium-persistence.md)--scheduled backups of data in your cache through an RDB or AOF file.
Set-AzRedisCache -ResourceGroupName \"MyGroup\" -Name \"MyCache\" -IdentityType
## Next steps - [Learn more](cache-overview.md#service-tiers) about Azure Cache for Redis features-- [What are managed identifies](../active-directory/managed-identities-azure-resources/overview.md)
+- [What are managed identities](../active-directory/managed-identities-azure-resources/overview.md)
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
Depending on your use case, Durable Functions may significantly improve scalabil
### Considerations for using concurrency
-PowerShell is a _single threaded_ scripting language by default. However, concurrency can be added by using multiple PowerShell runspaces in the same process. The amount of runspaces created will match the ```PSWorkerInProcConcurrencyUpperBound``` application setting. The throughput will be impacted by the amount of CPU and memory available in the selected plan.
+PowerShell is a _single threaded_ scripting language by default. However, concurrency can be added by using multiple PowerShell runspaces in the same process. The number of runspaces created, and therefore the number of concurrent threads per worker, is limited by the ```PSWorkerInProcConcurrencyUpperBound``` application setting. By default, the number of runspaces is set to 1,000 in version 4.x of the Functions runtime. In versions 3.x and below, the maximum number of runspaces is set to 1. The throughput will be impacted by the amount of CPU and memory available in the selected plan.
Azure PowerShell uses some _process-level_ contexts and state to help save you from excess typing. However, if you turn on concurrency in your function app and invoke actions that change state, you could end up with race conditions. These race conditions are difficult to debug because one invocation relies on a certain state and the other invocation changed the state.
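If you do decide to raise or lower the concurrency limit, one way to do it is to change the app setting directly. The following Azure CLI sketch uses placeholder names and an arbitrary value:

```azurecli
# Cap the number of PowerShell runspaces (and therefore concurrent invocations) per worker (placeholder names, example value).
az functionapp config appsettings set --resource-group <resource-group> --name <function-app-name> --settings PSWorkerInProcConcurrencyUpperBound=10
```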
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
The Azure Monitor agent is implemented as an [Azure VM extension](../../virtual-
|:|:|:| | Publisher | Microsoft.Azure.Monitor | Microsoft.Azure.Monitor | | Type | AzureMonitorWindowsAgent | AzureMonitorLinuxAgent |
-| TypeHandlerVersion | 1.0 | 1.5 |
+| TypeHandlerVersion | 1.2 | 1.15 |
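If you install the extension manually rather than through a data collection rule association, the following Azure CLI sketch (VM and resource group names are placeholders) deploys the Windows agent; substitute `AzureMonitorLinuxAgent` for a Linux VM:

```azurecli
# Install the Azure Monitor agent VM extension and let the platform upgrade it automatically (placeholder names).
az vm extension set --resource-group <resource-group> --vm-name <vm-name> --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --enable-auto-upgrade true
```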
## Extension versions We strongly recommend that you update to the generally available versions listed as follows instead of using preview or intermediate versions.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Since you're charged for any data collected in a Log Analytics workspace, you sh
To specify additional filters, you must use Custom configuration and specify an XPath that filters out the events you don't need. XPath entries are written in the form `LogName!XPathQuery`. For example, you may want to return only events from the Application event log with an event ID of 1035. The XPathQuery for these events would be `*[System[EventID=1035]]`. Since you want to retrieve the events from the Application event log, the XPath would be `Application!*[System[EventID=1035]]`.
+### Extracting XPath queries from Windows Event Viewer
+One way to create XPath queries is to use Windows Event Viewer to extract them, as shown below.
+*In step 5, when pasting over the 'Select Path' parameter value, you must add the log type category followed by '!', and then paste the copied value.
+
+[![Extract XPath](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
+ See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log. > [!TIP]
-> Use this **shortcut** to create syntactically correct XPath queries: [Extract XPath queries from Windows Event Viewer](https://azurecloudai.blog/2021/08/10/shortcut-way-to-create-your-xpath-queries-for-azure-sentinel-dcrs/)
->
-> Alternatively you can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery. The following script shows an example.
+> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery locally on your machine first. The following script shows an example.
> > ```powershell > $XPath = '*[System[EventID=1035]]'
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.2.7.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.7/applicationinsights-agent-3.2.7.jar) file.
+Download the [applicationinsights-agent-3.2.8.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.8/applicationinsights-agent-3.2.8.jar) file.
> [!WARNING] >
Download the [applicationinsights-agent-3.2.7.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to your application's JVM args.
+Add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to your application
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=... ```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.7.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.8.jar` with the following content:
```json {
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Configure [App Services](../../app-service/configure-language-java.md#set-java-r
## Spring Boot
-Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` somewhere before `-jar`, for example:
```
-java -javaagent:path/to/applicationinsights-agent-3.2.7.jar -jar <myapp.jar>
+java -javaagent:path/to/applicationinsights-agent-3.2.8.jar -jar <myapp.jar>
``` ## Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.7.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.8.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.7.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.8.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.7.jar -jar <myapp.jar>
+ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.8.jar -jar <myapp.jar>
``` ## Tomcat 8 (Linux)
ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.7.jar -jar <mya
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.8.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.8.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.7.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.8.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.8.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.7.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.8.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.2.7.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.2.8.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec--javaagent:path/to/applicationinsights-agent-3.2.7.jar
+-javaagent:path/to/applicationinsights-agent-3.2.8.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.2.7.jar>
+    -javaagent:path/to/applicationinsights-agent-3.2.8.jar
</jvm-options> ... </java-config>
Java and Process Management > Process definition > Java Virtual Machine
``` In "Generic JVM arguments" add the following: ```--javaagent:path/to/applicationinsights-agent-3.2.7.jar
+-javaagent:path/to/applicationinsights-agent-3.2.8.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```
--javaagent:path/to/applicationinsights-agent-3.2.7.jar
+-javaagent:path/to/applicationinsights-agent-3.2.8.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.7.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.8.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.7.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.8.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
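As a minimal sketch (placeholder paths, not values from this article), the environment variable can be set from PowerShell before launching the application with the agent attached:

```PowerShell
# Placeholder paths; the variable is read by the agent when the JVM starts.
$env:APPLICATIONINSIGHTS_CONFIGURATION_FILE = "C:\config\applicationinsights.json"

java "-javaagent:C:\agents\applicationinsights-agent-3.2.8.jar" -jar .\app.jar
```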
You can also set the connection string using the environment variable `APPLICATI
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.7.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.8.jar` is located.
```json {
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from 3.2.7, you can capture request and response headers on your server (request) telemetry:
+Starting from 3.2.8, you can capture request and response headers on your server (request) telemetry:
```json {
Again, the header names are case-insensitive, and the examples above will be cap
By default, http server requests that result in 4xx response codes are captured as errors.
-Starting from version 3.2.7, you can change this behavior to capture them as success if you prefer:
+Starting from version 3.2.8, you can change this behavior to capture them as success if you prefer:
```json {
Starting from version 3.2.0, the following preview instrumentations can be enabl
``` > [!NOTE] > Akka instrumentation is available starting from version 3.2.2
-> Vertx HTTP Library instrumentation is available starting from version 3.2.7
+> Vertx HTTP Library instrumentation is available starting from version 3.2.8
## Metric interval
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.2.7.jar` is located.
+`applicationinsights-agent-3.2.8.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over. `maxHistory` is the number of rolled over log files that are retained (in addition to the current log file).
-Starting from version 3.0.2, you can also set the self-diagnostics `level` using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`
+Starting from version 3.0.2, you can also set the self-diagnostics `level` using the environment variable
+`APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`
(which will then take precedence over self-diagnostics level specified in the json configuration).
+And starting from version 3.0.3, you can also set the self-diagnostics file location using the environment variable
+`APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_FILE_PATH`
+(which will then take precedence over self-diagnostics file path specified in the json configuration).
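As a small sketch (placeholder values), both variables can be set from PowerShell before the Java process starts:

```PowerShell
# Placeholder values; these override the corresponding selfDiagnostics settings
# in the JSON configuration.
$env:APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL = "DEBUG"
$env:APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_FILE_PATH = "C:\logs\applicationinsights.log"
```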
+ ## An example This is just an example to show what a configuration file looks like with multiple components.
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this article, we cover some of the common issues that you might face while in
## Check the self-diagnostic log file By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory
-that holds the `applicationinsights-agent-3.2.7.jar` file.
+that holds the `applicationinsights-agent-3.2.8.jar` file.
This log file is the first place to check for hints to any issues you might be experiencing. If no log file is generated, check that your Java application has write permission to the directory that holds the
-`applicationinsights-agent-3.2.7.jar` file.
+`applicationinsights-agent-3.2.8.jar` file.
If still no log file is generated, check the stdout log from your Java application. Application Insights Java 3.x should log any errors to stdout that would prevent it from logging to its normal location.
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
You receive the error message as seen below:
*Predictive autoscale is based on the metric percentage CPU of the current resource. Choose this metric in the scale up trigger rules*. This message means you attempted to enable predictive autoscale before you enabled standard autoscale and set it up to use the *Percentage CPU* metric with the *Average* aggregation type.
You won't see data on the predictive charts under certain conditions. This isn'
When predictive autoscale is disabled, you instead receive a message beginning with "No data to show..." and giving you instructions on what to enable so you can see a predictive chart.
- :::image type="content" source="media/autoscale-predictive/message-no-data-to-show-11.png" alt-text="Screenshot of message No data to show":::
+ :::image type="content" source="media/autoscale-predictive/error-no-data-to-show.png" alt-text="Screenshot of message No data to show":::
When you first create a virtual machine scale set and enable forecast only mode, you receive a message telling you "Predictive data is being trained.." and a time to return to see the chart.
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
+
+ Title: Troubleshoot Application Change Analysis - Azure Monitor
+description: Learn how to troubleshoot problems in Application Change Analysis.
+++
+ms.contributor: cawa
Last updated : 03/11/2022 ++++
+# Troubleshoot Application Change Analysis (preview)
+
+## Trouble registering the Microsoft.ChangeAnalysis resource provider from the Change history tab.
+
+If you're viewing Change history after its first integration with Application Change Analysis, you'll see it automatically registering the **Microsoft.ChangeAnalysis** resource provider. The registration may fail with the following error messages:
+
+### You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider.
+You're receiving this error message because your role in the current subscription is not associated with the **Microsoft.Support/register/action** scope. For example, you are not the owner of your subscription and instead received shared access permissions through a coworker (like view access to a resource group).
+
+To resolve the issue, contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider.
+1. In the Azure portal, search for **Subscriptions**.
+1. Select your subscription.
+1. Navigate to **Resource providers** under **Settings** in the side menu.
+1. Search for **Microsoft.ChangeAnalysis** and register via the UI, Azure PowerShell, or Azure CLI.
+
+ Example for registering the resource provider through PowerShell:
+ ```PowerShell
+ # Register resource provider
+ Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
+ ```
+
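After registering (through the portal or the PowerShell example above), you can optionally confirm the provider's state. A minimal check, assuming the `Az.Resources` module is installed:

```PowerShell
# Expect RegistrationState to read "Registered" once the operation completes.
Get-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis" |
    Select-Object ProviderNamespace, RegistrationState
```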
+### Failed to register Microsoft.ChangeAnalysis resource provider.
+This error is most likely caused by a temporary internet connectivity issue, because:
+* The UI sent the resource provider registration request.
+* You've resolved your [permissions issue](#you-dont-have-enough-permissions-to-register-microsoftchangeanalysis-resource-provider).
+
+Try refreshing the page and checking your internet connection. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+
+### This is taking longer than expected.
+You'll receive this error message when the registration takes longer than 2 minutes. While unusual, it doesn't mean something went wrong. Restart your web app to see your registration changes. Changes should show up within a few hours of app restart.
+
+If your changes still don't show after 6 hours, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+
+## Azure Lighthouse subscription is not supported.
+
+### Failed to query Microsoft.ChangeAnalysis resource provider.
+Often, this message includes: `Azure Lighthouse subscription is not supported, the changes are only available in the subscription's home tenant`.
+
+Currently, registering the Change Analysis resource provider through an Azure Lighthouse subscription isn't supported for users outside of the home tenant. We're working on addressing this limitation.
+
+If this is a blocking issue for you, we can provide a workaround that involves creating a service principal and explicitly assigning the role to allow the access. Contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com) to learn more about it.
+
+## An error occurred while getting changes. Please refresh this page or come back later to view changes.
+
+When changes can't be loaded, the Application Change Analysis service presents this general error message. A few known causes are:
+
+- Internet connectivity error from the client device.
+- Change Analysis service being temporarily unavailable.
+
+Refreshing the page after a few minutes usually fixes this issue. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+
+## You don't have enough permissions to view some changes. Contact your Azure subscription administrator.
+
+This general unauthorized error message occurs when the current user doesn't have sufficient permissions to view the change. At a minimum:
+* To view infrastructure changes returned by Azure Resource Graph and Azure Resource Manager, Reader access is required.
+* For web app in-guest file changes and configuration changes, the Contributor role is required (a role-assignment sketch follows this list).
+
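If you need to grant the missing access, the following sketch uses Azure PowerShell; the user, web app, and resource group names are placeholders rather than values from this article:

```PowerShell
# Grant Reader at the resource group scope so infrastructure changes are visible.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Reader" `
    -ResourceGroupName "myResourceGroup"

# Grant Contributor on a specific web app so in-guest file and configuration
# changes are visible.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -ResourceName "myWebApp" `
    -ResourceType "Microsoft.Web/sites" `
    -ResourceGroupName "myResourceGroup"
```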
+## Cannot see in-guest changes for newly enabled Web App.
+
+You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+
+## Diagnose and solve problems tool for virtual machines
+
+To troubleshoot virtual machine issues using the troubleshooting tool in the Azure portal:
+1. Navigate to your virtual machine.
+1. Select **Diagnose and solve problems** from the side menu.
+1. Browse and select the troubleshooting tool that fits your issue.
+
+![Screenshot of the Diagnose and Solve Problems tool for a Virtual Machine with Troubleshooting tools selected.](./media/change-analysis/vm-dnsp-troubleshootingtools.png)
+
+![Screenshot of the tile for the Analyze recent changes troubleshooting tool for a Virtual Machine.](./media/change-analysis/analyze-recent-changes.png)
+++
+## Next steps
+
+Learn more about [Azure Resource Graph](../../governance/resource-graph/overview.md), which helps power Change Analysis.
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
+
+ Title: Visualizations for Application Change Analysis - Azure Monitor
+description: Learn how to use visualizations in Application Change Analysis in Azure Monitor.
+++
+ms.contributor: cawa
Last updated : 03/11/2022++++
+# Visualizations for Application Change Analysis (preview)
+
+## Standalone UI
+
+Change Analysis lives in a standalone pane under Azure Monitor, where you can view all changes and application dependency/resource insights. You can access Change Analysis through a couple of entry points:
+
+In the Azure portal, search for Change Analysis to launch the experience.
+
+
+Select one or more subscriptions to view:
+- All of its resources' changes from the past 24 hours.
+- Old and new values to provide insights at one glance.
+
+
+Click into a change to view the full Resource Manager snippet and other properties.
+
+
+Send any feedback to the [Change Analysis team](mailto:changeanalysisteam@microsoft.com) from the Change Analysis blade:
+++
+### Multiple subscription support
+
+The UI supports selecting multiple subscriptions to view resource changes. Use the subscription filter:
++
+## Diagnose and solve problems tool
+
+Application Change Analysis is:
+- A standalone detector in the Web App **Diagnose and solve problems** tool.
+- Aggregated in the **Application Crashes** and **Web App Down** detectors.
+
+From your resource's overview page in the Azure portal, select **Diagnose and solve problems** from the left menu. As you enter the Diagnose and solve problems tool, the **Microsoft.ChangeAnalysis** resource provider is automatically registered.
+
+### Diagnose and solve problems tool for Web App
+
+> [!NOTE]
+> You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+
+1. Select **Availability and Performance**.
+
+ :::image type="content" source="./media/change-analysis/availability-and-performance.png" alt-text="Screenshot of the Availability and Performance troubleshooting options":::
+
+2. Select **Application Changes (Preview)**. The feature is also available in **Application Crashes**.
+
+ :::image type="content" source="./media/change-analysis/application-changes.png" alt-text="Screenshot of the Application Crashes button":::
+
+ The link leads to Application Change Analysis UI scoped to the web app.
+
+3. Enable web app in-guest change tracking if you haven't already.
+
+ :::image type="content" source="./media/change-analysis/enable-changeanalysis.png" alt-text="Screenshot of the Application Crashes options":::
+
+4. Toggle on **Change Analysis** status and select **Save**.
+
+ :::image type="content" source="./media/change-analysis/change-analysis-on.png" alt-text="Screenshot of the Enable Change Analysis user interface":::
+
+ - The tool displays all web apps under an App Service plan, which you can toggle on and off individually.
+
+ :::image type="content" source="./media/change-analysis/change-analysis-on-2.png" alt-text="Screenshot of the Enable Change Analysis user interface expanded":::
++
+You can also view change data via the **Web App Down** and **Application Crashes** detectors. The graph summarizes:
+- The change types over time.
+- Details on those changes.
+
+By default, the graph displays changes from the past 24 hours to help with immediate problems.
++
+### Diagnose and solve problems tool for Virtual Machines
+
+Change Analysis displays as an insight card in your virtual machine's **Diagnose and solve problems** tool. The insight card shows the number of changes or issues a resource experienced within the past 72 hours.
+
+1. Within your virtual machine, select **Diagnose and solve problems** from the left menu.
+1. Go to **Troubleshooting tools**.
+1. Scroll to the end of the troubleshooting options and select **Analyze recent changes** to view changes on the virtual machine.
+
+ :::image type="content" source="./media/change-analysis/vm-dnsp-troubleshootingtools.png" alt-text="Screenshot of the VM Diagnose and Solve Problems":::
+
+ :::image type="content" source="./media/change-analysis/analyze-recent-changes.png" alt-text="Change analyzer in troubleshooting tools":::
+
+### Diagnose and solve problems tool for Azure SQL Database and other resources
+
+You can view Change Analysis data for [multiple Azure resources](./change-analysis.md#supported-resource-types), but we highlight Azure SQL Database below.
+
+1. Within your resource, select **Diagnose and solve problems** from the left menu.
+1. Under **Common problems**, select **View change details** to view the filtered view from Change Analysis standalone UI.
+
+ :::image type="content" source="./media/change-analysis/change-insight-diagnose-and-solve.png" alt-text="Screenshot of viewing common problems in Diagnose and Solve Problems tool.":::
+
+## Activity Log change history
+
+Use the [View change history](../essentials/activity-log.md#view-change-history) feature to call the Application Change Analysis service backend to view changes associated with an operation. Changes returned include:
+- Resource level changes from [Azure Resource Graph](../../governance/resource-graph/overview.md).
+- Resource properties from [Azure Resource Manager](../../azure-resource-manager/management/overview.md).
+- In-guest changes from PaaS services, such as App Services web app.
+
+1. From within your resource, select **Activity Log** from the side menu.
+1. Select a change from the list.
+1. Select the **Change history (Preview)** tab.
+1. For the Application Change Analysis service to scan for changes in users' subscriptions, a resource provider needs to be registered. When you select the **Change history (Preview)** tab, the tool automatically registers the **Microsoft.ChangeAnalysis** resource provider.
+1. Once registered, you can immediately view changes from **Azure Resource Graph** for the past 14 days.
+    - Changes from other sources become available about four hours after the subscription is onboarded.
+
+ :::image type="content" source="./media/change-analysis/activity-log-change-history.png" alt-text="Activity Log change history integration":::
+
+## VM Insights integration
+
+If you've enabled [VM Insights](../vm/vminsights-overview.md), you can view changes in your virtual machines that may have caused any spikes in a metric chart, such as CPU or Memory.
+
+1. Within your virtual machine, select **Insights** from under **Monitoring** in the left menu.
+1. Select the **Performance** tab.
+1. Expand the property panel.
+
+ :::image type="content" source="./media/change-analysis/vm-insights.png" alt-text="Virtual machine insights performance and property panel.":::
+
+1. Select the **Changes** tab.
+1. Select the **Investigate Changes** button to view change details in the Application Change Analysis standalone UI.
+
+ :::image type="content" source="./media/change-analysis/vm-insights-2.png" alt-text="View of the property panel, selecting Investigate Changes button.":::
+
+## Drill to Change Analysis logs
+
+You can also drill to Change Analysis logs via a chart you've created or pinned to your resource's **Monitoring** dashboard.
+
+1. Navigate to the resource for which you'd like to view Change Analysis logs.
+1. On the resource's overview page, select the **Monitoring** tab.
+1. Select a chart from the **Key Metrics** dashboard.
+
+ :::image type="content" source="./media/change-analysis/view-change-analysis-1.png" alt-text="Chart from the Monitoring tab of the resource.":::
+
+1. From the chart, select **Drill into logs** and choose **Change Analysis** to view it.
+
+ :::image type="content" source="./media/change-analysis/view-change-analysis-2.png" alt-text="Drill into logs and select to view Change Analysis.":::
+
+## Next steps
+
+- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
+
+ Title: Use Application Change Analysis in Azure Monitor to find web-app issues | Microsoft Docs
+description: Use Application Change Analysis in Azure Monitor to troubleshoot application issues on live sites on Azure App Service.
+++
+ms.contributor: cawa
Last updated : 03/11/2022 ++++
+# Use Application Change Analysis in Azure Monitor (preview)
+
+While standard monitoring solutions might alert you to a live site issue, outage, or component failure, they often don't explain the cause. For example, your site worked five minutes ago, and now it's broken. What changed in the last five minutes?
+
+We've designed Application Change Analysis to answer that question in Azure Monitor.
+
+Building on the power of [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis:
+- Provides insights into your Azure application changes.
+- Increases observability.
+- Reduces mean time to repair (MTTR).
+
+> [!IMPORTANT]
+> Change Analysis is currently in preview. This version:
+>
+> - Is provided without a service-level agreement.
+> - Is not recommended for production workloads.
+> - Includes unsupported features and might have constrained capabilities.
+>
+> For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Overview
+
+Change Analysis detects various types of changes, from the infrastructure layer through application deployment. Change Analysis is a subscription-level Azure resource provider that:
+- Checks resource changes in the subscription.
+- Provides data for various diagnostic tools to help users understand what changes might have caused issues.
+
+The following diagram illustrates the architecture of Change Analysis:
+
+![Architecture diagram of how Change Analysis gets change data and provides it to client tools](./media/change-analysis/overview.png)
+
+## Supported resource types
+
+The Application Change Analysis service supports resource property-level changes in all Azure resource types, including common resources like:
+- Virtual Machine
+- Virtual machine scale set
+- App Service
+- Azure Kubernetes Service (AKS)
+- Azure Function
+- Networking resources:
+ - Network Security Group
+ - Virtual Network
+ - Application Gateway, etc.
+- Data
+ - Storage
+ - SQL
+ - Redis Cache
+ - Cosmos DB, etc.
+
+## Data sources
+
+Application Change Analysis queries for:
+- Azure Resource Manager tracked properties.
+- Proxied configurations.
+- Web app in-guest changes.
+
+Change Analysis also tracks resource dependency changes to diagnose and monitor an application end-to-end.
+
+### Azure Resource Manager tracked properties changes
+
+Using [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis provides a historical record of how the Azure resources that host your application have changed over time. The following tracked settings can be detected:
+- Managed identities
+- Platform OS upgrade
+- Hostnames
+
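As a hedged illustration, the same change history can be queried directly with the `Az.ResourceGraph` PowerShell module. The table and property names below follow the public `resourcechanges` Resource Graph schema, not this article, so treat them as assumptions:

```PowerShell
# Requires: Install-Module Az.ResourceGraph
# Lists resource changes from the last day.
Search-AzGraph -Query @"
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp)
| where changeTime > ago(1d)
| project changeTime,
          changeType = tostring(properties.changeType),
          targetResourceId = tostring(properties.targetResourceId)
| order by changeTime desc
"@
```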
+### Azure Resource Manager proxied setting changes
+
+Unlike Azure Resource Graph, Change Analysis securely queries and computes IP Configuration rules, TLS settings, and extension versions to provide more change details in the app.
+
+### Changes in web app deployment and configuration (in-guest changes)
+
+Every 30 minutes, Change Analysis captures the deployment and configuration state of an application. For example, it can detect changes in the application environment variables. The tool computes the differences and presents the changes.
+
+Unlike Azure Resource Manager changes, code deployment change information might not be available immediately in the Change Analysis tool. To view the latest changes in Change Analysis, select **Refresh**.
++
+If you don't see changes within 30 minutes, refer to [our troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+
+Currently, all text-based files under site root **wwwroot** with the following extensions are supported:
+- *.json
+- *.xml
+- *.ini
+- *.yml
+- *.config
+- *.properties
+- *.html
+- *.cshtml
+- *.js
+- requirements.txt
+- Gemfile
+- Gemfile.lock
+- config.gemspec
+
+### Dependency changes
+
+Changes to resource dependencies can also cause issues in a resource. For example, if a web app calls into a Redis cache, the Redis cache SKU could affect the web app performance.
+
+As another example, closing port 22 in a virtual machine's Network Security Group causes connectivity errors.
+
+#### Web App diagnose and solve problems navigator (Preview)
+
+To detect changes in dependencies, Change Analysis checks the web app's DNS record. In this way, it identifies changes in all app components that could cause issues.
+
+Currently the following dependencies are supported in **Web App Diagnose and solve problems | Navigator (Preview)**:
+
+- Web Apps
+- Azure Storage
+- Azure SQL
+
+#### Related resources
+
+Change Analysis detects related resources. Common examples are:
+
+- Network Security Group
+- Virtual Network
+- Application Gateway
+- Load Balancer related to a Virtual Machine.
+
+Network resources are usually provisioned in the same resource group as the resources that use them. Filter the changes by resource group to show all changes for the virtual machine and its related networking resources.
++
+## Application Change Analysis service enablement
+
+The Application Change Analysis service:
+- Computes and aggregates change data from the data sources mentioned earlier.
+- Provides a set of analytics for users to:
+ - Easily navigate through all resource changes.
+ - Identify relevant changes in the troubleshooting or monitoring context.
+
+You'll need to register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription to make the tracked properties and proxied settings change data available. The `Microsoft.ChangeAnalysis` resource is automatically registered as you either:
+- Enter the Web App **Diagnose and Solve Problems** tool, or
+- Bring up the Change Analysis standalone tab.
+
+For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool-for-web-app) section.
+
+If you don't see changes within 30 minutes, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
++
+## Cost
+Application Change Analysis is a free service. Once enabled, the Change Analysis **Diagnose and solve problems** tool does not:
+- Incur any billing cost to subscriptions.
+- Have any performance impact for scanning Azure Resource properties changes.
+
+## Enable Change Analysis at scale for Web App in-guest file and environment variable changes
+
+If your subscription includes several web apps, enabling the service at the web app level would be inefficient. Instead, run the following script to enable all web apps in your subscription.
+
+### Prerequisites
+
+The PowerShell Az module. Follow the instructions at [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
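For example, a minimal install for the current user from the PowerShell Gallery:

```PowerShell
# Installs the Az module for the current user.
Install-Module -Name Az -Repository PSGallery -Scope CurrentUser
```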
+
+### Run the following script:
+
+```PowerShell
+# Log in to your Azure subscription
+Connect-AzAccount
+
+# Get subscription Id
+$SubscriptionId = Read-Host -Prompt 'Input your subscription Id'
+
+# Make Feature Flag visible to the subscription
+Set-AzContext -SubscriptionId $SubscriptionId
+
+# Register resource provider
+Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
+
+# Enable each web app
+$webapp_list = Get-AzWebApp | Where-Object {$_.kind -eq 'app'}
+foreach ($webapp in $webapp_list)
+{
+ $tags = $webapp.Tags
+    $tags["hidden-related:diagnostics/changeAnalysisScanEnabled"] = $true
+ Set-AzResource -ResourceId $webapp.Id -Tag $tags -Force
+}
+
+```
+
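As an optional spot check (a sketch, using the same tag name the script sets), you can list the web apps that now carry the Change Analysis tag:

```PowerShell
# Lists web apps that carry the Change Analysis scan tag set by the script above.
Get-AzWebApp |
    Where-Object { $_.Tags -and $_.Tags["hidden-related:diagnostics/changeAnalysisScanEnabled"] } |
    Select-Object Name, ResourceGroup
```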
+## Next steps
+
+- Learn about [visualizations in Change Analysis](change-analysis-visualizations.md)
+- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
+- Enable Application Insights for [Azure App Services apps](../../azure-monitor/app/azure-web-apps.md).
+- Enable Application Insights for [Azure VM and Azure virtual machine scale set IIS-hosted apps](../../azure-monitor/app/azure-vm-vmss-apps.md).
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Monitor description: Sample Azure Resource Graph queries for Azure Monitor showing use of resource types and tables to access Azure Monitor related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 03/02/2022 Last updated : 03/11/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you will specify. In some cases, `msDS-SupportedEncryptionTypes` write permission is required to set account attributes within AD. +
+* Group Managed Service Accounts (GMSA) cannot be used with the Active Directory connection user account.
+ * If you change the password of the Active Directory user account that is used in Azure NetApp Files, be sure to update the password configured in the [Active Directory Connections](#create-an-active-directory-connection). Otherwise, you will not be able to create new volumes, and your access to existing volumes might also be affected depending on the setup. * Before you can remove an Active Directory connection from your NetApp account, you need to first remove all volumes associated with it.
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-resource-manager Bicep Functions Numeric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-numeric.md
The output from the preceding example with the default values is:
## max
-`max (arg1)`
+`max(arg1)`
Returns the maximum value from an array of integers or a comma-separated list of integers.
The output from the preceding example with the default values is:
## min
-`min (arg1)`
+`min(arg1)`
Returns the minimum value from an array of integers or a comma-separated list of integers.
azure-resource-manager Bicep Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-string.md
description: Describes the functions to use in a Bicep file to work with strings
Previously updated : 02/07/2022 Last updated : 03/10/2022 # String functions for Bicep
The output from the preceding example with the default values is:
## base64ToJson
-`base64tojson`
+`base64ToJson(base64Value)`
Converts a base64 representation to a JSON object.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
## contains
-`contains (container, itemToFind)`
+`contains(container, itemToFind)`
Checks whether an array contains a value, an object contains a key, or a string contains a substring. The string comparison is case-sensitive. However, when testing if an object contains a key, the comparison is case-insensitive.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
## last
-`last (arg1)`
+`last(arg1)`
Returns last character of the string, or the last element of the array.
An integer that represents the last position of the item to find. The value is z
### Examples
-The following example shows how to use the indexOf and lastIndexOf functions:
+The following example shows how to use the `indexOf` and `lastIndexOf` functions:
```bicep output firstT int = indexOf('test', 't')
The output from the preceding example with the default values is:
## trim
-`trim (stringToTrim)`
+`trim(stringToTrim)`
Removes all leading and trailing white-space characters from the specified string.
The output from the preceding example with the default values is:
## uniqueString
-`uniqueString (baseString, ...)`
+`uniqueString(baseString, ...)`
Creates a deterministic hash string based on the values provided as parameters.
output uniqueDeploy string = uniqueString(resourceGroup().id, deployment().name)
## uri
-`uri (baseUri, relativeUri)`
+`uri(baseUri, relativeUri)`
Creates an absolute URI by combining the baseUri and the relativeUri string.
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/key-vault-parameter.md
The following procedure shows how to create a role with the minimum permission,
az role definition create --role-definition "<path-to-role-file>" az role assignment create \ --role "Key Vault resource manager template deployment operator" \
+ --scope /subscriptions/<Subscription-id>/resourceGroups/<resource-group-name> \
--assignee <user-principal-name> \ --resource-group ExampleGroup ```
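If you prefer Azure PowerShell, a rough equivalent of the role assignment above is sketched below; the custom role name comes from the preceding `az role definition create` step, and the placeholders mirror the CLI example:

```PowerShell
# Assign the custom role at the resource group scope (placeholders mirror the CLI example).
New-AzRoleAssignment -SignInName "<user-principal-name>" `
    -RoleDefinitionName "Key Vault resource manager template deployment operator" `
    -Scope "/subscriptions/<Subscription-id>/resourceGroups/<resource-group-name>"
```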
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Resource Manager description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-resource-manager Child Resource Name Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/child-resource-name-type.md
Each parent resource accepts only certain resource types as child resources. The
In an Azure Resource Manager template (ARM template), you can specify the child resource either within the parent resource or outside of the parent resource. The values you provide for the resource name and resource type vary based on whether the child resource is defined inside or outside of the parent resource. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [child resources](../bicep/child-resource-name-type.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [child resources](../bicep/child-resource-name-type.md).
## Within parent resource
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/conditional-resource-deployment.md
Sometimes you need to optionally deploy a resource in an Azure Resource Manager
> Conditional deployment doesn't cascade to [child resources](child-resource-name-type.md). If you want to conditionally deploy a resource and its child resources, you must apply the same condition to each resource type. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [conditional deployments](../bicep/conditional-resource-deployment.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [conditional deployments](../bicep/conditional-resource-deployment.md).
## Deploy condition
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-management-group.md
As your organization matures, you can deploy an Azure Resource Manager template (ARM template) to create resources at the management group level. For example, you may need to define and assign [policies](../../governance/policy/overview.md) or [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for a management group. With management group level templates, you can declaratively apply policies and assign roles at the management group level. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [management group deployments](../bicep/deploy-to-management-group.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [management group deployments](../bicep/deploy-to-management-group.md).
## Supported resources
azure-resource-manager Deploy To Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-resource-group.md
This article describes how to scope your deployment to a resource group. You use an Azure Resource Manager template (ARM template) for the deployment. The article also shows how to expand the scope beyond the resource group in the deployment operation. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [resource group deployments](../bicep/deploy-to-resource-group.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [resource group deployments](../bicep/deploy-to-resource-group.md).
## Supported resources
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-subscription.md
To simplify the management of resources, you can use an Azure Resource Manager t
To deploy templates at the subscription level, use Azure CLI, PowerShell, REST API, or the portal. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [subscription deployments](../bicep/deploy-to-subscription.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [subscription deployments](../bicep/deploy-to-subscription.md).
## Supported resources
azure-resource-manager Deploy To Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-tenant.md
As your organization matures, you may need to define and assign [policies](../../governance/policy/overview.md) or [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) across your Azure AD tenant. With tenant level templates, you can declaratively apply policies and assign roles at a global level. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [tenant deployments](../bicep/deploy-to-tenant.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [tenant deployments](../bicep/deploy-to-tenant.md).
## Supported resources
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/key-vault-parameter.md
For other users, grant the `Microsoft.KeyVault/vaults/deploy/action` permission.
az role definition create --role-definition "<path-to-role-file>" az role assignment create \ --role "Key Vault resource manager template deployment operator" \
+ --scope /subscriptions/<Subscription-id>/resourceGroups/<resource-group-name> \
--assignee <user-principal-name> \ --resource-group ExampleGroup ```
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/outputs.md
This article describes how to define output values in your Azure Resource Manage
The format of each output value must resolve to one of the [data types](data-types.md). > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [outputs](../bicep/outputs.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [outputs](../bicep/outputs.md).
## Define output values
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/parameters.md
Resource Manager resolves parameter values before starting the deployment operat
Each parameter must be set to one of the [data types](data-types.md). > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [parameters](../bicep/parameters.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [parameters](../bicep/parameters.md).
## Minimal declaration
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-declaration.md
Last updated 01/19/2022
To deploy a resource through an Azure Resource Manager template (ARM template), you add a resource declaration. Use the `resources` array in a JSON template. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [resource declaration](../bicep/resource-declaration.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [resource declaration](../bicep/resource-declaration.md).
## Set resource type and version
azure-resource-manager Template Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-array.md
Title: Template functions - arrays description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with arrays. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Array functions for ARM templates
To get an array of string values delimited by a value, see [split](template-func
Converts the value to an array.
+In Bicep, use the [array](../bicep/bicep-functions-array.md#array) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Combines multiple arrays and returns the concatenated array, or combines multiple string values and returns the concatenated string.
+In Bicep, use the [concat](../bicep/bicep-functions-array.md#concat) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Checks whether an array contains a value, an object contains a key, or a string contains a substring. The string comparison is case-sensitive. However, when testing if an object contains a key, the comparison is case-insensitive.
+In Bicep, use the [contains](../bicep/bicep-functions-array.md#contains) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## createArray
-`createArray (arg1, arg2, arg3, ...)`
+`createArray(arg1, arg2, arg3, ...)`
Creates an array from the parameters.
+In Bicep, the `createArray` function isn't supported. To construct an array, see the Bicep [array](../bicep/data-types.md#arrays) data type.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Determines if an array, object, or string is empty.
+In Bicep, use the [empty](../bicep/bicep-functions-array.md#empty) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the first element of the array, or first character of the string.
+In Bicep, use the [first](../bicep/bicep-functions-array.md#first) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a single array or object with the common elements from the parameters.
+In Bicep, use the [intersection](../bicep/bicep-functions-array.md#intersection) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## last
-`last (arg1)`
+`last(arg1)`
Returns the last element of the array, or last character of the string.
+In Bicep, use the [last](../bicep/bicep-functions-array.md#last) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the number of elements in an array, characters in a string, or root-level properties in an object.
+In Bicep, use the [length](../bicep/bicep-functions-array.md#length) function.
+ ### Parameters | Parameter | Required | Type | Description |
For more information about using this function with an array, see [Resource iter
Returns the maximum value from an array of integers or a comma-separated list of integers.
+In Bicep, use the [max](../bicep/bicep-functions-array.md#max) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the minimum value from an array of integers or a comma-separated list of integers.
+In Bicep, use the [min](../bicep/bicep-functions-array.md#min) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Creates an array of integers from a starting integer and containing a number of items.
+In Bicep, use the [range](../bicep/bicep-functions-array.md#range) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns an array with all the elements after the specified number in the array, or returns a string with all the characters after the specified number in the string.
+In Bicep, use the [skip](../bicep/bicep-functions-array.md#skip) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns an array or string. An array has the specified number of elements from the start of the array. A string has the specified number of characters from the start of the string.
+In Bicep, use the [take](../bicep/bicep-functions-array.md#take) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a single array or object with all elements from the parameters. For arrays, duplicate values are included once. For objects, duplicate property names are only included once.
+In Bicep, use the [union](../bicep/bicep-functions-array.md#union) function.
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Template Functions Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-date.md
Title: Template functions - date description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with dates. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Date functions for ARM templates
Resource Manager provides the following functions for working with dates in your
Adds a time duration to a base value. ISO 8601 format is expected.
+In Bicep, use the [dateTimeAdd](../bicep/bicep-functions-date.md#datetimeadd) function.
+ ### Parameters | Parameter | Required | Type | Description |
The next example template shows how to set the start time for an Automation sche
Returns the current (UTC) datetime value in the specified format. If no format is provided, the ISO 8601 (`yyyyMMddTHHmmssZ`) format is used. **This function can only be used in the default value for a parameter.**
+In Bicep, use the [utcNow](../bicep/bicep-functions-date.md#utcnow) function.
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Template Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-deployment.md
Title: Template functions - deployment description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve deployment information. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Deployment functions for ARM templates
To get values from resources, resource groups, or subscriptions, see [Resource f
Returns information about the current deployment operation.
+In Bicep, use the [deployment](../bicep/bicep-functions-deployment.md#deployment) function.
+ ### Return value This function returns the object that is passed during deployment. The properties in the returned object differ based on whether you are:
For a subscription deployment, the following example returns a deployment object
Returns information about the Azure environment used for deployment.
+In Bicep, use the [environment](../bicep/bicep-functions-deployment.md#environment) function.
+ ### Return value This function returns properties for the current Azure environment. The following example shows the properties for global Azure. Sovereign clouds may return slightly different properties.
The preceding example returns the following object when deployed to global Azure
Returns a parameter value. The specified parameter name must be defined in the parameters section of the template.
-In Bicep, directly reference parameters by using their symbolic names.
+In Bicep, directly reference [parameters](../bicep/parameters.md) by using their symbolic names.
### Parameters
For more information about using parameters, see [Parameters in ARM templates](.
Returns the value of variable. The specified variable name must be defined in the variables section of the template.
-In Bicep, directly reference variables by using their symbolic names.
+In Bicep, directly reference [variables](../bicep/variables.md) by using their symbolic names.
### Parameters
azure-resource-manager Template Functions Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-logical.md
Resource Manager provides several functions for making comparisons in your Azure
Checks whether all parameter values are true.
-The `and` function isn't supported in Bicep, use the [&& operator](../bicep/operators-logical.md#and-) instead.
+The `and` function isn't supported in Bicep. Use the [&& operator](../bicep/operators-logical.md#and-) instead.
### Parameters
The output from the preceding example is:
Converts the parameter to a boolean.
+In Bicep, use the [bool](../bicep/bicep-functions-logical.md#bool) logical function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns false.
-The `false` function isn't available in Bicep. Use the `false` keyword instead.
+The `false` function isn't available in Bicep. Use the `false` keyword instead.
### Parameters
The following [example template](https://github.com/krnese/AzureDeploy/blob/mast
Converts boolean value to its opposite value.
-The `not` function isn't supported in Bicep, use the [! operator](../bicep/operators-logical.md#not-) instead.
+The `not` function isn't supported in Bicep. Use the [! operator](../bicep/operators-logical.md#not-) instead.
### Parameters
The output from the preceding example is:
Checks whether any parameter value is true.
-The `or` function isn't supported in Bicep, use the [|| operator](../bicep/operators-logical.md#or-) instead.
+The `or` function isn't supported in Bicep. Use the [|| operator](../bicep/operators-logical.md#or-) instead.
### Parameters
The output from the preceding example is:
Returns true.
-The `true` function isn't available in Bicep. Use the `true` keyword instead.
+The `true` function isn't available in Bicep. Use the `true` keyword instead.
### Parameters
azure-resource-manager Template Functions Numeric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-numeric.md
Title: Template functions - numeric description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with numbers. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Numeric functions for ARM templates
Resource Manager provides the following functions for working with integers in y
Returns the sum of the two provided integers.
-The `add` function in not supported in Bicep. Use the [`+` operator](../bicep/operators-numeric.md#add-) instead.
+The `add` function isn't supported in Bicep. Use the [`+` operator](../bicep/operators-numeric.md#add-) instead.
### Parameters
The output from the preceding example with the default values is:
Returns the index of an iteration loop.
+In Bicep, use [iterative loops](../bicep/loops.md).
+ ### Parameters | Parameter | Required | Type | Description |
An integer representing the current index of the iteration.
Returns the integer division of the two provided integers.
-The `div` function in not supported in Bicep. Use the [`/` operator](../bicep/operators-numeric.md#divide-) instead.
+The `div` function isn't supported in Bicep. Use the [`/` operator](../bicep/operators-numeric.md#divide-) instead.
### Parameters
The following example shows how to use float to pass parameters to a Logic App:
Converts the specified value to an integer.
+In Bicep, use the [int](../bicep/bicep-functions-numeric.md#int) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## max
-`max (arg1)`
+`max(arg1)`
Returns the maximum value from an array of integers or a comma-separated list of integers.
+In Bicep, use the [max](../bicep/bicep-functions-numeric.md#max) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## min
-`min (arg1)`
+`min(arg1)`
Returns the minimum value from an array of integers or a comma-separated list of integers.
+In Bicep, use the [min](../bicep/bicep-functions-numeric.md#min) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the subtraction of the two provided integers.
+The `sub` function isn't supported in Bicep. Use the [- operator](../bicep/operators-numeric.md#subtract--) instead.
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-object.md
Title: Template functions - objects description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with objects. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Object functions for ARM templates
Resource Manager provides several functions for working with objects in your Azu
Checks whether an array contains a value, an object contains a key, or a string contains a substring. The string comparison is case-sensitive. However, when testing if an object contains a key, the comparison is case-insensitive.
+In Bicep, use the [contains](../bicep/bicep-functions-object.md#contains) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Creates an object from the keys and values.
-The `createObject` function isn't supported by Bicep. Construct an object by using `{}`. See [Objects](../bicep/data-types.md#objects).
+The `createObject` function isn't supported by Bicep. Construct an object by using `{}`. See [Objects](../bicep/data-types.md#objects).
### Parameters
The output from the preceding example with the default values is an object named
Determines if an array, object, or string is empty.
+In Bicep, use the [empty](../bicep/bicep-functions-object.md#empty) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a single array or object with the common elements from the parameters.
+In Bicep, use the [intersection](../bicep/bicep-functions-object.md#intersection) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts a valid JSON string into a JSON data type.
+In Bicep, use the [json](../bicep/bicep-functions-object.md#json) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the number of elements in an array, characters in a string, or root-level properties in an object.
+In Bicep, use the [length](../bicep/bicep-functions-object.md#length) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example is:
Returns a single array or object with all elements from the parameters. For arrays, duplicate values are included once. For objects, duplicate property names are only included once.
+In Bicep, use the [union](../bicep/bicep-functions-object.md#union) function.
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 02/11/2022 Last updated : 03/10/2022
To get deployment scope values, see [Scope functions](template-functions-scope.m
Returns the resource ID for an [extension resource](../management/extension-resource-types.md). An extension resource is a resource type that's applied to another resource to add to its capabilities.
+In Bicep, use the [extensionResourceId](../bicep/bicep-functions-resource.md#extensionresourceid) function.
+ ### Parameters | Parameter | Required | Type | Description |
Built-in policy definitions are tenant level resources. For an example of deploy
The syntax for this function varies by name of the list operations. Each implementation returns values for the resource type that supports a list operation. The operation name must start with `list` and may have a suffix. Some common usages are `list`, `listKeys`, `listKeyValue`, and `listSecrets`.
+In Bicep, use the [list*](../bicep/bicep-functions-resource.md#list) function.
+ ### Parameters | Parameter | Required | Type | Description |
The next example shows a `list` function that takes a parameter. In this case, t
Determines whether a resource type supports zones for the specified location or region. This function **only supports zonal resources**. Zone redundant services return an empty array. For more information, see [Azure Services that support Availability Zones](../../availability-zones/az-region.md).
+In Bicep, use the [pickZones](../bicep/bicep-functions-resource.md#pickzones) function.
+ ### Parameters | Parameter | Required | Type | Description |
The following example shows how to use the `pickZones` function to enable zone r
**The providers function has been deprecated.** We no longer recommend using it. If you used this function to get an API version for the resource provider, we recommend that you provide a specific API version in your template. Using a dynamically returned API version can break your template if the properties change between versions.
+In Bicep, the [providers](../bicep/bicep-functions-resource.md#providers) function is deprecated.
+ ## reference `reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])` Returns an object representing a resource's runtime state.
+In Bicep, use the [reference](../bicep/bicep-functions-resource.md#reference) function.
+ ### Parameters | Parameter | Required | Type | Description |
The following example template references a storage account that isn't deployed
See the [resourceGroup scope function](template-functions-scope.md#resourcegroup).
+In Bicep, use the [resourcegroup](../bicep/bicep-functions-scope.md#resourcegroup) scope function.
+ ## resourceId `resourceId([subscriptionId], [resourceGroupName], resourceType, resourceName1, [resourceName2], ...)` Returns the unique identifier of a resource. You use this function when the resource name is ambiguous or not provisioned within the same template. The format of the returned identifier varies based on whether the deployment happens at the scope of a resource group, subscription, management group, or tenant.
+In Bicep, use the [resourceId](../bicep/bicep-functions-resource.md#resourceid) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
See the [subscription scope function](template-functions-scope.md#subscription).
+In Bicep, use the [subscription](../bicep/bicep-functions-scope.md#subscription) scope function.
+ ## subscriptionResourceId `subscriptionResourceId([subscriptionId], resourceType, resourceName1, [resourceName2], ...)` Returns the unique identifier for a resource deployed at the subscription level.
+In Bicep, use the [subscriptionResourceId](../bicep/bicep-functions-resource.md#subscriptionresourceid) function.
+ ### Parameters | Parameter | Required | Type | Description |
The following template assigns a built-in role. You can deploy it to either a re
Returns the unique identifier for a resource deployed at the tenant level.
+In Bicep, use the [tenantResourceId](../bicep/bicep-functions-resource.md#tenantresourceid) function.
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Template Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-scope.md
Title: Template functions - scope description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about deployment scope. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Scope functions for ARM templates
To get values from parameters, variables, or the current deployment, see [Deploy
Returns an object with properties from the management group in the current deployment.
+In Bicep, use the [managementGroup](../bicep/bicep-functions-scope.md#managementgroup) scope function.
+ ### Remarks `managementGroup()` can only be used on a [management group deployments](deploy-to-management-group.md). It returns the current management group for the deployment operation. Use to get properties for the current management group.
The next example creates a new management group and uses this function to set th
Returns an object that represents the current resource group.
+In Bicep, use the [resourceGroup](../bicep/bicep-functions-scope.md#resourcegroup) scope function.
+ ### Return value The returned object is in the following format:
The preceding example returns an object in the following format:
Returns details about the subscription for the current deployment.
+In Bicep, use the [subscription](../bicep/bicep-functions-scope.md#subscription) scope function.
+ ### Return value The function returns the following format:
The following example shows the subscription function called in the outputs sect
Returns properties about the tenant for the current deployment.
+In Bicep, use the [tenant](../bicep/bicep-functions-scope.md#tenant) scope function.
+ ### Remarks `tenant()` can be used with any deployment scope. It always returns the current tenant. Use this function to get properties for the current tenant.
azure-resource-manager Template Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-string.md
Title: Template functions - string description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with strings. Previously updated : 02/11/2022 Last updated : 03/10/2022 # String functions for ARM templates
Resource Manager provides the following functions for working with strings in yo
Returns the base64 representation of the input string.
+In Bicep, use the [base64](../bicep/bicep-functions-string.md#base64) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## base64ToJson
-`base64tojson`
+`base64ToJson(base64Value)`
Converts a base64 representation to a JSON object.
+In Bicep, use the [base64ToJson](../bicep/bicep-functions-string.md#base64tojson) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts a base64 representation to a string.
+In Bicep, use the [base64ToString](../bicep/bicep-functions-string.md#base64tostring) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## concat
-`concat (arg1, arg2, arg3, ...)`
+`concat(arg1, arg2, arg3, ...)`
Combines multiple string values and returns the concatenated string, or combines multiple arrays and returns the concatenated array.
+In Bicep, use [string interpolation](../bicep/bicep-functions-string.md#concat) instead of the `concat` function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## contains
-`contains (container, itemToFind)`
+`contains(container, itemToFind)`
Checks whether an array contains a value, an object contains a key, or a string contains a substring. The string comparison is case-sensitive. However, when testing if an object contains a key, the comparison is case-insensitive.
+In Bicep, use the [contains](../bicep/bicep-functions-string.md#contains) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts a value to a data URI.
+In Bicep, use the [dataUri](../bicep/bicep-functions-string.md#datauri) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts a data URI formatted value to a string.
+In Bicep, use the [dataUriToString](../bicep/bicep-functions-string.md#datauritostring) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Determines if an array, object, or string is empty.
+In Bicep, use the [empty](../bicep/bicep-functions-string.md#empty) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Determines whether a string ends with a value. The comparison is case-insensitive.
+In Bicep, use the [endsWith](../bicep/bicep-functions-string.md#endswith) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the first character of the string, or first element of the array.
+In Bicep, use the [first](../bicep/bicep-functions-string.md#first) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Creates a formatted string from input values.
+In Bicep, use the [format](../bicep/bicep-functions-string.md#format) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Creates a value in the format of a globally unique identifier based on the values provided as parameters.
+In Bicep, use the [guid](../bicep/bicep-functions-string.md#guid) function.
+ ### Parameters | Parameter | Required | Type | Description |
The following example returns results from `guid`:
Returns the first position of a value within a string. The comparison is case-insensitive.
+In Bicep, use the [indexOf](../bicep/bicep-functions-string.md#indexof) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts a valid JSON string into a JSON data type. For more information, see [json function](template-functions-object.md#json).
+In Bicep, use the [json](../bicep/bicep-functions-string.md#json) function.
+ ## last
-`last (arg1)`
+`last(arg1)`
Returns last character of the string, or the last element of the array.
+In Bicep, use the [last](../bicep/bicep-functions-string.md#last) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the last position of a value within a string. The comparison is case-insensitive.
+In Bicep, use the [lastIndexOf](../bicep/bicep-functions-string.md#lastindexof) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the number of characters in a string, elements in an array, or root-level properties in an object.
+In Bicep, use the [length](../bicep/bicep-functions-string.md#length) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a value in the format of a globally unique identifier. **This function can only be used in the default value for a parameter.**
+In Bicep, use the [newGuid](../bicep/bicep-functions-string.md#newguid) function.
+ ### Remarks You can only use this function within an expression for the default value of a parameter. Using this function anywhere else in a template returns an error. The function isn't allowed in other parts of the template because it returns a different value each time it's called. Deploying the same template with the same parameters wouldn't reliably produce the same results.
The output from the preceding example varies for each deployment but will be sim
Returns a right-aligned string by adding characters to the left until reaching the total specified length.
+In Bicep, use the [padLeft](../bicep/bicep-functions-string.md#padleft) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a new string with all instances of one string replaced by another string.
+In Bicep, use the [replace](../bicep/bicep-functions-string.md#replace) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a string with all the characters after the specified number of characters, or an array with all the elements after the specified number of elements.
+In Bicep, use the [skip](../bicep/bicep-functions-string.md#skip) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns an array of strings that contains the substrings of the input string that are delimited by the specified delimiters.
+In Bicep, use the [split](../bicep/bicep-functions-string.md#split) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Determines whether a string starts with a value. The comparison is case-insensitive.
+In Bicep, use the [startsWith](../bicep/bicep-functions-string.md#startswith) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts the specified value to a string.
+In Bicep, use the [string](../bicep/bicep-functions-string.md#string) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a substring that starts at the specified character position and contains the specified number of characters.
+In Bicep, use the [substring](../bicep/bicep-functions-string.md#substring) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns an array or string. An array has the specified number of elements from the start of the array. A string has the specified number of characters from the start of the string.
+In Bicep, use the [take](../bicep/bicep-functions-string.md#take) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts the specified string to lower case.
+In Bicep, use the [toLower](../bicep/bicep-functions-string.md#tolower) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts the specified string to upper case.
+In Bicep, use the [toUpper](../bicep/bicep-functions-string.md#toupper) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## trim
-`trim (stringToTrim)`
+`trim(stringToTrim)`
Removes all leading and trailing white-space characters from the specified string.
+In Bicep, use the [trim](../bicep/bicep-functions-string.md#trim) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## uniqueString
-`uniqueString (baseString, ...)`
+`uniqueString(baseString, ...)`
Creates a deterministic hash string based on the values provided as parameters.
+In Bicep, use the [uniqueString](../bicep/bicep-functions-string.md#uniquestring) function.
+ ### Parameters | Parameter | Required | Type | Description |
The following example returns results from `uniquestring`:
## uri
-`uri (baseUri, relativeUri)`
+`uri(baseUri, relativeUri)`
Creates an absolute URI by combining the baseUri and the relativeUri string.
+In Bicep, use the [uri](../bicep/bicep-functions-string.md#uri) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Encodes a URI.
+In Bicep, use the [uriComponent](../bicep/bicep-functions-string.md#uricomponent) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a string of a URI encoded value.
+In Bicep, use the [uriComponentToString](../bicep/bicep-functions-string.md#uricomponenttostring) function.
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/variables.md
This article describes how to define and use variables in your Azure Resource Ma
Resource Manager resolves variables before starting the deployment operations. Wherever the variable is used in the template, Resource Manager replaces it with the resolved value. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [variables](../bicep/variables.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [variables](../bicep/variables.md).
## Define variable
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-signalr Server Graceful Shutdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/server-graceful-shutdown.md
# Server graceful shutdown
-Microsoft Azure SignalR Service provides two modes for gracefully shutdown a server.
+Microsoft Azure SignalR Service provides two modes for gracefully shutting down a SignalR hub server when the service is configured in **Default mode**, in which Azure SignalR Service acts as a proxy between the SignalR clients and the SignalR hub server.
The key advantage of using this feature is to prevent your customers from experiencing unexpected connection drops.
azure-sql Application Authentication Get Client Id Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/application-authentication-get-client-id-keys.md
$svcprincipal = az ad sp create --id $azureAdApplication.ApplicationId
Start-Sleep -s 15 # to avoid a PrincipalNotFound error, pause for 15 seconds # if you still get a PrincipalNotFound error, then rerun the following until successful.
-$roleassignment = az role assignment create --role "Contributor" --assignee $azureAdApplication.ApplicationId.Guid
+$roleassignment = az role assignment create --role "Contributor" --scope /subscriptions/{Subscription-id}/resourceGroups/{resource-group-name} --assignee $azureAdApplication.ApplicationId.Guid
# output the values we need for our C# application to successfully authenticate Write-Output "Copy these values into the C# sample app"
azure-sql Automation Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automation-manage.md
Azure Automation also has the ability to communicate with SQL servers directly,
The runbook and module galleries for [Azure Automation](../../automation/automation-runbook-gallery.md) offer a variety of runbooks from Microsoft and the community that you can import into Azure Automation. To use one, download a runbook from the gallery, or import runbooks directly from the gallery or from your Automation account in the Azure portal.
+>[!NOTE]
+> The Automation runbook may run from a range of IP addresses at any datacenter in an Azure region. To learn more, see [Automation region DNS records](/azure/automation/how-to/automation-region-dns-records).
+ ## Next steps Now that you've learned the basics of Azure Automation and how it can be used to manage Azure SQL Database, follow these links to learn more about Azure Automation.
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-sql Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure SQL Database description: Sample Azure Resource Graph queries for Azure SQL Database showing use of resource types and tables to access Azure SQL Database related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-hyperscale.md
The vCore-based service tiers are differentiated based on database availability
|| **General Purpose** | **Hyperscale** | **Business Critical** | |::|::|::|::| | **Best for** | Offers budget-oriented, balanced compute and storage options.|Most business workloads. Autoscaling storage size up to 100 TB, fast vertical and horizontal compute scaling, fast database restore.| OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas.|
-| **Resource type** | SQL Database / SQL Managed Instance | Single database | SQL Database / SQL Managed Instance |
| **Compute size** | 1 to 80 vCores | 1 to 80 vCores<sup>1</sup> | 1 to 80 vCores | | **Storage type** | Premium remote storage (per instance) | De-coupled storage with local SSD cache (per instance) | Super-fast local SSD storage (per instance)| | **Storage size**<sup>1</sup> | 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB |
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 03/07/2022 Last updated : 03/10/2022 # What's new in Azure SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqlmi.md)]
The following table lists the features of Azure SQL Managed Instance that are cu
| [Transactional Replication](replication-transactional-overview.md) | Replicate the changes from your tables into other databases in SQL Managed Instance, SQL Database, or SQL Server. Or update your tables when some rows are changed in other instances of SQL Managed Instance or SQL Server. For information, see [Configure replication in Azure SQL Managed Instance](replication-between-two-instances-configure-tutorial.md). | | [Threat detection](threat-detection-configure.md) | Threat detection notifies you of security threats detected to your database. | | [Windows Auth for Azure Active Directory principals](winauth-azuread-overview.md) | Kerberos authentication for Azure Active Directory (Azure AD) enables Windows Authentication access to Azure SQL Managed Instance. |
-|||
## General availability (GA)
The following table lists the features of Azure SQL Managed Instance that have t
|[Audit management operations](../database/auditing-overview.md#auditing-of-microsoft-support-operations) | March 2021 | Azure SQL audit capabilities enable you to audit operations done by Microsoft support engineers when they need to access your SQL assets during a support request, enabling more transparency in your workforce. | |[Granular permissions for dynamic data masking](../database/dynamic-data-masking-overview.md)| March 2021 | Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal with minimal impact on the application layer. It's a policy-based security feature that hides the sensitive data in the result set of a query over designated database fields, while the data in the database is not changed. It's now possible to assign granular permissions for data that's been dynamically masked. To learn more, see [Dynamic data masking](../database/dynamic-data-masking-overview.md#permissions). | |[Machine Learning Service](machine-learning-services-overview.md) | March 2021 | Machine Learning Services is a feature of Azure SQL Managed Instance that provides in-database machine learning, supporting both Python and R scripts. The feature includes Microsoft Python and R packages for high-performance predictive analytics and machine learning. |
-|||
+ ## Documentation changes
Learn about significant changes to the Azure SQL Managed Instance documentation.
| Changes | Details | | | |
-| **GA for maintenance window, preview for advance notifications** | The [maintenance window](../database/maintenance-window.md) feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance and receive advance notifications of maintenance windows. [Maintenance window advance notifications](../database/advance-notifications.md) (preview) are available for databases configured to use a non-default [maintenance window](../database/maintenance-window.md). |
-|**Windows Auth for Azure Active Directory principals preview** | Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience, and provides the basis for infrastructure modernization. Learn more in [Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance](winauth-azuread-overview.md). |
-| **Data virtualization preview** | It's now possible to query data in external sources such as Azure Data Lake Storage Gen2 or Azure Blob Storage, joining it with locally stored relational data. This feature is currently in preview. To learn more, see [Data virtualization](data-virtualization-overview.md). |
-|||
+| **Data virtualization preview** | It's now possible to query data in external sources such as Azure Data Lake Storage Gen2 or Azure Blob Storage, joining it with locally stored relational data. This feature is currently in preview. To learn more, see [Data virtualization](data-virtualization-overview.md). |
+| **Link feature guidance** | We've published a number of guides for the [link feature](link-feature.md) with SQL Managed Instance, including how to [prepare your environment](managed-instance-link-preparation.md), [configure replication](managed-instance-link-use-ssms-to-replicate-database.md), and [fail over your database](managed-instance-link-use-ssms-to-failover-database.md), as well as [best practices](link-feature-best-practices.md) for using the link feature. |
+| **Maintenance window GA, advance notifications preview** | The [maintenance window](../database/maintenance-window.md) feature is now generally available, allowing you to configure a maintenance schedule for your Azure SQL Managed Instance. It's also possible to receive advance notifications for planned maintenance events, which is currently in preview. Review [Maintenance window advance notifications (preview)](../database/advance-notifications.md) to learn more. |
+| **Windows Auth for Azure Active Directory principals preview** | Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience, and provides the basis for infrastructure modernization. Learn more in [Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance](winauth-azuread-overview.md). |
++ ### 2021
Learn about significant changes to the Azure SQL Managed Instance documentation.
| **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance. To learn more, see [maintenance window](../database/maintenance-window.md).| | **Service Broker message exchange** | The Service Broker component of Azure SQL Managed Instance allows you to compose your applications from independent, self-contained services, by providing native support for reliable and secure message exchange between the databases attached to the service. Currently in preview. To learn more, see [Service Broker](/sql/database-engine/configure-windows/sql-server-service-broker). | **SQL insights** | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [SQL insights](../../azure-monitor/insights/sql-insights-overview.md). |
-|||
+ ### 2020
The following changes were added to SQL Managed Instance and the documentation i
| **Enhanced management experience** | Using the new [OPERATIONS API](/rest/api/sql/2021-02-01-preview/managed-instance-operations), it's now possible to check the progress of long-running instance operations. To learn more, see [Management operations](management-operations-overview.md?tabs=azure-portal). | **Machine learning support** | Machine Learning Services with support for R and Python languages now include preview support on Azure SQL Managed Instance (Preview). To learn more, see [Machine learning with SQL Managed Instance](machine-learning-services-overview.md). | | **User-initiated failover** | User-initiated failover is now generally available, providing you with the capability to manually initiate an automatic failover using PowerShell, CLI commands, and API calls, improving application resiliency. To learn more, see, [testing resiliency](../database/high-availability-sla.md#testing-application-fault-resiliency).
-| | |
+
azure-sql Link Feature Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature-best-practices.md
Previously updated : 03/10/2022 Last updated : 03/11/2022 # Best practices with link feature for Azure SQL Managed Instance (preview) [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
This article outlines best practices when using the link feature for Azure SQL M
## Take log backups regularly
-The link feature replicates data using the [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups) concept based the Always On availability groups technology stack. Data replication with distributed availability groups is based on replicating transaction log records. No transaction log records can be truncated from the database on the primary instance until they're replicated to the database on the secondary instance. If transaction log record replication is slow or blocked due to network connection issues, the log file keeps growing on the primary instance. Growth speed depends on the intensity of workload and the network speed. If there's a prolonged network connection outage and heavy workload on primary instance, the log file may take all available storage space.
+The link feature replicates data using the [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups) concept based on the Always On availability groups technology stack. Data replication with distributed availability groups is based on replicating transaction log records. No transaction log records can be truncated from the database on the primary instance until they're replicated to the database on the secondary instance. If transaction log record replication is slow or blocked due to network connection issues, the log file keeps growing on the primary instance. Growth speed depends on the intensity of workload and the network speed. If there's a prolonged network connection outage and heavy workload on primary instance, the log file may take all available storage space.
-To minimize the risk of running out of space on your primary instance due to log file growth, make sure to take database log backups regularly. By taking log backups regularly, you make your database more resilient to unplanned log growth events. Consider scheduling daily log backup tasks using SQL Server Agent job.
+To minimize the risk of running out of space on your primary instance due to log file growth, make sure to **take database log backups regularly**. By taking log backups regularly, you make your database more resilient to unplanned log growth events. Consider scheduling daily log backup tasks using SQL Server Agent job.
You can use a Transact-SQL (T-SQL) script to back up the log file, such as the sample provided in this section. Replace the placeholders in the sample script with the name of your database, the name and path of the backup file, and the description.
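The article includes its own backup sample; a minimal sketch of such a log backup, using the same placeholder convention, could look like the following:

```sql
-- A sketch of a routine log backup; replace the placeholders with your own values.
BACKUP LOG [<DatabaseName>]
    TO DISK = N'<DiskPath>\<FileName>.trn'
    WITH DESCRIPTION = N'<Description>';
```

Scheduling a statement like this as a SQL Server Agent job keeps log truncation regular even if replication temporarily falls behind.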
The query output looks like the following example below for sample database **tp
:::image type="content" source="./media/link-feature-best-practices/database-log-file-size.png" alt-text="Screenshot with results of the command showing log file size and space used":::
-In this example, the database has used 76% of the available log, with an absolute log file size of approximately 27 GB (27,971 MB). The thresholds for action may vary based on your workload, but it's typically an indication that you should take a log backup to truncate log file and free up some space.
+In this example, the database has used 76% of the available log, with an absolute log file size of approximately 27 GB (27,971 MB). The thresholds for action may vary based on your workload, but it's typically an indication that you should take a log backup to truncate the log file and free up some space.
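One way to produce output like this (a sketch; the article's own query may differ) is the `sys.dm_db_log_space_usage` DMV, run in the context of the replicated database:

```sql
-- A sketch: report log size and the percentage of log space in use for the current database.
USE [<DatabaseName>];
GO
SELECT
    total_log_size_in_bytes / 1048576.0 AS TotalLogSizeMB,
    used_log_space_in_bytes / 1048576.0 AS UsedLogSpaceMB,
    used_log_space_in_percent           AS UsedLogSpacePercent
FROM sys.dm_db_log_space_usage;
```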
## Add startup trace flags
To get started with the link feature, [prepare your environment for replication]
For more information on the link feature, see the following articles: - [Managed Instance link – overview](link-feature.md)-- [Managed Instance link – connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog)
+- [Managed Instance link – connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog)
azure-sql Link Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature.md
To use the link feature, you'll need:
The underlying technology of near real-time data replication between SQL Server and SQL Managed Instance is based on distributed availability groups, part of the well-known and proven Always On availability group technology stack. Extend your SQL Server on-premises availability group to SQL Managed Instance in Azure in a safe and secure manner.
-There's no need to have an existing availability group or multiple nodes. The link supports single node SQL Server instances without existing availability groups, and also multiple-node SQL Server instances with existing availability groups. Through the link, you can leverage the modern benefits of Azure without migrating your entire SQL Server data estate to the cloud.
+There's no need to have an existing availability group or multiple nodes. The link supports single node SQL Server instances without existing availability groups, and also multiple-node SQL Server instances with existing availability groups. Through the link, you can use the modern benefits of Azure without migrating your entire SQL Server data estate to the cloud.
You can keep running the link for as long as you need it, for months and even years at a time. And for your modernization journey, if or when you're ready to migrate to Azure, the link enables a considerably improved migration experience with the minimum possible downtime compared to all other options available today, providing a true online migration to SQL Managed Instance. ## Supported scenarios
-Data replicated through the link feature from SQL Server to Azure SQL Managed Instance can be used with a number of scenarios, such as:
+Data replicated through the link feature from SQL Server to Azure SQL Managed Instance can be used with several scenarios, such as:
- **Use Azure services without migrating to the cloud** - **Offload read-only workloads to Azure**
Use the link feature to leverage Azure services using SQL Server data without mi
### Offload workloads to Azure
-You can also use the link feature to offload workloads to Azure. For example, an application could use SQL Server for read / write workloads, while offloading read-only workloads to SQL Managed Instance in any of Azure's 60+ regions worldwide. Once the link is established, the primary database on SQL Server is read/write accessible, while replicated data to SQL Managed Instance in Azure is read-only accessible. This allows for various scenarios where replicated databases on SQL Managed Instance can be used for read scale-out and offloading read-only workloads to Azure. SQL Managed Instance, in parallel, can also host independent read/write databases. This allows for copying the replicated database to another read/write database on the same managed instance for further data processing.
+You can also use the link feature to offload workloads to Azure. For example, an application could use SQL Server for read-write workloads, while offloading read-only workloads to SQL Managed Instance in any of Azure's 60+ regions worldwide. Once the link is established, the primary database on SQL Server is read/write accessible, while replicated data to SQL Managed Instance in Azure is read-only accessible. This allows for various scenarios where replicated databases on SQL Managed Instance can be used for read scale-out and offloading read-only workloads to Azure. SQL Managed Instance, in parallel, can also host independent read/write databases. This allows for copying the replicated database to another read/write database on the same managed instance for further data processing.
The link is database scoped (one link per one database), allowing for consolidation and deconsolidation of workloads in Azure. For example, you can replicate databases from multiple SQL Servers to a single SQL Managed Instance in Azure (consolidation), or replicate databases from a single SQL Server to multiple managed instances via a 1 to 1 relationship between a database and a managed instance - to any of Azure's regions worldwide (deconsolidation). The latter provides you with an efficient way to quickly bring your workloads closer to your customers in any region worldwide, which you can use as read-only replicas.
Managed Instance link has a set of general limitations, and those are listed in
- Replicating Databases using Hekaton (In-Memory OLTP) isn't supported on Managed Instance General Purpose service tier. Hekaton is only supported on Managed Instance Business Critical service tier. - For the full list of differences between SQL Server and Managed Instance, see [this article](./transact-sql-tsql-differences-sql-server.md). - In case Change data capture (CDC), log shipping, or service broker are used with database replicated on the SQL Server, and in case of database migration to Managed Instance, on the failover to the Azure, clients will need to connect using instance name of the current global primary replica. you'll need to manually re-configure these settings.-- In case Transactional Replication is used with database replicated on the SQL Server, and in case of migration scenario, on failover to Azure, transactional replication on Azure SQL Managed instance will not continue. you'll need to manually re-configure Transactional Replication.-- In case distributed transactions are used with database replicated from the SQL Server, and in case of migration scenario, on the cutover to the cloud, the DTC capabilities will not be transferred. There will be no possibility for migrated database to get involved in distributed transactions with SQL Server, as Managed Instance doesn't support distributed transactions with SQL Server at this time. For reference, Managed Instance today supports distributed transactions only between other Managed Instances, see [this article](../database/elastic-transactions-overview.md#transactions-for-sql-managed-instance).
+- In case Transactional Replication is used with a database replicated on the SQL Server, and in case of a migration scenario, on failover to Azure, transactional replication on Azure SQL Managed Instance won't continue. You'll need to manually re-configure Transactional Replication.
+- In case distributed transactions are used with a database replicated from the SQL Server, and in case of a migration scenario, on the cutover to the cloud, the DTC capabilities won't be transferred. There will be no possibility for the migrated database to get involved in distributed transactions with SQL Server, as Managed Instance doesn't support distributed transactions with SQL Server at this time. For reference, Managed Instance today supports distributed transactions only between other Managed Instances. See [this article](../database/elastic-transactions-overview.md#transactions-for-sql-managed-instance).
- Managed Instance link can replicate a database of any size if it fits into the chosen storage size of the target Managed Instance. ### Additional limitations
Some Managed Instance link features and capabilities are limited **at this time*
- Managed Instance Link authentication between the SQL Server instance and Managed Instance is certificate-based, available only through an exchange of certificates. Windows authentication between instances isn't supported. - Replication of user databases from SQL Server to Managed Instance is one-way. User databases from Managed Instance can't be replicated back to SQL Server. - Auto failover groups replication to a secondary Managed Instance can't be used in parallel while operating the Managed Instance Link with SQL Server.
+- Replicated databases aren't part of the auto-backup process on SQL Managed Instance.
## Next steps
-If you are interested in using Link feature for Azure SQL Managed Instance with versions and editions that are currently not supported, sign-up [here](https://aka.ms/mi-link-signup).
+If you're interested in using the link feature for Azure SQL Managed Instance with versions and editions that are currently not supported, sign up [here](https://aka.ms/mi-link-signup).
For more information on the link feature, see the following:
azure-sql Managed Instance Link Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-preparation.md
Previously updated : 03/07/2022 Last updated : 03/10/2022 # Prepare environment for link feature - Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you to prepare your environment for the [Managed Instance link feature](link-feature.md) so that you can replicate your databases from your instance of SQL Server to your instance of Azure SQL Managed Instance.
+This article teaches you to prepare your environment for the [Managed Instance link feature](link-feature.md) so that you can replicate databases from your SQL Server instance to Azure SQL Managed Instance.
> [!NOTE] > The link feature for Azure SQL Managed Instance is currently in preview.
To use the Managed Instance link feature, you need the following prerequisites:
## Prepare your SQL Server instance
-To prepare your SQL Server instance, you need to validate you're on the minimum supported version, you've enabled the availability group feature, and you've added the proper trace flags at startup. You will need to restart SQL Server for these changes to take effect.
+To prepare your SQL Server instance, you need to validate:
+- you're on the minimum supported version;
+- you've enabled the availability group feature;
+- you've added the proper trace flags at startup;
+- your databases are in full recovery mode and backed up.
+
+You'll need to restart SQL Server for these changes to take effect.
### Install CU15 (or higher)
To check your SQL Server version, run the following Transact-SQL (T-SQL) script:
SELECT @@VERSION ```
-If your SQL Server version is lower than CU15 (15.0.4198.2), either install the minimally supported [CU15](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6), or the current latest cumulative update. Your SQL Server instance will be restarted during the update.
+If your SQL Server version is lower than CU15 (15.0.4198.2), either install [CU15](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) or the latest cumulative update. Your SQL Server instance will be restarted during the update.
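If you want a value that's easier to compare than the full `@@VERSION` banner, `SERVERPROPERTY` returns the build number and update level directly (a small sketch, not part of the original article):

```sql
-- A sketch: return the build number and cumulative update level to compare against 15.0.4198.2 (CU15).
SELECT
    SERVERPROPERTY('ProductVersion')     AS ProductVersion,
    SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel;
```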
+
+### Create database master key in the master database
+
+Create a database master key in the `master` database by running the following T-SQL script:
+
+```sql
+-- Create MASTER KEY
+USE MASTER
+CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>'
+```
+
+To check whether you already have a database master key, use the following T-SQL script.
+```sql
+SELECT * FROM sys.symmetric_keys WHERE name LIKE '%DatabaseMasterKey%'
+```
### Enable availability groups feature
-The link feature for SQL Managed Instance relies on the Always On availability groups feature, which is not enabled by default. To learn more, review [enabling the Always On availability groups feature](/sql/database-engine/availability-groups/windows/enable-and-disable-always-on-availability-groups-sql-server).
+The link feature for SQL Managed Instance relies on the Always On availability groups feature, which isn't enabled by default. To learn more, review [enabling the Always On availability groups feature](/sql/database-engine/availability-groups/windows/enable-and-disable-always-on-availability-groups-sql-server).
To confirm the Always On availability groups feature is enabled, run the following Transact-SQL (T-SQL) script:
select
end as 'HadrStatus' ```
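A shorter check (a sketch using the built-in `SERVERPROPERTY` function) returns 1 when Always On availability groups are enabled and 0 when they aren't:

```sql
-- A sketch: 1 = Always On availability groups enabled, 0 = disabled.
SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled;
```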
-If the availability groups feature is not enabled, follow these steps to enable it:
+If the availability groups feature isn't enabled, follow these steps to enable it:
1. Open the **SQL Server Configuration Manager**. 1. Choose the SQL Server service from the navigation pane.
If the availability groups feature is not enabled, follow these steps to enable
To optimize Managed Instance link performance, enabling trace flags `-T1800` and `-T9567` at startup is highly recommended: -- **-T1800**: This trace flag optimizes SQL Server performance when the disks hosting the log files for the primary and secondary replica in an availability group have different sector sizes, such as 512 bytes and 4k. If both primary and secondary replicas have a disk sector size of 4k, this trace flag isn't required. To learn more, review [KB3009974](https://support.microsoft.com/topic/kb3009974-fix-slow-synchronization-when-disks-have-different-sector-sizes-for-primary-and-secondary-replica-log-files-in-sql-server-ag-and-logshipping-environments-ed181bf3-ce80-b6d0-f268-34135711043c). -- **-T9567**: This trace flag enables compression of the data stream for availability groups during automatic seeding, which increases the load on the processor but can significantly reduce transfer time during seeding.
+- **-T1800**: This trace flag optimizes performance when the log files for the primary and secondary replica in an availability group are hosted on disks with different sector sizes, such as 512 bytes and 4k. If both primary and secondary replicas have a disk sector size of 4k, this trace flag isn't required. To learn more, review [KB3009974](https://support.microsoft.com/topic/kb3009974-fix-slow-synchronization-when-disks-have-different-sector-sizes-for-primary-and-secondary-replica-log-files-in-sql-server-ag-and-logshipping-environments-ed181bf3-ce80-b6d0-f268-34135711043c).
+- **-T9567**: This trace flag enables compression of the data stream for availability groups during automatic seeding. The compression increases the load on the processor but can significantly reduce transfer time during seeding.
To enable these trace flags at startup, follow these steps:
To enable these trace flags at startup, follow these steps:
To learn more, review [enabling trace flags](/sql/t-sql/database-console-commands/dbcc-traceon-transact-sql). - ### Restart SQL Server and validate configuration - After you've validated you're on a supported version of SQL Server, enabled the Always On availability groups feature, and added your startup trace flags, restart your SQL Server instance to apply all of these changes. To restart your SQL Server instance, follow these steps:
To restart your SQL Server instance, follow these steps:
:::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-restart.png" alt-text="Screenshot showing S Q L Server restart command call.":::
-After the restart, use Transact-SQL to validate the configuration of your SQL Server. Your SQL Server version should be 15.0.4198.2 or greater, the Always On availability groups feature should be enabled, and you should have the Trace flags -T1800 and -T9567 enabled.
+After the restart, use Transact-SQL to validate the configuration of your SQL Server. Your SQL Server version should be 15.0.4198.2 or greater, the Always On availability groups feature should be enabled, and you should have the Trace flags -T1800 and -T9567 enabled.
To validate your configuration, run the following Transact-SQL (T-SQL) script:
The following screenshot is an example of the expected outcome for a SQL Server
:::image type="content" source="./media/managed-instance-link-preparation/ssms-results-expected-outcome.png" alt-text="Screenshot showing expected outcome in S S M S.":::
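As a rough equivalent of that validation (a sketch; the article's own script may differ), the following checks the version, the availability groups setting, and the globally enabled trace flags:

```sql
-- A sketch of a post-restart validation: expect ProductVersion 15.0.4198.2 or later and IsHadrEnabled = 1.
SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('IsHadrEnabled')  AS IsHadrEnabled;
GO

-- Trace flags 1800 and 9567 should be listed with Global = 1.
DBCC TRACESTATUS(-1);
```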
+### User database recovery mode and backup
+
+All databases that are to be replicated via SQL Managed Instance link must be in full recovery mode and have at least one backup.
+
+```sql
+-- Set full recovery mode for all databases you want to replicate.
+ALTER DATABASE [<DatabaseName>] SET RECOVERY FULL
+GO
+
+-- Execute backup for all databases you want to replicate.
+BACKUP DATABASE [<DatabaseName>] TO DISK = N'<DiskPath>'
+GO
+```
+ ## Configure network connectivity For the Managed Instance link to work, there must be network connectivity between SQL Server and SQL Managed Instance. The network option that you choose depends on where your SQL Server resides - whether it's on-premises or on a virtual machine (VM).
If your SQL Server is hosted outside of Azure, establish a VPN connection betwee
### Open network ports between the environments
-Port 5022 needs to allow inbound and outbound traffic between SQL Server and SQL Managed Instance. Port 5022 is the standard port used for availability groups, and cannot be changed or customized.
+Port 5022 needs to allow inbound and outbound traffic between SQL Server and SQL Managed Instance. Port 5022 is the standard port used for availability groups, and can't be changed or customized.
The following table describes port actions for each environment:
Bidirectional network connectivity between SQL Server and SQL Managed Instance i
### Test connection from SQL Server to SQL Managed Instance
-To check if SQL Server can reach your SQL Managed Instance use the `tnc` command in PowerShell from the SQL Server host machine. Replace `<ManagedInstanceFQDN>` with the fully qualified domain name of the Azure SQL Managed Instance.
+To check if SQL Server can reach your SQL Managed Instance, use the `tnc` command in PowerShell from the SQL Server host machine. Replace `<ManagedInstanceFQDN>` with the fully qualified domain name of the Azure SQL Managed Instance.
```powershell
tnc <ManagedInstanceFQDN> -port 5022
```
A successful test shows `TcpTestSucceeded : True`:
:::image type="content" source="./media/managed-instance-link-preparation/powershell-output-tnc-command.png" alt-text="Screenshot showing output of T N C command in PowerShell.":::
-If the response is unsuccessful, verify the following:
+If the response is unsuccessful, verify the following network settings:
- There are rules in both the network firewall *and* the Windows firewall that allow traffic to the *subnet* of the SQL Managed Instance (see the example after this list). -- There is an NSG rule allowing communication on port 5022 for the virtual network hosting the SQL Managed Instance.
+- There's an NSG rule allowing communication on port 5022 for the virtual network hosting the SQL Managed Instance.
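As an illustration only, Windows firewall rules that allow port 5022 in both directions on the SQL Server host might look like the following sketch. The rule names are made up, and the network firewall and NSG rules are managed separately.

```powershell
# Sketch: allow TCP 5022 in both directions on the SQL Server host (rule names are illustrative).
New-NetFirewallRule -DisplayName "MI link 5022 inbound"  -Direction Inbound  -Protocol TCP -LocalPort 5022  -Action Allow
New-NetFirewallRule -DisplayName "MI link 5022 outbound" -Direction Outbound -Protocol TCP -RemotePort 5022 -Action Allow
```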
#### Test connection from SQL Managed Instance to SQL Server
DROP CERTIFICATE TEST_CERT
GO ```
-If the connection is unsuccessful, verify the following:
+If the connection is unsuccessful, verify the following items:
- The firewall on the host SQL Server allows inbound and outbound communication on port 5022. -- There is an NSG rule for the virtual network hosting the SQL Managed instance that allows communication on port 5022. -- If your SQL Server is on an Azure VM, there is an NSG rule allowing communication on port 5022 on the virtual network hosting the VM.
+- There's an NSG rule for the virtual network hosting the SQL Managed instance that allows communication on port 5022.
+- If your SQL Server is on an Azure VM, there's an NSG rule allowing communication on port 5022 on the virtual network hosting the VM.
- SQL Server is running. > [!CAUTION]
azure-sql Managed Instance Link Use Ssms To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-failover-database.md
Previously updated : 03/07/2022 Last updated : 03/10/2022 # Failover database with link feature in SSMS - Azure SQL Managed Instance
To failover your database, follow these steps:
:::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-introduction.png" alt-text="Screenshot showing Introduction page.":::
-3. On the **Log in to Azure** page, select **Sign-in** to provide your credentials and sign into your Azure account. Select the subscription that is hosting the your SQL Managed Instance from the drop-down and then select **Next**:
+3. On the **Log in to Azure** page, select **Sign-in** to provide your credentials and sign into your Azure account. Select the subscription that is hosting your SQL Managed Instance from the drop-down and then select **Next**:
:::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-login-to-azure.png" alt-text="Screenshot showing Log in to Azure page.":::
azure-sql Managed Instance Link Use Ssms To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-replicate-database.md
Previously updated : 03/07/2022 Last updated : 03/10/2022 # Replicate database with link feature in SSMS - Azure SQL Managed Instance
Use the **New Managed Instance link** wizard in SQL Server Management Studio (SS
To set up the Managed Instance link, follow these steps: 1. Open SQL Server Management Studio (SSMS) and connect to your instance of SQL Server.
-1. In **Object Explorer**, right-click your database, hover over **Azure SQL Managed Instance link** and select **Replicate database** to open the **New Managed Instance link** wizard:
+1. In **Object Explorer**, right-click your database, hover over **Azure SQL Managed Instance link**, and select **Replicate database** to open the **New Managed Instance link** wizard. If the SQL Server version isn't supported, this option won't be available in the context menu.
:::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-ssms-database-context-replicate-database.png" alt-text="Screenshot showing database's context menu option to replicate database after hovering over Azure SQL Managed Instance link.":::
azure-sql Availability Group Azure Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/availability-group-azure-portal-configure.md
If you do not already have an existing cluster, create it by using the Azure por
:::image type="content" source="media/availability-group-az-portal-configure/configure-new-cluster-1.png" alt-text="Provide name, storage account, and credentials for the cluster":::
-1. Expand **Windows Server Failover Cluster credentials** to provide [credentials](/rest/api/sqlvm/sqlvirtualmachinegroups/createorupdate#wsfcdomainprofile) for the SQL Server service account, as well as the cluster operator and bootstrap accounts if they're different than the account used for the SQL Server service.
+1. Expand **Windows Server Failover Cluster credentials** to provide [credentials](/rest/api/sqlvm/2021-11-01-preview/sql-virtual-machine-groups/create-or-update#wsfcdomainprofile) for the SQL Server service account, as well as the cluster operator and bootstrap accounts if they're different than the account used for the SQL Server service.
:::image type="content" source="media/availability-group-az-portal-configure/configure-new-cluster-2.png" alt-text="Provide credentials for the SQL Service account, cluster operator account and cluster bootstrap account":::
azure-sql Sql Assessment For Sql Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-assessment-for-sql-vm.md
The SQL best practices assessment feature of the Azure portal identifies possible performance issues and evaluates that your SQL Server on Azure Virtual Machines (VMs) is configured to follow best practices using the [rich ruleset](https://github.com/microsoft/sql-server-samples/blob/master/samples/manage/sql-assessment-api/DefaultRuleset.csv) provided by the [SQL Assessment API](/sql/sql-assessment-api/sql-assessment-api-overview).
-To learn more, watch this video on [SQL best practices assessment](/shows/Data-Exposed/?WT.mc_id=dataexposed-c9-niner):
+To learn more, watch this video on [SQL best practices assessment](/shows/Data-Exposed/optimally-configure-sql-server-on-azure-virtual-machines-with-sql-assessment?WT.mc_id=dataexposed-c9-niner):
<iframe src="https://aka.ms/docs/player?id=13b2bf63-485c-4ec2-ab14-a1217734ad9f" width="640" height="370"></iframe>
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
If you run the script on a computer with restricted access, ensure there's acces
> [!NOTE] > > If the backed-up VM is Windows, then the geo-name is mentioned in the generated password.<br><br>
-> For eg, if the generated password is *ContosoVM_wcus_GUID*, then then geo-name is wcus and the URL would be: <https://pod01-rec2.wcus.backup.windowsazure.com><br><br>
+> For example, if the generated password is *ContosoVM_wcus_GUID*, then the geo-name is *wcus* and the URL would be: <`https://pod01-rec2.wcus.backup.windowsazure.com`><br><br>
> > > If the backed up VM is Linux, then the script file you downloaded in step 1 [above](#step-1-generate-and-download-script-to-browse-and-recover-files) will have the **geo-name** in the name of the file. Use that **geo-name** to fill in the URL. The downloaded script name will begin with: \'VMname\'\_\'geoname\'_\'GUID\'.<br><br>
-> So for example, if the script filename is *ContosoVM_wcus_12345678*, the **geo-name** is *wcus* and the URL would be: <https://pod01-rec2.wcus.backup.windowsazure.com><br><br>
+> So for example, if the script filename is *ContosoVM_wcus_12345678*, the **geo-name** is *wcus* and the URL would be: <`https://pod01-rec2.wcus.backup.windowsazure.com`><br><br>
>
backup Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-overview.md
Title: What is Azure Backup? description: Provides an overview of the Azure Backup service, and how it contributes to your business continuity and disaster recovery (BCDR) strategy. Previously updated : 01/04/2022 Last updated : 03/11/2022 # What is the Azure Backup service?
The Azure Backup service provides simple, secure, and cost-effective solutions t
- **Azure Files shares** - [Back up Azure File shares to a storage account](backup-afs.md) - **SQL Server in Azure VMs** - [Back up SQL Server databases running on Azure VMs](backup-azure-sql-database.md) - **SAP HANA databases in Azure VMs** - [Back up SAP HANA databases running on Azure VMs](backup-azure-sap-hana-database.md)-- **Azure Database for PostgreSQL servers (preview)** - [Back up Azure PostgreSQL databases and retain the backups for up to 10 years](backup-azure-database-postgresql.md)
+- **Azure Database for PostgreSQL servers** - [Back up Azure PostgreSQL databases and retain the backups for up to 10 years](backup-azure-database-postgresql.md)
- **Azure Blobs** - [Overview of operational backup for Azure Blobs](blob-backup-overview.md) ![Azure Backup Overview](./media/backup-overview/azure-backup-overview.png)
backup Blob Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-restore.md
Title: Restore Azure Blobs description: Learn how to restore Azure Blobs. Previously updated : 05/05/2021 Last updated : 03/11/2022
Block blobs in storage accounts with operational backup configured can be restor
- Blobs will be restored to the same storage account. So blobs that have undergone changes since the time to which you're restoring will be overwritten. - Only block blobs in a standard general-purpose v2 storage account can be restored as part of a restore operation. Append blobs, page blobs, and premium block blobs aren't restored.-- While a restore job is in progress, blobs in the storage cannot be read or written to.
+- When you perform a restore operation, Azure Storage blocks data operations on the blobs in the ranges being restored for the duration of the operation.
- A blob with an active lease cannot be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail automatically. Break any active leases before starting the restore operation. - Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state. - If you delete a container from the storage account by calling the **Delete Container** operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers in addition to operational backup to protect against accidental deletion of containers.
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
cloudfoundry Create Cloud Foundry On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/create-cloud-foundry-on-azure.md
For more information, see [Use SSH keys with Windows on Azure](../virtual-machin
5. Set the permission role of your service principal as a Contributor. ```azurecli
- az role assignment create --assignee "{enter-your-homepage}" --role "Contributor"
+ az role assignment create --assignee "{enter-your-homepage}" --role "Contributor" --scope /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}
``` Or you can also use ```azurecli
- az role assignment create --assignee {service-principal-name} --role "Contributor"
+ az role assignment create --assignee {service-principal-name} --role "Contributor" --scope /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}
``` ![Service principal role assignment](media/deploy/svc-princ.png )
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
container-apps Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress.md
With ingress enabled, your container app features the following characteristics:
- Supports TLS termination - Supports HTTP/1.1 and HTTP/2
+- Supports WebSocket and gRPC
- Endpoints always use TLS 1.2, terminated at the ingress point - Endpoints always expose ports 80 (for HTTP) and 443 (for HTTPS). - By default, HTTP requests to port 80 are automatically redirected to HTTPS on 443.
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
container-registry Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Container Registry description: Sample Azure Resource Graph queries for Azure Container Registry showing use of resource types and tables to access Azure Container Registry related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
cosmos-db Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Cosmos DB description: Sample Azure Resource Graph queries for Azure Cosmos DB showing use of resource types and tables to access Azure Cosmos DB related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-processor.md
ms.devlang: csharp Previously updated : 11/16/2021 Last updated : 03/10/2022
There are four main components of implementing the change feed processor:
1. **The lease container:** The lease container acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
-1. **The host:** A host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different **instance name**.
+1. **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it could be a VM, a Kubernetes pod, an Azure App Service instance, or an actual physical machine. It has a unique identifier, referred to as the *instance name* throughout this article.
1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads. To further understand how these four elements of change feed processor work together, let's look at an example in the following diagram. The monitored container stores documents and uses 'City' as the partition key. We see that the partition key values are distributed in ranges that contain items.
-There are two host instances and the change feed processor is assigning different ranges of partition key values to each instance to maximize compute distribution.
+There are two compute instances, and the change feed processor is assigning different ranges of partition key values to each instance to maximize compute distribution. Each instance has a unique and distinct instance name.
Each range is being read in parallel and its progress is maintained separately from other ranges in the lease container. :::image type="content" source="./media/change-feed-processor/changefeedprocessor.png" alt-text="Change feed processor example" border="false":::
An example of a delegate would be:
[!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=Delegate)]
-Finally you define a name for this processor instance with `WithInstanceName` and which is the container to maintain the lease state with `WithLeaseContainer`.
+Afterwards, you define the compute instance name or unique identifier with `WithInstanceName`. This value should be unique and different for each compute instance you deploy. Finally, you specify the container that maintains the lease state with `WithLeaseContainer`.
Calling `Build` will give you the processor instance that you can start by calling `StartAsync`.
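A minimal sketch of how these calls fit together follows. The database, container, and handler names are illustrative and aren't taken from the sample project.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Sketch: build and start a change feed processor. All names below are placeholders.
CosmosClient client = new CosmosClient("<connection-string>");
Container monitoredContainer = client.GetContainer("databaseName", "monitoredContainer");
Container leaseContainer = client.GetContainer("databaseName", "leases");

ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<dynamic>(
        processorName: "changeFeedSample",
        onChangesDelegate: HandleChangesAsync)
    .WithInstanceName("instance-1")        // must be unique for each compute instance
    .WithLeaseContainer(leaseContainer)    // shared lease state used for coordination
    .Build();

await processor.StartAsync();

// Delegate that receives each batch of changes.
static Task HandleChangesAsync(IReadOnlyCollection<dynamic> changes, CancellationToken cancellationToken)
{
    foreach (var change in changes)
    {
        Console.WriteLine(change);
    }

    return Task.CompletedTask;
}
```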
The change feed processor lets you hook to relevant events in its [life cycle](#
## Deployment unit
-A single change feed processor deployment unit consists of one or more instances with the same `processorName` and lease container configuration. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances.
+A single change feed processor deployment unit consists of one or more compute instances with the same `processorName` and lease container configuration, but a different instance name for each. You can have many deployment units, where each one has a different business flow for the changes and each deployment unit consists of one or more instances.
For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.

## Dynamic scaling
-As mentioned before, within a deployment unit you can have one or more instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
+As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are the following (see the sketch after the list):
1. All instances should have the same lease container configuration. 1. All instances should have the same `processorName`.
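For example, a deployment that scales out can derive the instance name from the host itself, so each added compute instance automatically gets a unique name. Reusing the names from the earlier sketch, and with `Environment.MachineName` as one illustrative choice:

```csharp
// Sketch: same processorName and lease container across all instances in the deployment unit,
// but an instance name that is unique to each host (illustrative choice).
ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<dynamic>("changeFeedSample", HandleChangesAsync)
    .WithInstanceName(Environment.MachineName)   // unique per VM, pod, or App Service instance
    .WithLeaseContainer(leaseContainer)
    .Build();
```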
cosmos-db Sql Api Dotnet V3sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v3sdk-samples.md
Title: 'Azure Cosmos DB: .NET (Microsoft.Azure.Cosmos) examples for the SQL API'
-description: Find the C# .NET V3 SDK examples on GitHub for common tasks using the Azure Cosmos DB SQL API.
+description: Find the C# .NET v3 SDK examples on GitHub for common tasks by using the Azure Cosmos DB SQL API.
Last updated 02/23/2022
-# Azure Cosmos DB.NET V3 SDK (Microsoft.Azure.Cosmos) examples for the SQL API
+
+# Azure Cosmos DB .NET v3 SDK (Microsoft.Azure.Cosmos) examples for the SQL API
+ [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
-> * [.NET V3 SDK Examples](sql-api-dotnet-v3sdk-samples.md)
-> * [Java V4 SDK Examples](sql-api-java-sdk-samples.md)
-> * [Spring Data V3 SDK Examples](sql-api-spring-data-sdk-samples.md)
+> * [.NET v3 SDK Examples](sql-api-dotnet-v3sdk-samples.md)
+> * [Java v4 SDK Examples](sql-api-java-sdk-samples.md)
+> * [Spring Data v3 SDK Examples](sql-api-spring-data-sdk-samples.md)
> * [Node.js Examples](sql-api-nodejs-samples.md) > * [Python Examples](sql-api-python-samples.md)
-> * [.NET V2 SDK Examples (Legacy)](sql-api-dotnet-v2sdk-samples.md)
+> * [.NET v2 SDK Examples (Legacy)](sql-api-dotnet-v2sdk-samples.md)
> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db) > >
-The [azure-cosmos-dotnet-v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage) GitHub repository includes the latest .NET sample solutions to perform CRUD and other common operations on Azure Cosmos DB resources. If you're familiar with the previous version of the .NET SDK, you may be used to the terms collection and document. Because Azure Cosmos DB supports multiple API models, version 3.0 of the .NET SDK uses the generic terms "container" and "item". A container can be a collection, graph, or table. An item can be a document, edge/vertex, or row, and is the content inside a container. This article provides:
+The [azure-cosmos-dotnet-v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage) GitHub repository includes the latest .NET sample solutions. You use these solutions to perform CRUD (create, read, update, and delete) and other common operations on Azure Cosmos DB resources.
+
+If you're familiar with the previous version of the .NET SDK, you might be used to the terms collection and document. Because Azure Cosmos DB supports multiple API models, version 3.0 of the .NET SDK uses the generic terms *container* and *item*. A container can be a collection, graph, or table. An item can be a document, edge/vertex, or row, and is the content inside a container. This article provides:
* Links to the tasks in each of the example C# project files. * Links to the related API reference content. ## Prerequisites
-Visual Studio 2019 with the Azure development workflow installed
--- You can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
+- Visual Studio 2019 with the Azure development workflow installed. You can download and use the free [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
- The [Microsoft.Azure.cosmos NuGet package](https://www.nuget.org/packages/Microsoft.Azure.cosmos/)
+- The [Microsoft.Azure.Cosmos NuGet package](https://www.nuget.org/packages/Microsoft.Azure.cosmos/).
-An Azure subscription or free Cosmos DB trial account
+- An Azure subscription or free Azure Cosmos DB trial account.
-- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+ - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month, which you can use for paid Azure services.
+- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Your Visual Studio subscription gives you credits every month, which you can use for paid Azure services.
+ - [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)] > [!NOTE]
An Azure subscription or free Cosmos DB trial account
## Database examples
-The [RunDatabaseDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/DatabaseManagement/Program.cs#L65-L91) method of the sample *DatabaseManagement* project shows how to do the following tasks. To learn about Azure Cosmos databases before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
+The [RunDatabaseDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/DatabaseManagement/Program.cs#L65-L91) method of the sample *DatabaseManagement* project shows how to do the following tasks. To learn about Azure Cosmos DB databases before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
| Task | API reference | | | |
The [RunDatabaseDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/maste
## Container examples
-The [RunContainerDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L69-L89) method of the sample *ContainerManagement* project shows how to do the following tasks. To learn about Azure Cosmos containers before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
+The [RunContainerDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L69-L89) method of the sample *ContainerManagement* project shows how to do the following tasks. To learn about Azure Cosmos DB containers before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
| Task | API reference | | | |
The [RunContainerDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/mast
## Item examples
-The [RunItemsDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L119-L130) method of the sample *ItemManagement* project shows how to do the following tasks. To learn about Azure Cosmos items before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
+The [RunItemsDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L119-L130) method of the sample *ItemManagement* project shows how to do the following tasks. To learn about Azure Cosmos DB items before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
| Task | API reference | | | |
The [RunIndexDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/M
## Query examples
-The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L76-L96) method of the sample *Queries* project shows how to do the following tasks using the SQL query grammar, the LINQ provider with query, and Lambda. To learn about the SQL query reference in Azure Cosmos DB before you run the following samples, see [SQL query examples for Azure Cosmos DB](./sql-query-getting-started.md).
+The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L76-L96) method of the sample *Queries* project shows how to do the following tasks, by using the SQL query grammar, the LINQ provider with query, and Lambda. To learn about the SQL query reference in Azure Cosmos DB before you run the following samples, see [SQL query examples for Azure Cosmos DB](./sql-query-getting-started.md).
| Task | API reference | | | |
The [RunBasicChangeFeed](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/ma
| [Basic change feed functionality](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L91-L119) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) | | [Read change feed from a specific time](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L127-L162) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) | | [Read change feed from the beginning](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L170-L198) |[ChangeFeedProcessorBuilder.WithStartTime(DateTime)](/dotnet/api/microsoft.azure.cosmos.changefeedprocessorbuilder.withstarttime) |
-| [Migrate from change feed processor to change feed in V3 SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L256-L333) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
+| [Migrate from change feed processor to change feed in v3 SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L256-L333) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
## Server-side programming examples
The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/M
| [Execute a stored procedure](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ServerSideScripts/Program.cs#L135) |[Scripts.ExecuteStoredProcedureAsync](/dotnet/api/microsoft.azure.cosmos.scripts.scripts.executestoredprocedureasync) | | [Delete a stored procedure](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ServerSideScripts/Program.cs#L351) |[Scripts.DeleteStoredProcedureAsync](/dotnet/api/microsoft.azure.cosmos.scripts.scripts.deletestoredprocedureasync) |
-## Custom Serialization
+## Custom serialization
-The [SystemTextJson](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/Program.cs) sample project shows how to use a custom serializer when initializing a new `CosmosClient` object. The sample also includes [a custom `CosmosSerializer` class](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/CosmosSystemTextJsonSerializer.cs) which leverages `System.Text.Json` for serialization and deserialization.
+The [SystemTextJson](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/Program.cs) sample project shows how to use a custom serializer when you're initializing a new `CosmosClient` object. The sample also includes [a custom `CosmosSerializer` class](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/CosmosSystemTextJsonSerializer.cs), which uses `System.Text.Json` for serialization and deserialization.
## Next steps Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units by using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+
+* If you know typical request rates for your current database workload, read about [estimating request units by using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
Title: Troubleshoot Azure Cosmos DB slow requests with the .NET SDK
-description: Learn how to diagnose and fix slow requests when using Azure Cosmos DB .NET SDK.
+ Title: Troubleshoot slow requests in Azure Cosmos DB .NET SDK
+description: Learn how to diagnose and fix slow requests when you use Azure Cosmos DB .NET SDK.
-# Diagnose and troubleshoot Azure Cosmos DB .NET SDK slow requests
+# Diagnose and troubleshoot slow requests in Azure Cosmos DB .NET SDK
+ [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-Azure Cosmos DB slow requests can happen for multiple reasons such as request throttling or the way your application is designed. This article explains the different root causes for this issue.
+In Azure Cosmos DB, you might notice slow requests. Delays can happen for multiple reasons, such as request throttling or the way your application is designed. This article explains the different root causes for this problem.
-## Request rate too large (429 throttles)
+## Request rate too large
-Request throttling is the most common reason for slow requests. Azure Cosmos DB will throttle requests if they exceed the allocated RUs for the database or container. The SDK has built-in logic to retry these requests. The [request rate too large](troubleshoot-request-rate-too-large.md#how-to-investigate) troubleshooting article explains how to check if the requests are being throttled and how to scale your account to avoid these issues in the future.
+Request throttling is the most common reason for slow requests. Azure Cosmos DB throttles requests if they exceed the allocated request units for the database or container. The SDK has built-in logic to retry these requests. The [request rate too large](troubleshoot-request-rate-too-large.md#how-to-investigate) troubleshooting article explains how to check if the requests are being throttled. The article also discusses how to scale your account to avoid these problems in the future.
## Application design
-If your application doesn't follow the SDK best practices, it can result in different issues that will cause slow or failed requests. Follow the [.NET SDK best practices](performance-tips-dotnet-sdk-v3-sql.md) for the best performance.
+When you design your application, [follow the .NET SDK best practices](performance-tips-dotnet-sdk-v3-sql.md) for the best performance. If your application doesn't follow the SDK best practices, you might get slow or failed requests.
Consider the following when developing your application:
-* Application should be in the same region as your Azure Cosmos DB account.
-* Singleton instance of the SDK instance. The SDK has several caches that have to be initialized which may slow down the first few requests.
-* Use Direct + TCP connectivity mode
-* Avoid High CPU. Make sure to look at Max CPU and not average, which is the default for most logging systems. Anything above roughly 40% can increase the latency.
+
+* The application should be in the same region as your Azure Cosmos DB account.
+* The SDK has several caches that have to be initialized, which might slow down the first few requests.
+* The connectivity mode should be direct and TCP, as shown in the sketch after this list.
+* Avoid high CPU. Make sure to look at the maximum CPU and not the average, which is the default for most logging systems. Anything above roughly 40 percent can increase the latency.
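A minimal client-initialization sketch that reflects these recommendations follows. The endpoint, key, and region values are placeholders, and a single shared `CosmosClient` per application is assumed.

```csharp
using Microsoft.Azure.Cosmos;

// Sketch: one CosmosClient per application, pinned to the application's region,
// using direct TCP connectivity. Endpoint, key, and region are placeholders.
CosmosClient client = new CosmosClient(
    "<account-endpoint>",
    "<account-key>",
    new CosmosClientOptions
    {
        ApplicationRegion = Regions.WestUS2,       // same region as the application and the account
        ConnectionMode = ConnectionMode.Direct     // direct mode over TCP
    });
```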
## Metadata operations
-Do not verify a Database and/or Container exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. The validation should only be done on application startup when it is necessary, if you expect them to be deleted (otherwise it's not needed). These metadata operations will generate extra end-to-end latency, have no SLA, and their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429) that do not scale like data operations.
+If you need to verify that a database or container exists, don't do so by calling `Create...IfNotExistsAsync` or `Read...Async` before doing an item operation. Do this validation only on application startup, and only when it's necessary (that is, when you expect the database or container to have been deleted). These metadata operations generate extra latency, have no service-level agreement (SLA), and have their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429). They don't scale like data operations.
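As an illustration of that pattern (continuing the previous sketch, with placeholder names), create or validate the metadata once at startup, cache the `Container` reference, and use only the cached reference on the hot path:

```csharp
// Sketch: run once at application startup, then cache and reuse the Container reference.
Database database = await client.CreateDatabaseIfNotExistsAsync("databaseName");
Container container = await database.CreateContainerIfNotExistsAsync("containerName", "/partitionKey");

// Hot path: use the cached reference directly, with no Create...IfNotExistsAsync or Read...Async checks.
var item = new { id = "item-1", partitionKey = "value-1" };   // placeholder item
await container.CreateItemAsync(item, new PartitionKey(item.partitionKey));
```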
## Slow requests on bulk mode
Do not verify a Database and/or Container exists by calling `Create...IfNotExist
## <a name="capture-diagnostics"></a>Capture the diagnostics
-All the responses in the SDK including `CosmosException` have a Diagnostics property. This property records all the information related to the single request including if there were retries or any transient failures.
+All the responses in the SDK, including `CosmosException`, have a `Diagnostics` property. This property records all the information related to the single request, including if there were retries or any transient failures.
-The Diagnostics are returned as a string. The string changes with each version as it is improved to better troubleshooting different scenarios. With each version of the SDK, the string will have breaking changes to the formatting. Do not parse the string to avoid breaking changes. The following code sample shows how to read diagnostic logs using the .NET SDK:
+The diagnostics are returned as a string. The string changes with each version, as it's improved for troubleshooting different scenarios. With each version of the SDK, the string will have breaking changes to the formatting. Don't parse the string to avoid breaking changes. The following code sample shows how to read diagnostic logs by using the .NET SDK:
```c#
try
{
    ItemResponse<MyItem> response = await container.ReadItemAsync<MyItem>(id, partitionKey);
    if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan)
    {
        // Log diagnostics for slow requests (item type, identifiers, and threshold are placeholders).
        Console.WriteLine(response.Diagnostics.ToString());
    }
}
catch (CosmosException cosmosException)
{
    // Failed requests carry diagnostics on the exception.
    Console.WriteLine(cosmosException.Diagnostics.ToString());
}
```
+## Diagnostics in version 3.19 and later
-## Diagnostics in version 3.19 and higher
-The JSON structure has breaking changes with each version of the SDK. This makes it unsafe to be parsed. The JSON represents a tree structure of the request going through the SDK. This covers a few key things to look at:
+The JSON structure has breaking changes with each version of the SDK. This makes it unsafe to be parsed. The JSON represents a tree structure of the request going through the SDK. The following sections cover a few key things to look at.
### <a name="cpu-history"></a>CPU history
-High CPU utilization is the most common cause for slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries where the requests might do multiple connections for a single query.
-# [3.21 or greater SDK](#tab/cpu-new)
+High CPU utilization is the most common cause for slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where the requests might do multiple connections for a single query.
+
+# [3.21 or later SDK](#tab/cpu-new)
-The timeouts will contain *Diagnostics*, which contain:
+The timeouts include diagnostics, which contain the following, for example:
```json "systemHistory": [
The timeouts will contain *Diagnostics*, which contain:
] ```
-* If the `cpu` values are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
-* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case the solution is to investigate the source/s of the thread starvation (potentially locked threads), or scale the machine/s to a larger resource size.
-* If the `dateUtc` time in-between measurements is not approximately 10 seconds, it also would indicate contention on the thread pool. CPU is measured as an independent Task that is enqueued in the thread pool every 10 seconds, if the time in-between measurement is longer, it would indicate that the async Tasks are not able to be processed in a timely fashion. Most common scenarios are when doing [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait) in the application code.
+* If the `cpu` values are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
+* If the `dateUtc` time between measurements is not approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent task that is enqueued in the thread pool every 10 seconds. If the time between measurements is longer, it indicates that the async tasks aren't able to be processed in a timely fashion. The most common scenario is when your application code is [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
# [Older SDK](#tab/cpu-old)
-If the error contains `TransportException` information, it might contain also `CPU History`:
+If the error contains `TransportException` information, it might also contain `CPU history`:
``` CPU history:
CPU history:
CPU count: 8) ```
-* If the CPU measurements are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
-* If the CPU measurements are not happening every 10 seconds (e.g., gaps or measurement times indicate larger times in between measurements), the cause is thread starvation. In this case the solution is to investigate the source/s of the thread starvation (potentially locked threads), or scale the machine/s to a larger resource size.
+* If the CPU measurements are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the CPU measurements are not happening every 10 seconds (for example, there are gaps, or measurement times indicate longer times in between measurements), the cause is thread starvation. In this case the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
+
-#### Solution:
-The client application that uses the SDK should be scaled up or out.
+#### Solution
+The client application that uses the SDK should be scaled up or out.
### <a name="httpResponseStats"></a>HttpResponseStats
-HttpResponseStats are request going to [gateway](sql-sdk-connection-modes.md). Even in Direct mode the SDK gets all the meta data information from the gateway.
-
-If the request is slow, first verify all the suggestions above don't yield results.
-If it is still slow different patterns point to different issues:
+`HttpResponseStats` are requests that go to the [gateway](sql-sdk-connection-modes.md). Even in direct mode, the SDK gets all the metadata information from the gateway.
-Single store result for a single request
+If the request is slow, first verify that none of the previous suggestions yield the desired results. If it's still slow, different patterns point to different problems. The following table provides more details.
| Number of requests | Scenario | Description | |-|-|-|
-| Single to all | Request Timeout or HttpRequestExceptions | Points to [SNAT Port exhaustion](troubleshoot-dot-net-sdk.md#snat) or lack of resources on the machine to process request in time. |
-| Single or small percentage (SLA is not violated) | All | A single or small percentage of slow requests can be caused by several different transient issues and should be expected. |
-| All | All | Points to an issue with the infrastructure or networking. |
-| SLA Violated | No changes to application and SLA dropped | Points to an issue with the Azure Cosmos DB service. |
+| Single to all | Request timeout or `HttpRequestExceptions` | Points to [SNAT port exhaustion](troubleshoot-dot-net-sdk.md#snat), or a lack of resources on the machine to process the request in time. |
+| Single or small percentage (SLA isn't violated) | All | A single or small percentage of slow requests can be caused by several different transient problems, and should be expected. |
+| All | All | Points to a problem with the infrastructure or networking. |
+| SLA violated | No changes to application, and SLA dropped. | Points to a problem with the Azure Cosmos DB service. |
```json "HttpResponseStats": [
Single store result for a single request
``` ### <a name="storeResult"></a>StoreResult
-StoreResult represents a single request to Azure Cosmos DB using Direct mode with TCP protocol.
-If it is still slow different patterns point to different issues:
+`StoreResult` represents a single request to Azure Cosmos DB, by using direct mode with the TCP protocol.
-Single store result for a single request
+If it's still slow, different patterns point to different problems. The following table provides more details.
| Number of requests | Scenario | Description | |-|-|-|
-| Single to all | StoreResult contains TransportException | Points to [SNAT Port exhaustion](troubleshoot-dot-net-sdk.md#snat) or lack of resources on the machine to process request in time. |
-| Single or small percentage (SLA is not violated) | All | A single or small percentage of slow requests can be caused by several different transient issues and should be expected. |
-| All | All | An issue with the infrastructure or networking. |
-| SLA Violated | Requests contain multiple failure error codes like 410 and IsValid is true | Points to an issue with the Cosmos DB service |
-| SLA Violated | Requests contain multiple failure error codes like 410 and IsValid is false | Points to an issue with the machine |
-| SLA Violated | StorePhysicalAddress is the same with no failure status code | Likely an issue with Cosmos DB service |
-| SLA Violated | StorePhysicalAddress have the same partition ID but different replica IDs with no failure status code | Likely an issue with the Cosmos DB service |
-| SLA Violated | StorePhysicalAddress are random with no failure status code | Points to an issue with the machine |
+| Single to all | `StoreResult` contains `TransportException` | Points to [SNAT port exhaustion](troubleshoot-dot-net-sdk.md#snat), or a lack of resources on the machine to process the request in time. |
+| Single or small percentage (SLA isn't violated) | All | A single or small percentage of slow requests can be caused by several different transient problems, and should be expected. |
+| All | All | A problem with the infrastructure or networking. |
+| SLA violated | Requests contain multiple failure error codes, like `410` and `IsValid is true`. | Points to a problem with the Azure Cosmos DB service. |
+| SLA violated | Requests contain multiple failure error codes, like `410` and `IsValid is false`. | Points to a problem with the machine. |
+| SLA violated | `StorePhysicalAddress` are the same, with no failure status code. | Likely a problem with Azure Cosmos DB. |
+| SLA violated | `StorePhysicalAddress` have the same partition ID, but different replica IDs, with no failure status code. | Likely a problem with Azure Cosmos DB. |
+| SLA violated | `StorePhysicalAddress` is random, with no failure status code. | Points to a problem with the machine. |
-Multiple StoreResults for single request:
+For multiple store results for a single request, be aware of the following:
-* Strong and bounded staleness consistency will always have at least two store results
-* Check the status code of each StoreResult. The SDK retries automatically on multiple different [transient failures](troubleshoot-dot-net-sdk-request-timeout.md). The SDK is constantly being improved to cover more scenarios.
+* Strong consistency and bounded staleness consistency always have at least two store results.
+* Check the status code of each `StoreResult`. The SDK retries automatically on multiple different [transient failures](troubleshoot-dot-net-sdk-request-timeout.md). The SDK is constantly improved to cover more scenarios.
### <a name="rntbdRequestStats"></a>RntbdRequestStats

Shows the time for the different stages of sending and receiving a request in the transport layer.
-* ChannelAcquisitionStarted: The time to get or create a new connection. New connections can be created for numerous different regions. For example, a connection was unexpectedly closed or too many requests were getting sent through the existing connections so a new connection is being created.
-* Pipelined time is large points to possibly a large request.
-* Transit time is large, which leads to a networking issue. Compare this number to the `BELatencyInMs`. If the BELatencyInMs is small, then the time was spent on the network and not on the Azure Cosmos DB service.
-* Received time is large this points to a thread starvation issue. This the time between having the response and returning the result.
+* `ChannelAcquisitionStarted`: The time to get or create a new connection. New connections can be created for numerous reasons. For example, a connection was unexpectedly closed, or too many requests were being sent through the existing connections, so a new connection is created.
+* A large *pipelined time* might be caused by a large request.
+* A large *transit time* points to a networking problem. Compare this number to `BELatencyInMs`. If `BELatencyInMs` is small, then the time was spent on the network, and not on the Azure Cosmos DB service.
+* A large *received time* might be caused by a thread starvation problem. This is the time between having the response and returning the result.
```json "StoreResult": {
Show the time for the different stages of sending and receiving a request in the
``` ### Failure rate violates the Azure Cosmos DB SLA
-Contact [Azure Support](https://aka.ms/azure-support).
+
+Contact [Azure support](https://aka.ms/azure-support).
## Next steps
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+
+* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) problems when you use the Azure Cosmos DB .NET SDK.
* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 02/15/2022 Last updated : 03/08/2022 # Azure Policy built-in definitions for Data Factory (Preview)
data-lake-analytics Data Lake Analytics U Sql Develop With Python R Csharp In Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-develop-with-python-r-csharp-in-vscode.md
Register Python and R extension assemblies for your ADL account.
3. Select **Install U-SQL Extensions**. 4. A confirmation message is displayed after the U-SQL extensions are installed.
- ![Set up the environment for python and R](./media/data-lake-analytics-data-lake-tools-for-vscode/setup-the-enrionment-for-python-and-r.png)
+ ![Set up the environment for Python and R](./media/data-lake-analytics-data-lake-tools-for-vscode/setup-the-enrionment-for-python-and-r.png)
> [!Note] > For the best experience with the Python and R language services, install the VSCode Python and R extensions.
data-lake-analytics Data Lake Analytics U Sql Python Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-python-extensions.md
All the standard Python modules are included.
### Additional Python modules
-Besides the standard Python libraries, several commonly used python libraries are included:
+Besides the standard Python libraries, several commonly used Python libraries are included:
* pandas * numpy
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
databox-online Azure Stack Edge Pro 2 Deploy Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-activate.md
Before you configure and set up your Azure Stack Edge Pro 2, make sure that:
![Screenshot of local web UI with "Activate" highlighted in the Activation tile.](./media/azure-stack-edge-pro-2-deploy-activate/activate-1.png)
-3. In the **Activate** pane, enter the **Activation key** that you got in [Get the activation key for Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-prep.md#get-the-activation-key).
+3. In the **Activate** pane, enter the **Activation key** from [Get the activation key for Azure Stack Edge](azure-stack-edge-gpu-deploy-prep.md#get-the-activation-key).
4. Select **Activate**.
In this tutorial, you learned about:
To learn how to deploy workloads on your Azure Stack Edge device, see: > [!div class="nextstepaction"]
-> [Configure compute to deploy IoT Edge and Kubernetes workloads on Azure Stack Edge](./azure-stack-edge-pro-2-deploy-configure-compute.md)
+> [Configure compute to deploy IoT Edge and Kubernetes workloads on Azure Stack Edge Pro 2](./azure-stack-edge-pro-2-deploy-configure-compute.md)
databox-online Azure Stack Edge Pro 2 Deploy Configure Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates.md
Use these steps to regenerate and download the Azure Stack Edge Pro 2 device cer
- Make sure that the status of all the certificates is shown as **Valid**.
- ![Screenshot of newly generated certificates on the Certificates page of an Azure Stack Edge device. Certificates with Valid state are highlighted.](./media/azure-stack-edge-gpu-deploy-configure-certificates/generate-certificate-6.png)
+ ![Screenshot of newly generated certificates on the Certificates page of an Azure Stack Edge device. Certificates with Valid state are highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/generate-certificate-6.png)
- You can select a specific certificate name, and view the certificate details.
Follow these steps to upload your own certificates including the signing chain.
![Screenshot of the Add Certificate pane for the Local Web UI certificate for an Azure Stack Edge device. The certificate type and certificate entries highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/add-certificate-5.png)
- At any time, you can select a certificate and view the details to ensure that these match with the certificate that you uploaded.
+ The certificate page should update to reflect the newly added certificates. At any time, you can select a certificate and view its details to ensure that they match the certificate that you uploaded.
- ![Screenshot of the Add Certificate pane for a node certificate for an Azure Stack Edge device. The certificate type and certificate entries highlighted.](./media/azure-stack-edge-gpu-deploy-configure-certificates/add-certificate-6.png)
+ ![Screenshot of the Add Certificate pane for a node certificate for an Azure Stack Edge device. The certificate type and certificate entries highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/add-certificate-6.png)
- The certificate page should update to reflect the newly added certificates.
-
- ![Screenshot of the Certificates page in the local web UI for an Azure Stack Edge device. A newly added set of certificates is highlighted.](./media/azure-stack-edge-gpu-deploy-configure-certificates/add-certificate-7.png)
> [!NOTE] > Except for the Azure public cloud, signing chain certificates must be brought in before activation for all other cloud configurations (Azure Government or Azure Stack).
In this tutorial, you learn about:
> * Configure certificates for the physical device > * Configure encryption-at-rest
-To learn how to activate your Azure Stack Edge Pro GPU device, see:
+To learn how to activate your Azure Stack Edge Pro 2 device, see:
> [!div class="nextstepaction"]
-> [Activate Azure Stack Edge Pro GPU device](./azure-stack-edge-gpu-deploy-activate.md)
+> [Activate Azure Stack Edge Pro 2 device](./azure-stack-edge-pro-2-deploy-activate.md)
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
na Previously updated : 02/15/2022 Last updated : 03/08/2022
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022 # Azure Policy built-in definitions for Microsoft Defender for Cloud
defender-for-cloud Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Microsoft Defender for Cloud description: Sample Azure Resource Graph queries for Microsoft Defender for Cloud showing use of resource types and tables to access Microsoft Defender for Cloud related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
and [Resource Graph samples by Table](../governance/resource-graph/samples/sampl
## Sample queries ## Next steps
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/automate-add-lab-user.md
The object that is being granted access can be specified by the `objectId`, `sig
The following Azure CLI example shows you how to add a person to the DevTest Labs User role for the specified Lab. ```azurecli
-az role assignment create --roleName "DevTest Labs User" --signInName <email@company.com> --resource-name "<Lab Name>" --resource-type "Microsoft.DevTestLab/labs" --resource-group "<Resource Group Name>"
+az role assignment create --assignee "<email@company.com>" --role "DevTest Labs User" --scope "/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.DevTestLab/labs/<Lab Name>"
``` ## Next steps
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-security.md
# Secure Azure Digital Twins
-This article explains Azure Digital Twins security best practices. It covers roles and permissions, managed identity, private network access with Azure Private Link (preview), service tags, encryption of data at rest, and Cross-Origin Resource Sharing (CORS).
+This article explains Azure Digital Twins security best practices. It covers roles and permissions, managed identity, private network access with Azure Private Link, service tags, encryption of data at rest, and Cross-Origin Resource Sharing (CORS).
For security, Azure Digital Twins enables precise access control over specific data, resources, and actions in your deployment. It does so through a granular role and permission management strategy called [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md).
You can use a system-assigned managed identity for your Azure Digital Instance t
For instructions on how to enable a system-managed identity for Azure Digital Twins and use it to route events, see [Route events with a managed identity](how-to-route-with-managed-identity.md).
-## Private network access with Azure Private Link (preview)
+## Private network access with Azure Private Link
[Azure Private Link](../private-link/private-link-overview.md) is a service that enables you to access Azure resources (like [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Storage](../storage/common/storage-introduction.md), and [Azure Cosmos DB](../cosmos-db/introduction.md)) and Azure-hosted customer and partner services over a private endpoint in your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
The private endpoint uses an IP address from your Azure VNet address space. Netw
Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure, as well as avoid data exfiltration from your VNet.
-For instructions on how to set up Private Link for Azure Digital Twins, see [Enable private access with Private Link (preview)](./how-to-enable-private-link.md).
+For instructions on how to set up Private Link for Azure Digital Twins, see [Enable private access with Private Link](./how-to-enable-private-link.md).
### Design considerations
digital-twins How To Enable Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-enable-private-link.md
# Mandatory fields. Title: Enable private access with Private Link (preview)
+ Title: Enable private access with Private Link
description: Learn how to enable private access for Azure Digital Twins solutions with Private Link.
ms.devlang: azurecli
#
-# Enable private access with Private Link (preview)
+# Enable private access with Private Link
-This article describes the different ways to [enable Private Link with a private endpoint for an Azure Digital Twins instance](concepts-security.md#private-network-access-with-azure-private-link-preview) (currently in preview). Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
+This article describes the different ways to [enable Private Link with a private endpoint for an Azure Digital Twins instance](concepts-security.md#private-network-access-with-azure-private-link). Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
Here are the steps that are covered in this article: 1. Turn on Private Link and configure a private endpoint for an Azure Digital Twins instance.
In this section, you'll enable Private Link with a private endpoint for an Azure
1. First, navigate to the [Azure portal](https://portal.azure.com) in a browser. Bring up your Azure Digital Twins instance by searching for its name in the portal search bar.
-1. Select **Networking (preview)** in the left-hand menu.
+1. Select **Networking** in the left-hand menu.
1. Switch to the **Private endpoint connections** tab.
In this section, you'll see how to view, edit, and delete a private endpoint aft
# [Portal](#tab/portal)
-Once a private endpoint has been created for your Azure Digital Twins instance, you can view it in the **Networking (preview)** tab for your Azure Digital Twins instance. This page will show all the private endpoint connections associated with the instance.
+Once a private endpoint has been created for your Azure Digital Twins instance, you can view it in the **Networking** tab for your Azure Digital Twins instance. This page will show all the private endpoint connections associated with the instance.
:::image type="content" source="media/how-to-enable-private-link/view-endpoint-digital-twins.png" alt-text="Screenshot of the Azure portal showing the Networking page for an existing Azure Digital Twins instance with one private endpoint." lightbox="media/how-to-enable-private-link/view-endpoint-digital-twins.png":::
You can update the value of the network flag using the [Azure portal](https://po
To disable or enable public network access in the [Azure portal](https://portal.azure.com), open the portal and navigate to your Azure Digital Twins instance.
-1. Select **Networking (preview)** in the left-hand menu.
+1. Select **Networking** in the left-hand menu.
1. In the **Public access** tab, set **Allow public network access to** either **Disabled** or **All networks**.
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
You can use the **Query Explorer** panel to run [queries](concepts-query-languag
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel.png" alt-text="Screenshot of Azure Digital Twins Explorer. The Query Explorer panel is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel.png":::
-Enter the query you want to run and select the **Run Query** button. Doing so will load the query results in the **Twin Graph** panel.
+Enter the query you want to run. If you want to enter a query in multiple lines, you can use SHIFT + ENTER to add a new line to the query box.
+
+Select the **Run Query** button to display query results in the **Twin Graph** panel.
>[!NOTE] > Query results containing relationships can only be rendered in the **Twin Graph** panel if the results include at least one twin as well. While queries that return only relationships are possible in Azure Digital Twins, you can only view them in Azure Digital Twins Explorer by using the [Output panel](#accessibility-and-advanced-settings).
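If you prefer to run the same kind of query programmatically rather than in the panel, a minimal sketch using the `azure-digitaltwins-core` and `azure-identity` Python packages might look like the following (the instance URL and the `dtmi:example:Factory;1` model ID are placeholders):

```python
# Minimal sketch: run an Azure Digital Twins query from Python.
# Assumes `pip install azure-digitaltwins-core azure-identity` and that
# your identity has data-plane access to the instance.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

url = "https://<your-instance>.api.<region>.digitaltwins.azure.net"  # placeholder endpoint
client = DigitalTwinsClient(url, DefaultAzureCredential())

# Same query syntax you would type into the Query Explorer panel.
query = "SELECT * FROM digitaltwins WHERE IS_OF_MODEL('dtmi:example:Factory;1')"
for twin in client.query_twins(query):
    print(twin["$dtId"])
```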
Once the two twins are simultaneously selected, right-click the target twin to b
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-add-relationship.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Graph panel. The FactoryA and Consumer twins are selected, and a menu shows the option to Add relationships." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-add-relationship.png":::
-Doing so will bring up the **Create Relationship** dialog, which shows the source twin and target twin of the relationship, followed by a **Relationship** dropdown menu that contains the types of relationship that the source twin can have (defined in its DTDL model). Select an option for the relationship type, and **Save** the new relationship.
+Doing so will bring up the **Create Relationship** dialog, populated with the source twin and target twin of the relationship (you can also use the **Swap Relationship** icon to switch them). There is a **Relationship** dropdown menu that contains the types of relationship that the source twin can have, according to its DTDL model. Select an option for the relationship type, and **Save** the new relationship.
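The same relationship can also be created outside the Explorer. Here is a minimal sketch with the `azure-digitaltwins-core` Python package, assuming the `FactoryA` and `Consumer` twins shown above and a hypothetical `contains` relationship name defined in the source twin's DTDL model:

```python
# Minimal sketch: create a relationship between two existing twins.
# Assumes azure-digitaltwins-core and azure-identity are installed; the
# endpoint, twin IDs, and "contains" relationship name are illustrative.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

url = "https://<your-instance>.api.<region>.digitaltwins.azure.net"  # placeholder endpoint
client = DigitalTwinsClient(url, DefaultAzureCredential())

relationship_id = "FactoryA-contains-Consumer"  # any unique ID you choose
relationship = {
    "$relationshipId": relationship_id,
    "$sourceId": "FactoryA",
    "$relationshipName": "contains",   # must exist in FactoryA's DTDL model
    "$targetId": "Consumer",
}
client.upsert_relationship("FactoryA", relationship_id, relationship)
```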
### Edit twin and relationship properties
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Azure Database Migration Service prerequisites that are common across all suppor
* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned to one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription. > [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned to one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription (required if creating a new DMS service). > [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
To complete this tutorial, you need to:
> [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
-1. If you picked the first option for network share, provide details of your source SQL Server, source backup location, target database name and Azure storage account for the backup files to be uploaded to.
+* For backups located on a network share, provide the following details about your source SQL Server, the source backup location, the target database name, and the Azure storage account that the backup files will be uploaded to.
|Field |Description | ||-|
To complete this tutorial, you need to:
|**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. | |**Storage account details** |The resource group and storage account where backup files will be uploaded to. You do not need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process. |
-1. If you picked the second option for backups stored in an Azure Blob Container specify the **Target database name**, **Resource group**, **Azure storage account**, **Blob container** and **Last backup file** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You do not need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+* For backups stored in an Azure storage blob container, specify the **Target database name**, **Resource group**, **Azure storage account**, **Blob container**, and **Last backup file** from the corresponding drop-down lists.
+
+ |Field |Description |
+ ||-|
+ |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+ |**Storage account details** |The resource group and storage account where backup files are located. |
+ |**Last backup file** |The file name of the last backup of the database that you are migrating. |
+
 > [!IMPORTANT] > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then the source won't be able to access the file share using its FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
To complete this tutorial, you need to:
* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned to one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription. > [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
To complete this tutorial, you need to:
3. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. > [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
-4. After selecting the backup location, provide details of your source SQL Server and source backup location.
+* For backups located on a network share, provide the following details about your source SQL Server, the source backup location, the target database name, and the Azure storage account that the backup files will be uploaded to.
|Field |Description | ||-|
To complete this tutorial, you need to:
|**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. | |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
-5. Specify the **Azure storage account** by selecting the **Subscription**, **Location**, and **Resource Group** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You don't need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+* For backups stored in an Azure storage blob container, specify the **Target database name**, **Resource group**, **Azure storage account**, **Blob container**, and **Last backup file** from the corresponding drop-down lists.
+
+ |Field |Description |
+ ||-|
+ |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+ |**Storage account details** |The resource group, storage account and container where backup files are located. |
+ |**Last backup file** |The file name of the last backup of the database that you are migrating. |
+
> [!IMPORTANT] > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then the source won't be able to access the file share using its FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
To complete this tutorial, you need to:
* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned to one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription. > [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
To complete this tutorial, you need to:
3. In step 5, select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. > [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
-4. After selecting the backup location, provide details of your source SQL Server and source backup location.
+
+* For backups located on a network share, provide the following details about your source SQL Server, the source backup location, the target database name, and the Azure storage account that the backup files will be uploaded to.
|Field |Description | ||-|
To complete this tutorial, you need to:
|**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. | |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
-5. Specify the **Azure storage account** by selecting the **Subscription**, **Location**, and **Resource Group** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You don't need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+* For backups stored in an Azure storage blob container, specify the **Target database name**, **Resource group**, **Azure storage account**, and **Blob container** from the corresponding drop-down lists.
+
+ |Field |Description |
+ ||-|
+ |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+ |**Storage account details** |The resource group, storage account and container where backup files are located. |
+
 6. Select **Next** to continue. > [!IMPORTANT] > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then the source won't be able to access the file share using its FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md
In comparison to the dedicated offering, the premium tier provides the following
- Scale far more elastically and quicker - PUs can be dynamically adjusted
-Therefore, the premium tier is often a more cost effective option for mid-range (<120MB/sec) throughput requirements, especially with changing loads throughout the day or week, when compared to the dedicated tier.
+Therefore, the premium tier is often a more cost-effective option than the dedicated tier for event streaming workloads up to 160 MB/sec (per namespace), especially when loads change throughout the day or week.
For the extra robustness gained by availability-zone support, the minimal deployment scale for the dedicated tier is 8 capacity units (CU), but you'll have availability zone support in the premium tier from the first PU in all availability zone regions.
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-scalability.md
For more information about the auto-inflate feature, see [Automatically scale th
[Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit* (PU). You can purchase 1, 2, 4, 8, or 16 processing units for each Event Hubs Premium namespace.
-How much you can ingest and stream with a processing unit depends on various factors such as your producers, consumers, the rate at which you're ingesting and processing, and much more. One processing unit can approximately offer core capacity of ~5-10 MB/s ingress and 10-20 MB/s egress, given that we have sufficient partitions so that storage is not a throttling factor.
+How much you can ingest and stream with a processing unit depends on various factors such as your producers, consumers, the rate at which you're ingesting and processing, and much more.
+
+For example, an Event Hubs Premium namespace with 1 PU and 1 event hub (100 partitions) can offer approximate core capacity of ~5-10 MB/s ingress and 10-20 MB/s egress for both AMQP and Kafka workloads.
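As a back-of-the-envelope reading of those numbers (a rough sizing sketch only, using the conservative ~5 MB/s-per-PU ingress figure above, not an official sizing formula), you could estimate a starting PU count like this:

```python
# Rough sizing sketch: pick the smallest purchasable PU count whose
# conservative ingress estimate covers a target throughput.
import math

PURCHASABLE_PUS = [1, 2, 4, 8, 16]        # PU counts you can buy per namespace
CONSERVATIVE_INGRESS_MBPS_PER_PU = 5      # low end of the ~5-10 MB/s range above

def estimate_pus(target_ingress_mbps: float) -> int:
    needed = math.ceil(target_ingress_mbps / CONSERVATIVE_INGRESS_MBPS_PER_PU)
    for size in PURCHASABLE_PUS:
        if size >= needed:
            return size
    raise ValueError("Target exceeds a single premium namespace; consider the dedicated tier.")

print(estimate_pus(35))   # -> 8 PUs for ~35 MB/s ingress at the conservative estimate
```

Actual throughput depends on partition count, payload size, and consumer behavior, so validate against your own workload before settling on a PU count.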
To learn about configuring PUs for a premium tier namespace, see [Configure processing units](configure-processing-units-premium-namespace.md).
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
genomics Quickstart Input Bam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-input-bam.md
output_storage_account_container: outputs
Submit the `config.txt` file with this invocation: `msgen submit -f config.txt` ## Next steps
-In this article, you uploaded a BAM file into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. For additional information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
+In this article, you uploaded a BAM file into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` Python client. For additional information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
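If you submit workflows repeatedly, the `msgen submit -f config.txt` invocation shown above can be wrapped in a small script. A minimal sketch, assuming the `msgen` client is installed and on your PATH:

```python
# Minimal sketch: invoke the msgen CLI from Python to submit a workflow.
# Assumes `msgen` is installed and the config file already contains your
# Genomics account URL, access key, and storage settings.
import subprocess

def submit_workflow(config_path: str = "config.txt") -> str:
    result = subprocess.run(
        ["msgen", "submit", "-f", config_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(submit_workflow())
```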
genomics Quickstart Input Multiple https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-input-multiple.md
output_storage_account_container: outputs
Submit the `config.txt` file with this invocation: `msgen submit -f config.txt` ## Next steps
-In this article, you uploaded multiple BAM files or paired FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. For more information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see the [FAQ](frequently-asked-questions-genomics.yml).
+In this article, you uploaded multiple BAM files or paired FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` Python client. For more information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see the [FAQ](frequently-asked-questions-genomics.yml).
genomics Quickstart Input Pair Fastq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-input-pair-fastq.md
output_storage_account_container: outputs
Submit the `config.txt` file with this invocation: `msgen submit -f config.txt` ## Next steps
-In this article, you uploaded a pair of FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. To learn more about workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
+In this article, you uploaded a pair of FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` Python client. To learn more about workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/cis-azure-1-1-0.md
- Title: CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample
-description: Overview of the CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/08/2021--
-# CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample
-
-The CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample provides governance guardrails
-using [Azure Policy](../../policy/overview.md) that help you assess specific CIS Microsoft Azure
-Foundations Benchmark recommendations. This blueprint helps customers deploy a core set of policies
-for any Azure-deployed architecture that must implement CIS Microsoft Azure Foundations Benchmark
-v1.1.0 recommendations.
-
-## Recommendation mapping
-
-The [Azure Policy recommendation mapping](../../policy/samples/cis-azure-1-1-0.md) provides details
-on policy definitions included within this blueprint and how these policy definitions map to the
-**recommendations** in CIS Microsoft Azure Foundations Benchmark v1.1.0. When assigned to an
-architecture, resources are evaluated by Azure Policy for non-compliance with assigned policy
-definitions. For more information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample,
-the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **CIS Microsoft Azure Foundations Benchmark v1.1.0** blueprint sample under _Other
- Samples_ and select **Use this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the CIS Microsoft Azure Foundations
- Benchmark blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful for if you make a
- modification later. Provide **Change notes** such as "First version published from the CIS
- Microsoft Azure Foundations Benchmark blueprint sample." Then select **Publish** at the bottom of
- the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since
- they're defined during the assignment of the blueprint. For a full list or artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|Audit CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations and deploy specific supporting VM Extensions|Policy assignment|List of regions where Network Watcher should be enabled|A semicolon-separated list of regions. To see a complete list of regions use Get-AzLocation. Ex: eastus; eastus2|
-|Audit CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations and deploy specific supporting VM Extensions|Policy assignment|List of virtual machine extensions that are approved for use|A semicolon-separated list of extensions. To see a complete list of virtual machine extensions, use Get-AzVMExtensionImage. Ex: AzureDiskEncryption; IaaSAntimalware|
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
--- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/cis-azure-1-3-0.md
- Title: CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample
-description: Overview of the CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/08/2021--
-# CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample
-
-The CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample provides governance guardrails
-using [Azure Policy](../../policy/overview.md) that help you assess specific CIS Microsoft Azure
-Foundations Benchmark v1.3.0 recommendations. This blueprint helps customers deploy a core set of
-policies for any Azure-deployed architecture that must implement CIS Microsoft Azure Foundations
-Benchmark v1.3.0 recommendations.
-
-## Recommendation mapping
-
-The [Azure Policy recommendation mapping](../../policy/samples/cis-azure-1-3-0.md) provides details
-on policy definitions included within this blueprint and how these policy definitions map to the
-**recommendations** in CIS Microsoft Azure Foundations Benchmark v1.3.0. When assigned to an
-architecture, resources are evaluated by Azure Policy for non-compliance with assigned policy
-definitions. For more information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample,
-the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **CIS Microsoft Azure Foundations Benchmark v1.3.0** blueprint sample under _Other
- Samples_ and select **Use this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the CIS Microsoft Azure Foundations
- Benchmark blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with CIS Microsoft Azure Foundations Benchmark v1.3.0 recommendations.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful for if you make a
- modification later. Provide **Change notes** such as "First version published from the CIS
- Microsoft Azure Foundations Benchmark blueprint sample." Then select **Publish** at the bottom of
- the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since
- they're defined during the assignment of the blueprint. For a full list or artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|List of virtual machine extensions that are approved for use|A semicolon-separated list of virtual machine extensions; to see a complete list of extensions, use the Azure PowerShell command Get-AzVMExtensionImage|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SQL managed instances should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Azure Data Lake Store should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Disk encryption should be applied on virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Key vault should have purge protection enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SQL servers should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Managed identity should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Key Vault should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Custom subscription owner roles should not exist|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Keys should have expiration dates set|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Transparent Data Encryption on SQL databases should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An Azure Active Directory administrator should be provisioned for SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for App Service should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage accounts should restrict network access using virtual network rules|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Managed identity should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SSH access from the Internet should be blocked|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Unattached disks should be encrypted|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Storage should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage accounts should restrict network access|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Logic Apps should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in IoT Hub should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: FTPS only should be required in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Security operations (Microsoft.Security/securitySolutions/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Security operations (Microsoft.Security/securitySolutions/write)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Secure transfer to storage accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Batch accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Auto provisioning of the Log Analytics agent should be enabled on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: FTPS should be required in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for servers should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Subscriptions should have a contact email address for security issues|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage account public access should be disallowed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Kubernetes should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Connection throttling should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure WEB app has 'Client Certificates (Incoming client certificates)' set to 'On'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: External accounts with write permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: External accounts with read permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for SQL servers on machines should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Email notification for high severity alerts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage account should use customer-managed key for encryption|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the WEB app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Virtual Machine Scale Sets should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Azure SQL Database servers should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Event Hub should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: System updates should be installed on your machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SQL servers should be configured with 90 days auditing retention or higher.|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Latest TLS version should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: MFA should be enabled accounts with write permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Secrets should have expiration dates set|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: FTPS only should be required in your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Web Application should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Auditing on SQL server should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: MFA should be enabled on accounts with owner permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Advanced data security should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Advanced data security should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Role-Based Access Control (RBAC) should be used on Kubernetes Services|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Monitor missing Endpoint Protection in Azure Security Center|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Search services should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in App Services should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Only approved VM extensions should be installed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for container registries should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Managed identity should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Data Lake Analytics should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage accounts should allow access from trusted Microsoft services|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Key Vault should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Enforce SSL connection should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: MFA should be enabled on accounts with read permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: RDP access from the Internet should be blocked|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Enforce SSL connection should be enabled for MySQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure Function app has 'Client Certificates (Incoming client certificates)' set to 'On'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Log checkpoints should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Log connections should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Disconnections should be logged for PostgreSQL database servers.|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Latest TLS version should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: External accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Service Bus should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Azure Stream Analytics should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Latest TLS version should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage account containing the container with activity logs must be encrypted with BYOK|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Include AKS clusters when auditing if virtual machine scale set diagnostic logs are enabled||
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Latest Java version for App Services|Latest supported Java version for App Services|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Latest Python version for Linux for App Services|Latest supported Python version for App Services|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|List of regions where Network Watcher should be enabled|To see a complete list of regions, run the PowerShell command Get-AzLocation (see the example after this table)|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Latest PHP version for App Services|Latest supported PHP version for App Services|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Required retention period (days) for resource logs|For more information about resource logs, visit [https://aka.ms/resourcelogs](../../../azure-monitor/essentials/resource-logs.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Name of the resource group for Network Watcher|Name of the resource group where Network Watchers are located|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Required auditing setting for SQL servers||
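The Network Watcher regions parameter above expects Azure region names. As a quick, non-authoritative illustration, the following Azure PowerShell sketch lists the region names you could supply for that parameter; it assumes the Az.Resources module is installed and that you've already signed in with Connect-AzAccount.

```powershell
# Sketch: list Azure region names for the "List of regions where Network
# Watcher should be enabled" parameter. Assumes the Az.Resources module is
# installed and Connect-AzAccount has already been run.
$regions = Get-AzLocation | Select-Object -ExpandProperty Location

# Print the names one per line so they can be copied into the assignment.
$regions | Sort-Object | ForEach-Object { Write-Output $_ }
```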
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/cmmc-l3.md
- Title: CMMC Level 3 blueprint sample
-description: Overview of the CMMC Level 3 blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/08/2021--
-# CMMC Level 3 blueprint sample
-
-The CMMC Level 3 blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific
-[Cybersecurity Maturity Model Certification (CMMC) framework](https://www.acq.osd.mil/cmmc/index.html)
-controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed
-architecture that must implement controls for CMMC Level 3.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/cmmc-l3.md) provides details on policy
-definitions included within this blueprint and how these policy definitions map to the **controls**
-in the CMMC framework. When assigned to an architecture, resources are evaluated by Azure Policy for
-non-compliance with assigned policy definitions. For more information, see
-[Azure Policy](../../policy/overview.md).
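As a rough illustration of that evaluation, the sketch below queries non-compliant policy states for a subscription with Azure PowerShell. It's only an example: it assumes the Az.PolicyInsights module is installed, you're signed in, and the subscription ID shown is a placeholder.

```powershell
# Sketch: list resources that Azure Policy reports as non-compliant in a
# subscription. Requires the Az.PolicyInsights module and an existing sign-in.
# The subscription ID below is a placeholder.
$subscriptionId = '00000000-0000-0000-0000-000000000000'

Get-AzPolicyState -SubscriptionId $subscriptionId `
    -Filter "ComplianceState eq 'NonCompliant'" |
    Select-Object ResourceId, PolicyDefinitionName, PolicyAssignmentName
```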
-
-## Deploy
-
-To deploy the Azure Blueprints CMMC Level 3 blueprint sample,
-the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **CMMC Level 3** blueprint sample under _Other
- Samples_ and select **Use this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the CMMC Level 3 blueprint
- sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
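If you prefer to confirm the result from the command line, a minimal sketch with the Az.Blueprint module is shown below. The management group ID and blueprint name are placeholders for the values you chose in the steps above.

```powershell
# Sketch: confirm that the draft copy of the blueprint definition exists.
# Requires the Az.Blueprint module; 'myManagementGroup' and 'CMMC-L3-copy'
# are placeholders for your own definition location and blueprint name.
Get-AzBlueprint -ManagementGroupId 'myManagementGroup' -Name 'CMMC-L3-copy'
```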
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with CMMC Level 3 controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
-   **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the CMMC Level
- 3 blueprint sample." Then select **Publish** at the bottom of the page.
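The publish step can also be approximated from PowerShell. This is only a sketch, assuming the Az.Blueprint module and placeholder names for the management group, the blueprint copy, and the version string.

```powershell
# Sketch: publish the draft copy so it can be assigned. The management group
# ID, blueprint name, and version string are placeholders; change notes can
# be recorded in the portal as described above.
$bp = Get-AzBlueprint -ManagementGroupId 'myManagementGroup' -Name 'CMMC-L3-copy'
Publish-AzBlueprint -Blueprint $bp -Version '1.0'
```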
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since
-    they're defined during the assignment of the blueprint. For a full list of artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
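For reference, assigning a published copy can also be sketched with the Az.Blueprint module. The subscription ID, location, names, and the single artifact parameter key shown are hypothetical placeholders; a real assignment still needs values for the parameters listed in the artifact parameters table later in this article.

```powershell
# Sketch: assign the published blueprint copy to a subscription. All IDs and
# names are placeholders, and the parameter key below is hypothetical; supply
# the artifact parameters from the table later in this article.
$bp = Get-AzBlueprint -ManagementGroupId 'myManagementGroup' -Name 'CMMC-L3-copy' -LatestPublished

New-AzBlueprintAssignment -Blueprint $bp `
    -Name 'assignment-cmmc-l3' `
    -SubscriptionId '00000000-0000-0000-0000-000000000000' `
    -Location 'eastus' `
    -Parameter @{ 'exampleArtifactParameter' = '<value>' }

# Check on the deployment afterward; the assignment reports its provisioning state.
Get-AzBlueprintAssignment -Name 'assignment-cmmc-l3' `
    -SubscriptionId '00000000-0000-0000-0000-000000000000'
```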
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|CMMC Level 3|Policy Assignment|Include Arc-connected servers when evaluating guest configuration policies|By selecting 'true', you agree to be charged monthly per Arc connected machine; for more information, visit https://aka.ms/policy-pricing|
-|CMMC Level 3|Policy Assignment|List of users that must be excluded from Windows VM Administrators group|A semicolon-separated list of users that should be excluded in the Administrators local group; Ex: Administrator; myUser1; myUser2|
-|CMMC Level 3|Policy Assignment|List of users that must be included in Windows VM Administrators group|A semicolon-separated list of users that should be included in the Administrators local group; Ex: Administrator; myUser1; myUser2|
-|CMMC Level 3|Policy Assignment|Log Analytics workspace ID for VM agent reporting|ID (GUID) of the Log Analytics workspace where VMs agents should report|
-|CMMC Level 3|Policy Assignment|Allowed elliptic curve names|The list of allowed curve names for elliptic curve cryptography certificates.|
-|CMMC Level 3|Policy Assignment|Allowed key types|The list of allowed key types|
-|CMMC Level 3|Policy Assignment|Allow host network usage for Kubernetes cluster pods|Set this value to true if pod is allowed to use host network otherwise false.|
-|CMMC Level 3|Policy Assignment|Audit Authentication Policy Change|Specifies whether audit events are generated when changes are made to authentication policy. This setting is useful for tracking changes in domain-level and forest-level trust and privileges that are granted to user accounts or groups.|
-|CMMC Level 3|Policy Assignment|Audit Authorization Policy Change|Specifies whether audit events are generated for assignment and removal of user rights in user right policies, changes in security token object permission, resource attributes changes and Central Access Policy changes for file system objects.|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Backup should be enabled for Virtual Machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Cognitive Services accounts should restrict network access|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: SQL managed instances should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure API for FHIR should use a customer-managed key (CMK) to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should be enabled for Azure Front Door Service|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for Cognitive Services accounts|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: CORS should not allow every resource to access your Function Apps|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Adaptive network hardening recommendations should be applied on internet facing virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: There should be more than one owner assigned to your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Disk encryption should be applied on virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Email notification to subscription owner for high severity alerts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Key vault should have purge protection enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: SQL servers should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Remote debugging should be turned off for Function Apps|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for Key Vault should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Geo-redundant backup should be enabled for Azure Database for MariaDB|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: CORS should not allow every domain to access your API for FHIR|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'Security Options - Network Security'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Allowlist rules in your adaptive application control policy should be updated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should use the specified mode for Application Gateway|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Keys should have expiration dates set|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Transparent Data Encryption on SQL databases should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Key vault should have soft delete enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An Azure Active Directory administrator should be provisioned for SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Only secure connections to your Azure Cache for Redis should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Infrastructure encryption should be enabled for Azure Database for PostgreSQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Endpoint protection solution should be installed on virtual machine scale sets|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for App Service should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'System Audit Policies - Policy Change'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Cognitive Services accounts should enable data encryption|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: SSH access from the Internet should be blocked|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Unattached disks should be encrypted|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for Storage should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Storage accounts should restrict network access|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: CORS should not allow every resource to access your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Deploy Advanced Threat Protection on Storage Accounts|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Automation account variables should be encrypted|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Diagnostic logs in IoT Hub should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Infrastructure encryption should be enabled for Azure Database for MySQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Security operations (Microsoft.Security/securitySolutions/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your virtual machine scale sets should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'Security Options - Network Access'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Secure transfer to storage accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Monitor should collect activity logs from all regions|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should use the specified mode for Azure Front Door Service|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Storage accounts should have infrastructure encryption|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Adaptive application controls for defining safe applications should be enabled on your machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Geo-redundant backup should be enabled for Azure Database for PostgreSQL|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'Security Options - User Account Control'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for servers should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: A maximum of 3 owners should be designated for your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Subscriptions should have a contact email address for security issues|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Storage account public access should be disallowed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: A vulnerability assessment solution should be enabled on your virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for Kubernetes should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Firewall should be enabled on Key Vault|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should be enabled for Application Gateway|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: CORS should not allow every resource to access your Web Applications|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Windows machines that allow re-use of the previous 24 passwords|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Container registries should be encrypted with a customer-managed key (CMK)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: External accounts with write permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for PostgreSQL flexible servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerabilities in Azure Container Registry images should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: External accounts with read permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for SQL servers on machines should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Cognitive Services accounts should enable data encryption with customer-managed key|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Deprecated accounts should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Function App should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Email notification for high severity alerts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Storage account should use customer-managed key for encryption|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the WEB app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Keys should be the specified cryptographic type RSA or EC|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure subscriptions should have a log profile for Activity Log|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for Azure SQL Database servers should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Data Explorer encryption at rest should use a customer-managed key|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Keys using RSA cryptography should have a specified minimum key size|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Geo-redundant backup should be enabled for Azure Database for MySQL|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Kubernetes cluster pods should only use approved host network and port range|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: System updates should be installed on your machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'System Audit Policies - Privilege Use'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Stream Analytics jobs should use customer-managed keys to encrypt data|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Latest TLS version should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: MFA should be enabled accounts with write permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Microsoft IaaSAntimalware extension should be deployed on Windows servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: All network ports should be restricted on network security groups associated to your virtual machine|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Security Center standard pricing tier should be selected|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Windows machines that do not restrict the minimum password length to 14 characters|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit usage of custom RBAC rules|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Web Application should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Auditing on SQL server should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: The Log Analytics agent should be installed on virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: MFA should be enabled on accounts with owner permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Advanced data security should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Advanced data security should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Role-Based Access Control (RBAC) should be used on Kubernetes Services|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Virtual machines should have the Guest Configuration extension|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Monitor missing Endpoint Protection in Azure Security Center|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Activity log should be retained for at least one year|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Management ports of virtual machines should be protected with just-in-time network access control|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for PostgreSQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Deploy Advanced Threat Protection for Cosmos DB Accounts|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Diagnostic logs in App Services should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: API App should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.ClassicNetwork/networkSecurityGroups/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.ClassicNetwork/networkSecurityGroups/securityRules/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Non-internet-facing virtual machines should be protected with network security groups|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Windows machines that do not have the password complexity setting enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for container registries should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Data Box jobs should enable double encryption for data at rest on the device|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: System updates on virtual machine scale sets should be installed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Microsoft Antimalware for Azure should be configured to automatically update protection signatures|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for MySQL flexible servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Storage accounts should allow access from trusted Microsoft services|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Remote debugging should be turned off for Web Applications|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Certificates using RSA cryptography should have the specified minimum key size|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Container registries should not allow unrestricted network access|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Enforce SSL connection should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Guest Configuration extension should be deployed to Azure virtual machines with system assigned managed identity|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Long-term geo-redundant backup should be enabled for Azure SQL Databases|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for MySQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Windows machines that do not store passwords using reversible encryption|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'User Rights Assignment'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your machines should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: MFA should be enabled on accounts with read permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: RDP access from the Internet should be blocked|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Linux machines that do not have the passwd file permissions set to 0644|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Subnets should be associated with a Network Security Group|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Enforce SSL connection should be enabled for MySQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerabilities in container security configurations should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Remote debugging should be turned off for API Apps|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Linux machines that allow remote connections from accounts without passwords|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Deprecated accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Double encryption should be enabled on Azure Data Explorer|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: The Log Analytics agent should be installed on Virtual Machine Scale Sets|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Latest TLS version should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Disk encryption should be enabled on Azure Data Explorer|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Internet-facing virtual machines should be protected with network security groups|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Linux machines that have accounts without passwords|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Synapse workspaces should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: External accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Latest TLS version should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: All Internet traffic should be routed via your deployed Azure Firewall|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Linux machines should meet requirements for the Azure security baseline|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for MariaDB servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerabilities on your SQL databases should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Keys using elliptic curve cryptography should have the specified curve names|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Namespaces excluded from evaluation of policy: Kubernetes cluster pods should only use approved host network and port range|List of Kubernetes namespaces to exclude from policy evaluation.|
-|CMMC Level 3|Policy Assignment|Latest Java version for App Services|Latest supported Java version for App Services|
-|CMMC Level 3|Policy Assignment|Latest Python version for Linux for App Services|Latest supported Python version for App Services|
-|CMMC Level 3|Policy Assignment|Optional: List of VM images that have supported Linux OS to add to scope when auditing Log Analytics agent deployment|Example value: `/subscriptions/<subscriptionId>/resourceGroups/YourResourceGroup/providers/Microsoft.Compute/images/ContosoStdImage`|
-|CMMC Level 3|Policy Assignment|Optional: List of VM images that have supported Windows OS to add to scope when auditing Log Analytics agent deployment|Example value: `/subscriptions/<subscriptionId>/resourceGroups/YourResourceGroup/providers/Microsoft.Compute/images/ContosoStdImage`|
-|CMMC Level 3|Policy Assignment|List of regions where Network Watcher should be enabled|Audit if Network Watcher is not enabled for region(s).|
-|CMMC Level 3|Policy Assignment|List of resource types that should have diagnostic logs enabled||
-|CMMC Level 3|Policy Assignment|Maximum value in the allowable host port range that pods can use in the host network namespace|The maximum value in the allowable host port range that pods can use in the host network namespace.|
-|CMMC Level 3|Policy Assignment|Minimum RSA key size for keys|The minimum key size for RSA keys.|
-|CMMC Level 3|Policy Assignment|Minimum RSA key size certificates|The minimum key size for RSA certificates.|
-|CMMC Level 3|Policy Assignment|Minimum TLS version for Windows web servers|Windows web servers with lower TLS versions will be assessed as non-compliant|
-|CMMC Level 3|Policy Assignment|Minimum value in the allowable host port range that pods can use in the host network namespace|The minimum value in the allowable host port range that pods can use in the host network namespace.|
-|CMMC Level 3|Policy Assignment|Mode Requirement|Mode required for all WAF policies|
-|CMMC Level 3|Policy Assignment|Mode Requirement|Mode required for all WAF policies|
-|CMMC Level 3|Policy Assignment|Allowed host paths for pod hostPath volumes to use|The host paths allowed for pod hostPath volumes to use. Provide an empty paths list to block all host paths.|
-|CMMC Level 3|Policy Assignment|Network access: Remotely accessible registry paths|Specifies which registry paths will be accessible over the network, regardless of the users or groups listed in the access control list (ACL) of the `winreg` registry key.|
-|CMMC Level 3|Policy Assignment|Network access: Remotely accessible registry paths and sub-paths|Specifies which registry paths and sub-paths will be accessible over the network, regardless of the users or groups listed in the access control list (ACL) of the `winreg` registry key.|
-|CMMC Level 3|Policy Assignment|Network access: Shares that can be accessed anonymously|Specifies which network shares can be accessed by anonymous users. The default configuration for this policy setting has little effect because all users have to be authenticated before they can access shared resources on the server.|
-|CMMC Level 3|Policy Assignment|Network Security: Configure encryption types allowed for Kerberos|Specifies the encryption types that Kerberos is allowed to use.|
-|CMMC Level 3|Policy Assignment|Network security: LAN Manager authentication level|Specify which challenge-response authentication protocol is used for network logons. This choice affects the level of authentication protocol used by clients, the level of session security negotiated, and the level of authentication accepted by servers.|
-|CMMC Level 3|Policy Assignment|Network security: LDAP client signing requirements|Specify the level of data signing that is requested on behalf of clients that issue LDAP BIND requests.|
-|CMMC Level 3|Policy Assignment|Network security: Minimum session security for NTLM SSP based (including secure RPC) clients|Specifies which behaviors are allowed by clients for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. See [https://docs.microsoft.com/windows/security/threat-protection/security-policy-settings/network-security-minimum-session-security-for-ntlm-ssp-based-including-secure-rpc-servers](/windows/security/threat-protection/security-policy-settings/network-security-minimum-session-security-for-ntlm-ssp-based-including-secure-rpc-servers) for more information.|
-|CMMC Level 3|Policy Assignment|Network security: Minimum session security for NTLM SSP based (including secure RPC) servers|Specifies which behaviors are allowed by servers for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services.|
-|CMMC Level 3|Policy Assignment|Latest PHP version for App Services|Latest supported PHP version for App Services|
-|CMMC Level 3|Policy Assignment|Required retention period (days) for IoT Hub diagnostic logs||
-|CMMC Level 3|Policy Assignment|Name of the resource group for Network Watcher|Name of the resource group of NetworkWatcher, such as NetworkWatcherRG. This is the resource group where the Network Watchers are located.|
-|CMMC Level 3|Policy Assignment|Required auditing setting for SQL servers||
-|CMMC Level 3|Policy Assignment|Azure Data Box SKUs that support software-based double encryption|The list of Azure Data Box SKUs that support software-based double encryption|
-|CMMC Level 3|Policy Assignment|UAC: Admin Approval Mode for the Built-in Administrator account|Specifies the behavior of Admin Approval Mode for the built-in Administrator account.|
-|CMMC Level 3|Policy Assignment|UAC: Behavior of the elevation prompt for administrators in Admin Approval Mode|Specifies the behavior of the elevation prompt for administrators.|
-|CMMC Level 3|Policy Assignment|UAC: Detect application installations and prompt for elevation|Specifies the behavior of application installation detection for the computer.|
-|CMMC Level 3|Policy Assignment|UAC: Run all administrators in Admin Approval Mode|Specifies the behavior of all User Account Control (UAC) policy settings for the computer.|
-|CMMC Level 3|Policy Assignment|User and groups that may force shutdown from a remote system|Specifies which users and groups are permitted to shut down the computer from a remote location on the network.|
-|CMMC Level 3|Policy Assignment|Users and groups that are denied access to this computer from the network|Specifies which users or groups are explicitly prohibited from connecting to the computer across the network.|
-|CMMC Level 3|Policy Assignment|Users and groups that are denied local logon|Specifies which users and groups are explicitly not permitted to log on to the computer.|
-|CMMC Level 3|Policy Assignment|Users and groups that are denied logging on as a batch job|Specifies which users and groups are explicitly not permitted to log on to the computer as a batch job (i.e. scheduled task).|
-|CMMC Level 3|Policy Assignment|Users and groups that are denied logging on as a service|Specifies which service accounts are explicitly not permitted to register a process as a service.|
-|CMMC Level 3|Policy Assignment|Users and groups that are denied log on through Remote Desktop Services|Specifies which users and groups are explicitly not permitted to log on to the computer via Terminal Services/Remote Desktop Client.|
-|CMMC Level 3|Policy Assignment|Users and groups that may restore files and directories|Specifies which users and groups are permitted to bypass file, directory, registry, and other persistent object permissions when restoring backed up files and directories.|
-|CMMC Level 3|Policy Assignment|Users and groups that may shut down the system|Specifies which users and groups who are logged on locally to the computers in your environment are permitted to shut down the operating system with the Shut Down command.|
-|CMMC Level 3|Policy Assignment|Users or groups that may access this computer from the network|Specifies which remote users on the network are permitted to connect to the computer. This does not include Remote Desktop Connection.|
-|CMMC Level 3|Policy Assignment|Users or groups that may back up files and directories|Specifies users and groups allowed to circumvent file and directory permissions to back up the system.|
-|CMMC Level 3|Policy Assignment|Users or groups that may change the system time|Specifies which users and groups are permitted to change the time and date on the internal clock of the computer.|
-|CMMC Level 3|Policy Assignment|Users or groups that may change the time zone|Specifies which users and groups are permitted to change the time zone of the computer.|
-|CMMC Level 3|Policy Assignment|Users or groups that may create a token object|Specifies which users and groups are permitted to create an access token, which may provide elevated rights to access sensitive data.|
-|CMMC Level 3|Policy Assignment|Users or groups that may log on locally|Specifies which users or groups can interactively log on to the computer. Users who attempt to log on via Remote Desktop Connection or IIS also require this user right.|
-|CMMC Level 3|Policy Assignment|Remote Desktop Users|Users or groups that may log on through Remote Desktop Services|
-|CMMC Level 3|Policy Assignment|Users or groups that may manage auditing and security log|Specifies users and groups permitted to change the auditing options for files and directories and clear the Security log.|
-|CMMC Level 3|Policy Assignment|Users or groups that may take ownership of files or other objects|Specifies which users and groups are permitted to take ownership of files, folders, registry keys, processes, or threads. This user right bypasses any permissions that are in place to protect objects to give ownership to the specified user.|
-
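The parameters in the preceding table are supplied as artifact parameter values when your copy of the CMMC Level 3 blueprint is assigned. As a hypothetical sketch only (the real internal parameter names are shown on the **Assign blueprint** page, not in this table), each *Effect for policy* parameter takes a standard Azure Policy effect supported by that policy definition:

```python
# Hypothetical parameter names; the real internal names are shown on the
# Assign blueprint page. Each "Effect for policy" parameter takes a standard
# Azure Policy effect supported by that policy definition.
effect_parameter_values = {
    "effect-advancedDataSecurityOnSqlManagedInstance": {"value": "AuditIfNotExists"},
    "effect-rbacShouldBeUsedOnKubernetesServices": {"value": "Audit"},
}
```
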
-## Next steps
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/hipaa-hitrust-9-2.md
- Title: HIPAA HITRUST 9.2 blueprint sample overview
-description: Overview of the HIPAA HITRUST 9.2 blueprint sample. This blueprint sample helps customers assess specific HIPAA HITRUST 9.2 controls.
Previously updated : 09/08/2021--
-# HIPAA HITRUST 9.2 blueprint sample
-
-The HIPAA HITRUST 9.2 blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific HIPAA HITRUST 9.2 controls.
-This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture
-that must implement HIPAA HITRUST 9.2 controls.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/hipaa-hitrust-9-2.md) provides details on
-policy definitions included within this blueprint and how these policy definitions map to the
-**compliance domains** and **controls** in HIPAA HITRUST 9.2. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policy definitions. For
-more information, see [Azure Policy](../../policy/overview.md).
-
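Because the blueprint's policy assignments are evaluated by Azure Policy, compliance results can also be summarized programmatically. The following is a hedged sketch (it isn't part of the blueprint sample) that calls the Azure Policy Insights `summarize` REST endpoint using the `azure-identity` and `requests` Python packages; the subscription ID is a placeholder.

```python
# A hedged sketch (not part of the blueprint sample): summarize policy compliance
# for the subscription after the blueprint's policy assignments are in place.
# Requires the azure-identity and requests packages; the subscription ID is a placeholder.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.PolicyInsights/policyStates/latest/summarize"
    "?api-version=2019-10-01"
)
response = requests.post(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()

results = response.json()["value"][0]["results"]
print("Non-compliant resources:", results["nonCompliantResources"])
print("Non-compliant policies:", results["nonCompliantPolicies"])
```
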
-## Deploy
-
-To deploy the Azure Blueprints HIPAA HITRUST 9.2 blueprint sample, the following steps must
-be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **HIPAA HITRUST** blueprint sample under _Other Samples_ and select **Use
- this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the HIPAA HITRUST 9.2 blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with HIPAA HITRUST 9.2 controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the HIPAA
- HITRUST 9.2 blueprint sample." Then select **Publish** at the bottom of the page.
-
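The portal flow above is the documented path. As a rough sketch only, the same publish operation can be issued against the Azure Blueprints REST API (api-version `2018-11-01-preview`); the management group ID, blueprint name, and version string below are placeholders.

```python
# A rough sketch, assuming the Azure Blueprints REST API (api-version 2018-11-01-preview).
# The management group ID, blueprint name, and version string are placeholders.
import requests
from azure.identity import DefaultAzureCredential

management_group = "<management-group-id>"      # where the copy was saved
blueprint_name = "<your-blueprint-copy-name>"   # name given to your copy
version = "1.0"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
scope = f"providers/Microsoft.Management/managementGroups/{management_group}"
url = (
    f"https://management.azure.com/{scope}/providers/Microsoft.Blueprint"
    f"/blueprints/{blueprint_name}/versions/{version}?api-version=2018-11-01-preview"
)
body = {
    "properties": {
        "changeNotes": "First version published from the HIPAA HITRUST 9.2 blueprint sample."
    }
}
response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print("Published version:", response.json().get("name"))
```
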
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since they're
- defined during the assignment of the blueprint. For a full list of artifact parameters and
- their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
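For reference, the following is a hedged sketch of the same assignment created through the Azure Blueprints REST API (api-version `2018-11-01-preview`) rather than the portal. Every identifier below is a placeholder, and the artifact parameter and resource group values this sample requires (see the table later in this article) are left empty for brevity.

```python
# A hedged sketch of the assignment call, assuming the Azure Blueprints REST API
# (api-version 2018-11-01-preview). All identifiers are placeholders, and the
# artifact parameters and resource group placeholders required by this sample
# are omitted ("parameters" / "resourceGroups") for brevity.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
management_group = "<management-group-id>"
blueprint_name = "<your-blueprint-copy-name>"
version = "1.0"
assignment_name = "<your-assignment-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
blueprint_id = (
    f"/providers/Microsoft.Management/managementGroups/{management_group}"
    f"/providers/Microsoft.Blueprint/blueprints/{blueprint_name}/versions/{version}"
)
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/Microsoft.Blueprint/blueprintAssignments/{assignment_name}"
    "?api-version=2018-11-01-preview"
)
body = {
    "identity": {"type": "SystemAssigned"},  # default system-assigned managed identity
    "location": "eastus",                    # region where the managed identity is created
    "properties": {
        "blueprintId": blueprint_id,         # published version of your copy
        "locks": {"mode": "None"},           # lock assignment setting
        "parameters": {},                    # dynamic artifact parameters go here
        "resourceGroups": {},                # resource group placeholders go here
    },
}
response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
```
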
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name |Parameter name |Description |
-||||
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Access through Internet facing endpoint should be restricted |Enable or disable overly permissive inbound NSG rules monitoring |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Accounts: Guest account status |Specifies whether the local Guest account is disabled. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Adaptive Application Controls should be enabled on virtual machines |Enable or disable the monitoring of application whitelisting in Azure Security Center |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Allow simultaneous connections to the Internet or a Windows Domain |Specify whether to prevent computers from connecting to both a domain based network and a non-domain based network at the same time. A value of 0 allows simultaneous connections, and a value of 1 blocks them. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |API App should only be accessible over HTTPS V2 |Enable or disable the monitoring of the use of HTTPS in API App V2 |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Application names (supports wildcards) |A semicolon-separated list of the names of the applications that should be installed. e.g. 'Microsoft SQL Server 2014 (64-bit); Microsoft Visual Studio Code' or 'Microsoft SQL Server 2014*' (to match any application starting with 'Microsoft SQL Server 2014') |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Audit Process Termination |Specifies whether audit events are generated when a process has exited. Recommended for monitoring termination of critical processes. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Audit unrestricted network access to storage accounts |Enable or disable the monitoring of network access to storage account |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Audit: Shut down system immediately if unable to log security audits |Audits if the system will shut down when unable to log Security events. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Certificate thumbprints |A semicolon-separated list of certificate thumbprints that should exist under the Trusted Root certificate store (Cert:\LocalMachine\Root). e.g. THUMBPRINT1;THUMBPRINT2;THUMBPRINT3 |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Diagnostic logs in Batch accounts should be enabled |Enable or disable the monitoring of diagnostic logs in Batch accounts |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Diagnostic logs in Event Hub should be enabled |Enable or disable the monitoring of diagnostic logs in Event Hub accounts |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Diagnostic logs in Search services should be enabled |Enable or disable the monitoring of diagnostic logs in Azure Search service |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Diagnostic logs in Virtual Machine Scale Sets should be enabled |Enable or disable the monitoring of diagnostic logs in Service Fabric |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Disk encryption should be applied on virtual machines |Enable or disable the monitoring for VM disk encryption |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Enable insecure guest logons |Specifies whether the SMB client will allow insecure guest logons to an SMB server. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Just-In-Time network access control should be applied on virtual machines |Enable or disable the monitoring of network just In time access |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Management ports should be closed on your virtual machines |Enable or disable the monitoring of open management ports on Virtual Machines |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |MFA should be enabled accounts with write permissions on your subscription |Enable or disable the monitoring of MFA for accounts with write permissions in subscription |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |MFA should be enabled on accounts with owner permissions on your subscription |Enable or disable the monitoring of MFA for accounts with owner permissions in subscription |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Network access: Remotely accessible registry paths |Specifies which registry paths will be accessible over the network, regardless of the users or groups listed in the access control list (ACL) of the `winreg` registry key. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Network access: Remotely accessible registry paths and sub-paths |Specifies which registry paths and sub-paths will be accessible over the network, regardless of the users or groups listed in the access control list (ACL) of the `winreg` registry key. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Network access: Shares that can be accessed anonymously |Specifies which network shares can be accessed by anonymous users. The default configuration for this policy setting has little effect because all users have to be authenticated before they can access shared resources on the server. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Recovery console: Allow floppy copy and access to all drives and all folders |Specifies whether to make the Recovery Console SET command available, which allows setting of recovery console environment variables. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Remote debugging should be turned off for API App |Enable or disable the monitoring of remote debugging for API App |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Remote debugging should be turned off for Web Application |Enable or disable the monitoring of remote debugging for Web App |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Required retention (in days) for logs in Batch accounts |The required diagnostic logs retention period in days |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Required retention (in days) of logs in Azure Search service |The required diagnostic logs retention period in days |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Required retention (in days) of logs in Event Hub accounts |The required diagnostic logs retention period in days |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Resource Group Name for Storage Account (must exist) to deploy diagnostic settings for Network Security Groups |The resource group that the storage account will be created in. This resource group must already exist. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Role-Based Access Control (RBAC) should be used on Kubernetes Services |Enable or disable the monitoring of Kubernetes Services without RBAC enabled |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |SQL managed instance TDE protector should be encrypted with your own key |Enable or disable the monitoring of Transparent Data Encryption (TDE) with your own key support. TDE with your own key support provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |SQL server TDE protector should be encrypted with your own key |Enable or disable the monitoring of Transparent Data Encryption (TDE) with your own key support. TDE with your own key support provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Storage Account Prefix for Regional Storage Account to deploy diagnostic settings for Network Security Groups |This prefix will be combined with the network security group location to form the created storage account name. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |System updates on virtual machine scale sets should be installed |Enable or disable virtual machine scale sets reporting of system updates |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |System updates on virtual machine scale sets should be installed |Enable or disable virtual machine scale sets reporting of system updates |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Turn off multicast name resolution |Specifies whether LLMNR, a secondary name resolution protocol that transmits using multicast over a local subnet link on a single subnet, is enabled. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Virtual machines should be migrated to new Azure Resource Manager resources |Enable or disable the monitoring of classic compute VMs |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Vulnerabilities in security configuration on your virtual machine scale sets should be remediated |Enable or disable virtual machine scale sets OS vulnerabilities monitoring |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Vulnerabilities should be remediated by a Vulnerability Assessment solution |Enable or disable the detection of VM vulnerabilities by a vulnerability assessment solution |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Vulnerability assessment should be enabled on your SQL managed instances |Audit SQL managed instances which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Domain): Apply local firewall rules |Specifies whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy for the Domain profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Domain): Behavior for outbound connections |Specifies the behavior for outbound connections for the Domain profile that do not match an outbound firewall rule. The default value of 0 means to allow connections, and a value of 1 means to block connections. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Domain): Behavior for outbound connections |Specifies the behavior for outbound connections for the Domain profile that do not match an outbound firewall rule. The default value of 0 means to allow connections, and a value of 1 means to block connections. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Domain): Display notifications |Specifies whether Windows Firewall with Advanced Security displays notifications to the user when a program is blocked from receiving inbound connections, for the Domain profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Domain): Use profile settings |Specifies whether Windows Firewall with Advanced Security uses the settings for the Domain profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Private): Apply local connection security rules |Specifies whether local administrators are allowed to create connection security rules that apply together with connection security rules configured by Group Policy for the Private profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Private): Apply local firewall rules |Specifies whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy for the Private profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Private): Behavior for outbound connections |Specifies the behavior for outbound connections for the Private profile that do not match an outbound firewall rule. The default value of 0 means to allow connections, and a value of 1 means to block connections. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Private): Display notifications |Specifies whether Windows Firewall with Advanced Security displays notifications to the user when a program is blocked from receiving inbound connections, for the Private profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Private): Use profile settings |Specifies whether Windows Firewall with Advanced Security uses the settings for the Private profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Public): Apply local connection security rules |Specifies whether local administrators are allowed to create connection security rules that apply together with connection security rules configured by Group Policy for the Public profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Public): Apply local firewall rules |Specifies whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy for the Public profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Public): Behavior for outbound connections |Specifies the behavior for outbound connections for the Public profile that do not match an outbound firewall rule. The default value of 0 means to allow connections, and a value of 1 means to block connections. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Public): Display notifications |Specifies whether Windows Firewall with Advanced Security displays notifications to the user when a program is blocked from receiving inbound connections, for the Public profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Public): Use profile settings |Specifies whether Windows Firewall with Advanced Security uses the settings for the Public profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall: Domain: Allow unicast response |Specifies whether Windows Firewall with Advanced Security permits the local computer to receive unicast responses to its outgoing multicast or broadcast messages; for the Domain profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall: Private: Allow unicast response |Specifies whether Windows Firewall with Advanced Security permits the local computer to receive unicast responses to its outgoing multicast or broadcast messages; for the Private profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall: Public: Allow unicast response |Specifies whether Windows Firewall with Advanced Security permits the local computer to receive unicast responses to its outgoing multicast or broadcast messages; for the Public profile. |
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/index.md
Title: Index of blueprint samples description: Index of compliance and standard samples for deploying environments, policies, and Cloud Adoptions Framework foundations with Azure Blueprints. Previously updated : 08/17/2021 Last updated : 03/11/2022 # Azure Blueprints samples
quality and ready to deploy today to assist you in meeting your various complian
| [Australian Government ISM PROTECTED](./ism-protected/index.md) | Provides guardrails for compliance to Australian Government ISM PROTECTED. |
| [Azure Security Benchmark Foundation](./azure-security-benchmark-foundation/index.md) | Deploys and configures Azure Security Benchmark Foundation. |
| [Canada Federal PBMM](./canada-federal-pbmm.md) | Provides guardrails for compliance to Canada Federal Protected B, Medium Integrity, Medium Availability (PBMM). |
-| [CIS Microsoft Azure Foundations Benchmark v1.3.0](./cis-azure-1-3-0.md) | Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.3.0 recommendations. |
-| [CIS Microsoft Azure Foundations Benchmark v1.1.0](./cis-azure-1-1-0.md) | Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations. |
-| [CMMC Level 3](./cmmc-l3.md) | Provides guardrails for compliance with CMMC Level 3. |
-| [HIPAA HITRUST 9.2](./hipaa-hitrust-9-2.md) | Provides a set of policies to help comply with HIPAA HITRUST. |
-| [IRS 1075 September 2016](./irs-1075-sept2016.md) | Provides guardrails for compliance with IRS 1075.|
| [ISO 27001](./iso-27001-2013.md) | Provides guardrails for compliance with ISO 27001. |
| [ISO 27001 Shared Services](./iso27001-shared/index.md) | Provides a set of compliant infrastructure patterns and policy guardrails that help toward ISO 27001 attestation. |
| [ISO 27001 App Service Environment/SQL Database workload](./iso27001-ase-sql-workload/index.md) | Provides more infrastructure to the [ISO 27001 Shared Services](./iso27001-shared/index.md) blueprint sample. |
-| [Media](./medi) | Provides a set of policies to help comply with Media MPAA. |
| [New Zealand ISM Restricted](./new-zealand-ism.md) | Assigns policies to address specific New Zealand Information Security Manual controls. |
-| [NIST SP 800-171 R2](./nist-sp-800-171-r2.md) | Provides guardrails for compliance with NIST SP 800-171 R2. |
-| [PCI-DSS v3.2.1](./pci-dss-3.2.1/index.md) | Provides a set of policies to aide in PCI-DSS v3.2.1 compliance. |
| [SWIFT CSP-CSCF v2020](./swift-2020/index.md) | Aids in SWIFT CSP-CSCF v2020 compliance. |
| [UK OFFICIAL and UK NHS Governance](./ukofficial-uknhs.md) | Provides a set of compliant infrastructure patterns and policy guardrails that help toward UK OFFICIAL and UK NHS attestation. |
| [CAF Foundation](./caf-foundation/index.md) | Provides a set of controls to help you manage your cloud estate in alignment with the [Microsoft Cloud Adoption Framework for Azure (CAF)](/azure/architecture/cloud-adoption/governance/journeys/index). |
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/irs-1075-sept2016.md
- Title: IRS 1075 September 2016 blueprint sample
-description: Overview of the IRS 1075 September 2016 blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/08/2021--
-# IRS 1075 September 2016 blueprint sample
-
-The IRS 1075 September 2016 blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific
-IRS 1075 September 2016 controls. This blueprint helps
-customers deploy a core set of policies for any Azure-deployed architecture that must implement
-controls for IRS 1075 September 2016.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/irs-1075-sept2016.md) provides details on
-policy definitions included within this blueprint and how these policy definitions map to the
-**controls** in the IRS 1075 September 2016 framework. When assigned to an architecture, resources
-are evaluated by Azure Policy for non-compliance with assigned policy definitions. For more
-information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints IRS 1075 September 2016 blueprint sample,
-the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **IRS 1075 September 2016** blueprint sample under _Other Samples_ and select **Use this
- sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the IRS 1075 September 2016 blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with IRS 1075 September 2016 controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the IRS 1075
- September 2016 blueprint sample." Then select **Publish** at the bottom of the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since they're
- defined during the assignment of the blueprint. For a full list of artifact parameters and
- their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
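Instead of opening the assignment in the portal, the deployment status can also be read programmatically. This is a minimal sketch, assuming the Azure Blueprints REST API (api-version `2018-11-01-preview`); the subscription ID and assignment name are placeholders.

```python
# A minimal sketch, assuming the Azure Blueprints REST API (api-version
# 2018-11-01-preview). The subscription ID and assignment name are placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
assignment_name = "<your-assignment-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/Microsoft.Blueprint/blueprintAssignments/{assignment_name}"
    "?api-version=2018-11-01-preview"
)
response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print("Provisioning state:", response.json()["properties"]["provisioningState"])
```
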
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|Audit IRS 1075 (Rev.11-2016) controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Log Analytics workspace ID that VMs should be configured for|This is the ID (GUID) of the Log Analytics workspace that the VMs should be configured for.|
-|Audit IRS 1075 (Rev.11-2016) controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of resource types that should have diagnostic logs enabled|List of resource types to audit if diagnostic log setting is not enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../azure-monitor/essentials/resource-logs-schema.md#service-specific-schemas).|
-|Audit IRS 1075 (Rev.11-2016) controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be excluded from Windows VM Administrators group|A semicolon-separated list of members that should be excluded in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|Audit IRS 1075 (Rev.11-2016) controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be included in Windows VM Administrators group|A semicolon-separated list of members that should be included in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Linux VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Log Analytics Agent for Linux VMs|Policy assignment|Log Analytics workspace for Linux VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics Agent for Linux VMs|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Windows VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../policy/concepts/effects.md) |
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
-|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
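Several of the parameters above expect semicolon-separated strings or arrays. As a purely illustrative aid, this Python sketch shows one way to shape those values before entering them in the assignment form; the dictionary keys are hypothetical labels, not the blueprint's internal parameter names.

```python
# Illustrative only: shape the artifact parameter values described in the table.
admins_to_include = ["Administrator", "myUser1", "myUser2"]
admins_to_exclude = ["guestUser"]
resource_types = ["Microsoft.Sql/servers", "Microsoft.KeyVault/vaults"]

artifact_parameter_values = {
    # Semicolon-separated member lists for the Windows VM Administrators group.
    "includedAdministrators": "; ".join(admins_to_include),
    "excludedAdministrators": "; ".join(admins_to_exclude),
    # Resource types to audit for diagnostic log settings; [] means none.
    "resourceTypesWithDiagnosticLogs": resource_types,
}

print(artifact_parameter_values)
```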
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/media/control-mapping.md
- Title: Media blueprint sample controls
-description: Control mapping of the Media blueprint samples. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 09/08/2021--
-# Control mapping of the Media blueprint sample
-
-The following article details how the Azure Blueprints Media blueprint sample maps to the Media
-controls. For more information about the controls, see
-[Media](https://www.motionpictures.org/best-practices).
-
-The following mappings are to the **Media** controls. Use the navigation on the right to jump
-directly to a specific control mapping. Many of the mapped controls are implemented with an
-[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open
-**Policy** in the Azure portal and select the **Definitions** page. Then, find and select the
-**\[Preview\]: Audit Media controls** built-in policy initiative.
-
-> [!IMPORTANT]
-> Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
-> definitions. These policies may help you
-> [assess compliance](../../../policy/how-to/get-compliance-data.md) with the control; however,
-> there often is not a one-to-one or complete match between a control and one or more policies. As
-> such, **Compliant** in Azure Policy refers only to the policies themselves; this doesn't ensure
-> you're fully compliant with all requirements of a control. In addition, the compliance standard
-> includes controls that aren't addressed by any Azure Policy definitions at this time. Therefore,
-> compliance in Azure Policy is only a partial view of your overall compliance status. The
-> associations between controls and Azure Policy definitions for this compliance blueprint sample
-> may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/governance/blueprints/samples/medi).
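If you want to pull the same compliance picture programmatically, the sketch below summarizes policy states for a subscription through the Microsoft.PolicyInsights `summarize` endpoint. The URL shape, `api-version`, and response fields are assumptions based on the Policy Insights REST API, not details taken from this article.

```python
# Hedged sketch: summarize Azure Policy compliance for a subscription.
# Path, api-version, and response shape are assumptions; verify before use.
import requests

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
TOKEN = "<bearer token>"                                    # placeholder

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.PolicyInsights/policyStates/latest/summarize"
    "?api-version=2019-10-01"
)

resp = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
summary = resp.json()["value"][0]["results"]
print("Non-compliant resources:", summary.get("nonCompliantResources"))
print("Non-compliant policies: ", summary.get("nonCompliantPolicies"))
```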
-
-## Access Control
-
-### AC-1.1- Ensure no root access key exists
--- \[Preview\]: Deploy prerequisites to audit Windows VMs that do not contain the specified
- certificates in Trusted Root
-
-### AC-1.2 - Passwords, PINs, and Tokens must be protected
--- \[Preview\]: Deploy prerequisites to audit Windows VMs that do not restrict the minimum password
- length to 14 characters
-
-### AC-1.8 - Shared account access is prohibited
--- All authorization rules except RootManageSharedAccessKey should be removed from Service Bus
- namespace
-
-### AC-1.9 -System must restrict access to authorized users.
--- Audit unrestricted network access to storage accounts-
-### AC- 1.14 -System must enforce access rights.
--- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'User Rights Assignment'-
-### AC- 1.15 -Prevent unauthorized access to security relevant information or functions.
--- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - System
- settings'
-
-### AC-1-21 - Separation of duties must be enforced through appropriate assignment of role.
-- \[Preview\]: Role-Based Access Control (RBAC) should be used on Kubernetes Services
-### AC-1.40- Ensure that systems are not connecting trusted network and untrusted networks at the same time.
--- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'Security Options -
- Network Access'
-
-### AC-1.42 & AC- 1.43 - Remote access for non-employees must be restricted to allow access only to specifically approved information systems
--- \[Preview\]: Show audit results from Linux VMs that allow remote connections from accounts without
- passwords
-
-### AC-1.50- Log security related events for all information system components.
--- Diagnostic logs in Logic Apps should be enabled-
-### AC-1.54- Ensure multi-factor authentication (MFA) is enabled for all cloud console users.
-- MFA should be enabled accounts with write permissions on your subscription
-- Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write
- privileges to prevent a breach of accounts or resources.
-
-## Auditing & Logging
-
-### AL-2.1- Successful and unsuccessful events must be logged.
--- Diagnostic logs in Search services should be enabled-
-### AL -2.16 - Network devices/instances must log any event classified as a critical security event by that network device/instance (ELBs, web application firewalls, etc.)
--- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - Accounts'-
-### AL-2.17- Servers/instances must log any event classified as a critical security event by that server/instance
--- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - Accounts'-
-### AL-2.19 - Domain events must log any event classified as a critical or high security event by the domain management software
-- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - Accounts'
-- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'Security Options -
- Microsoft Network Client'
-
-### AL-2.20- Domain events must log any event classified as a critical security event by domain security controls
--- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - Accounts'-
-### AL-2.21- Domain events must log any access or changes to the domain log
--- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - Recovery
- console'
-
-## Cryptographic Controls
-
-### CC-4.2- Applications and systems must use current cryptographic solutions for protecting data.
-- Transparent Data Encryption on SQL databases should be enabled
-- Transparent data encryption should be enabled to protect data-at-rest and meet compliance
- requirements
-
-### CC-4.5- Digital Certificates must be signed by an approved Certificate Authority.
--- \[Preview\]: Show audit results from Windows VMs that contain certificates expiring within the
- specified number of days
-
-### CC-4.6- Digital Certificates must be uniquely assigned to a user or device.
--- \[Preview\]: Deploy prerequisites to audit Windows VMs that contain certificates expiring within
- the specified number of days
-
-### CC-4.7- Cryptographic material must be stored to enable decryption of the records for the length of time the records are retained.
-- Disk encryption should be applied on virtual machines
-- VMs without an enabled disk encryption will be monitored by Azure Security Center as
- recommendations
-
-### CC-4.8- Secret and private keys must be stored securely.
-- Transparent Data Encryption on SQL databases should be enabled
-- Transparent data encryption should be enabled to protect data-at-rest and meet compliance
- requirements
-
-## Change & Config Management
-
-### CM-5.2- Only authorized users may implement approved changes on the system.
-- System updates should be installed on your machines
-- Missing security system updates on your servers will be monitored by Azure Security Center as
- recommendations
-
-### CM-5.12- Maintain an up-to-date, complete, accurate, and readily available baseline configuration of the information system.
-- System updates should be installed on your machines
-- Missing security system updates on your servers will be monitored by Azure Security Center as
- recommendations
-
-### CM-5.13- Employ automated tools to maintain a baseline configuration of the information system.
-- System updates should be installed on your machines
-- Missing security system updates on your servers will be monitored by Azure Security Center as
- recommendations
-
-### CM-5.14- Identify and disable unnecessary and/or non-secure functions, ports, protocols and services.
-- Network interfaces should disable IP forwarding
-- \[Preview\]: IP Forwarding on your virtual machine should be disabled
-### CM-5.19- Monitor changes to the security configuration settings.
--- Deploy Diagnostic Settings for Network Security Groups-
-### CM-5.22- Ensure that only authorized software and updates are installed on Company systems.
-- System updates should be installed on your machines
-- Missing security system updates on your servers will be monitored by Azure Security Center as
- recommendations
-
-## Identity & Authentication
-
-### IA-7.1- User accounts must be uniquely assigned to individuals for access to information that is not classified as Public. Account IDs must be constructed using a standardized logical format.
-- External accounts with owner permissions should be removed from your subscription
-- External accounts with owner permissions should be removed from your subscription in order to
- prevent unmonitored access.
-
-## Network Security
-
-### NS-9.2- Access to network device management functionality is restricted to authorized users.
--- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'Security Options -
- Network Access'
-
-### NS-9.3- All network devices must be configured using their most secure configurations.
--- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'Security Options -
- Network Access'
-
-### NS-9.5- All network connections to a system through a firewall must be approved and audited on a regular basis.
--- \[Preview\]: Show audit results from Windows VMs configurations in 'Windows Firewall Properties'-
-### NS-9.7- Appropriate controls must be present at any boundary between a trusted network and any untrusted or public network.
--- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'Windows Firewall
- Properties'
-
-## Security Planning
-
-### SP-11.3- Threats must be identified that could negatively impact the confidentiality, integrity, or availability of Company information and content along with the likelihood of their occurrence.
--- Advanced Threat Protection types should be set to 'All' in SQL managed instance Advanced Data
- Security settings
-
-## Security Continuity
-
-### SC-12.5- Data in long-term storage must be accessible throughout the retention period and protected against media degradation and technology changes.
-- SQL servers should be configured with auditing retention days greater than 90 days.
-- Audit SQL servers configured with an auditing retention period of less than 90 days.
-## System Integrity
-
-### SI-14.3- Only authorized personnel may monitor network and user activities.
-- Vulnerabilities on your SQL databases should be remediated
-- Monitor Vulnerability Assessment scan results and recommendations for how to remediate database
- vulnerabilities.
-
-### SI-14.4- Internet facing systems must have intrusion detection.
--- Deploy Threat Detection on SQL servers-
-### SI-14.13- Standardized centrally managed anti-malware software should be implemented across the company.
--- Deploy default Microsoft IaaSAntimalware extension for Windows Server-
-### SI-14.14- Anti-malware software must scan computers and media weekly at a minimum.
--- Deploy default Microsoft IaaSAntimalware extension for Windows Server-
-## Vulnerability Management
-
-### VM-15.4- Ensure that applications are scanned for vulnerabilities on a monthly basis.
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks.
-### VM-15.5- Ensure that vulnerabilities are identified, paired to threats, and evaluated for risk.
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks.
-### VM-15.6- Ensure that identified vulnerabilities have been remediated within a mutually agreed upon timeline.
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks.
-### VM-15.7- Access to and use of vulnerability management systems must be restricted to authorized personnel.
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks.
-> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
-
-## Next steps
-
-You've reviewed the control mapping of the Media blueprint sample. Next, visit the following
-articles to learn about the overview and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [Media blueprint - Overview](./index.md)
-> [Media blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/media/deploy.md
- Title: Deploy Media blueprint sample
-description: Deploy steps for the Media blueprint sample including blueprint artifact parameter details.
Previously updated : 09/08/2021--
-# Deploy the Media blueprint sample
-
-To deploy the Media blueprint sample, the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-## Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** and search for and select **Policy** in the left pane. On the **Policy**
- page, select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **Media** blueprint sample under _Other Samples_ and select **Use this
- sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-## Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move
-it away from the standard.
-
-1. Select **All services** and search for and select **Policy** in the left pane. On the **Policy**
- page, select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
 **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the Media blueprint sample." Then select **Publish** at the bottom of the page.
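The same publish step can be automated. This hedged Python sketch issues the REST call that publishes a draft blueprint definition as a named version at a management group scope; the URL shape and `api-version` are assumptions based on the Microsoft.Blueprint preview API, and the management group, blueprint name, and version are placeholders.

```python
# Hedged sketch: publish a draft blueprint definition as a version over REST.
# URL shape and api-version are assumptions; names below are placeholders.
import requests

MGMT_GROUP = "myManagementGroup"        # placeholder
BLUEPRINT_NAME = "media-sample-copy"    # placeholder
VERSION = "v1.0"                        # placeholder
TOKEN = "<bearer token>"                # placeholder

url = (
    "https://management.azure.com/providers/Microsoft.Management"
    f"/managementGroups/{MGMT_GROUP}/providers/Microsoft.Blueprint"
    f"/blueprints/{BLUEPRINT_NAME}/versions/{VERSION}"
    "?api-version=2018-11-01-preview"
)

body = {"properties": {"changeNotes": "First version published from the Media blueprint sample."}}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print("Published version:", resp.json()["name"])
```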
-
-## Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are
-provided to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** and search for and select **Policy** in the left pane. On the **Policy**
- page, select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see [blueprints resource locking](../../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../../concepts/parameters.md#dynamic-parameters) since
- they're defined during the assignment of the blueprint. For a full list or artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/)
-> to estimate the cost of running resources deployed by this blueprint sample.
-
-## Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VMs |Policy assignment |Log Analytics workspace for Linux VMs |For more information, see [Create a Log Analytics workspace in the Azure portal](../../../../azure-monitor/logs/quick-create-workspace.md). |
-|\[Preview\]: Deploy Log Analytics Agent for Linux VMs |Policy assignment |Optional: List of VM images that have supported Linux OS to add to scope |An empty array may be used to indicate no optional parameters: `[]` |
-|\[Preview\]: Deploy Log Analytics Agent for Windows VMs |Policy assignment |Optional: List of VM images that have supported Windows OS to add to scope |An empty array may be used to indicate no optional parameters: `[]` |
-|\[Preview\]: Deploy Log Analytics Agent for Windows VMs |Policy assignment |Log Analytics workspace for Windows VMs |For more information, see [Create a Log Analytics workspace in the Azure portal](../../../../azure-monitor/logs/quick-create-workspace.md). |
-|\[Preview\]: Audit Media controls and deploy specific VM Extensions to support audit requirements |Policy assignment |Log Analytics workspace ID that VMs should be configured for |This is the ID (GUID) of the Log Analytics workspace that the VMs should be configured for. |
-|\[Preview\]: Audit Media controls and deploy specific VM Extensions to support audit requirements |Policy assignment |List of resource types that should have diagnostic logs enabled |List of resource types to audit if diagnostic log setting isn't enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../../azure-monitor/essentials/resource-logs-schema.md#service-specific-schemas). |
-|\[Preview\]: Audit Media controls and deploy specific VM Extensions to support audit requirements |Policy assignment |List of users that should be excluded from Windows VM Administrators group |A semicolon-separated list of members that should be excluded from the Administrators local group. Example: `Administrator; myUser1; myUser2` |
-|\[Preview\]: Audit Media controls and deploy specific VM Extensions to support audit requirements |Policy assignment |List of users that should be included in Windows VM Administrators group |A semicolon-separated list of members that should be included in the Administrators local group. Example: `Administrator; myUser1; myUser2` |
-|Deploy Advanced Threat Protection on Storage Accounts |Policy assignment |Effect |Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md). |
-|Deploy Auditing on SQL servers |Policy assignment |The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, _180_ days if unspecified) |
-|Deploy Auditing on SQL servers |Policy assignment |Resource group name for storage account for SQL server auditing |Auditing writes database events to an audit log in your Azure Storage account (a storage account is created in each region where a SQL Server is created that is shared by all servers in that region). Important - for proper operation of Auditing don't delete or rename the resource group or the storage accounts. |
-|Deploy diagnostic settings for Network Security Groups |Policy assignment |Storage account prefix for network security group diagnostics |This prefix is combined with the network security group location to form the created storage account name. |
-|Deploy diagnostic settings for Network Security Groups |Policy assignment |Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account is created in. This resource group must already exist. |
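The Log Analytics workspace ID parameter above expects the workspace's GUID rather than its display name. A small local check like the following Python sketch can catch an obviously malformed value before you assign the blueprint; it's an illustration only and doesn't call Azure.

```python
# Illustrative local validation for the "Log Analytics workspace ID" parameter.
import uuid

def is_guid(value: str) -> bool:
    """Return True when the value parses as a GUID/UUID."""
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False

print(is_guid("12345678-90ab-cdef-1234-567890abcdef"))  # True
print(is_guid("not-a-workspace-id"))                    # False
```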
-
-## Next steps
-
-Now that you've reviewed the steps to deploy the Media sample, visit the following
-articles to learn about the overview and control mapping:
-
-> [!div class="nextstepaction"]
-> [Media blueprints - Overview](./index.md)
-> [Media blueprints - Control mapping](./control-mapping.md)
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/media/index.md
- Title: Media blueprint sample overview
-description: Overview of the Media blueprint sample. This blueprint sample helps customers assess specific Media controls.
Previously updated : 09/08/2021--
-# Overview of the Media blueprint sample
-
-The Media blueprint sample provides a set of governance guardrails using
-[Azure Policy](../../../policy/overview.md) that help toward
-[Media](https://www.hhs.gov/hipaa/for-professionals/security/laws-regulations/index.html)
-attestation.
-
-## Blueprint sample
-
-The blueprint sample helps customers deploy a core set of policies for any Azure-deployed
-architecture requiring accreditation or compliance with the Media framework. The
-[control mapping](./control-mapping.md) section provides details on policies included within this
-initiative and how these policies help meet various controls defined by the Media framework. When
-assigned to an architecture, resources are evaluated by Azure Policy for compliance with assigned
-policies.
-
-## Next steps
-
-You've reviewed the overview of the Media blueprint sample. Next, visit the following
-articles to learn about the control mapping and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [Media blueprint - Control mapping](./control-mapping.md)
-> [Media blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/nist-sp-800-171-r2.md
- Title: NIST SP 800-171 R2 blueprint sample overview
-description: Overview of the NIST SP 800-171 R2 blueprint sample. This blueprint sample helps customers assess specific NIST SP 800-171 R2 requirements or controls.
Previously updated : 09/08/2021--
-# NIST SP 800-171 R2 blueprint sample
-
-The NIST SP 800-171 R2 blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific NIST SP 800-171 R2
-requirements or controls. This blueprint helps customers deploy a core set of policies for any
-Azure-deployed architecture that must implement NIST SP 800-171 R2 requirements or controls.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/nist-sp-800-171-r2.md) provides details on
-policy definitions included within this blueprint and how these policy definitions map to the
-**compliance domains** and **requirements** in NIST SP 800-171 R2. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policy definitions. For
-more information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints NIST SP 800-171 R2 blueprint sample, the following steps must
-be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **NIST SP 800-171 R2** blueprint sample under _Other Samples_ and select **Use
- this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the NIST SP 800-171 R2 blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with NIST SP 800-171 requirements.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
 **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the NIST SP
- 800-171 R2 blueprint sample." Then select **Publish** at the bottom of the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since they're
 defined during the assignment of the blueprint. For a full list of artifact parameters and
- their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|List of users that should be excluded from Windows VM Administrators group|A semicolon-separated list of members that should be excluded in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|List of users that should be included in Windows VM Administrators group|A semicolon-separated list of members that should be included in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|List of regions where Network Watcher should be enabled|A semicolon-separated list of regions. To see a complete list of regions use Get-AzLocation. Ex: East US; East US2|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Log Analytics workspace ID that VMs should be configured for|This is the ID (GUID) of the Log Analytics workspace that the VMs should be configured for.|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Optional: List of Windows VM images that support Log Analytics agent to add to audit scope|A semicolon-separated list of images|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Optional: List of Linux VM images that support Log Analytics agent to add to audit scope|A semicolon-separated list of images|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Latest PHP version|Latest supported PHP version for App Services|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Latest Java version|Latest supported Java version for App Services|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Latest Windows Python version|Latest supported Python version for App Services|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Latest Linux Python version|Latest supported Python version for App Services|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|List of resource types that should have diagnostic logs enabled|List of resource types to audit if diagnostic log setting is not enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../azure-monitor/essentials/resource-logs-schema.md).|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Minimum TLS version for Windows Web servers|The minimum TLS protocol version that should be enabled on Windows web servers.|
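For the Network Watcher regions parameter, the table suggests `Get-AzLocation` to enumerate regions. If you're scripting in Python instead, a sketch along these lines builds the same semicolon-separated list; the `azure-identity` and `azure-mgmt-resource` calls shown are assumptions based on current SDK versions, so confirm them against the SDK reference.

```python
# Hedged sketch: build the semicolon-separated region list expected by the
# "List of regions where Network Watcher should be enabled" parameter.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

client = SubscriptionClient(DefaultAzureCredential())
locations = client.subscriptions.list_locations(SUBSCRIPTION_ID)

# Mirrors the "East US; East US2" format shown in the table.
region_list = "; ".join(loc.display_name for loc in locations)
print(region_list)
```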
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md
- Title: PCI-DSS v3.2.1 blueprint sample controls
-description: Control mapping of the Payment Card Industry Data Security Standard v3.2.1 blueprint sample to Azure Policy and Azure RBAC.
Previously updated : 09/08/2021--
-# Control mapping of the PCI-DSS v3.2.1 blueprint sample
-
-The following article details how the Azure Blueprints PCI-DSS v3.2.1 blueprint sample maps to the
-PCI-DSS v3.2.1 controls. For more information about the controls, see
-[PCI-DSS v3.2.1](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf).
-
-The following mappings are to the **PCI-DSS v3.2.1:2018** controls. Use the navigation on the right
-to jump directly to a specific control mapping. Many of the mapped controls are implemented with an
-[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open
-**Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **PCI
-v3.2.1:2018** built-in policy initiative.
-
-> [!IMPORTANT]
-> Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
-> definitions. These policies may help you
-> [assess compliance](../../../policy/how-to/get-compliance-data.md) with the control; however,
-> there often is not a one-to-one or complete match between a control and one or more policies. As
-> such, **Compliant** in Azure Policy refers only to the policies themselves; this doesn't ensure
-> you're fully compliant with all requirements of a control. In addition, the compliance standard
-> includes controls that aren't addressed by any Azure Policy definitions at this time. Therefore,
-> compliance in Azure Policy is only a partial view of your overall compliance status. The
-> associations between controls and Azure Policy definitions for this compliance blueprint sample
-> may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md).
-
-## 1.3.2 and 1.3.4 Boundary Protection
-
-This blueprint helps you manage and control networks by assigning [Azure
-Policy](../../../policy/overview.md) definitions that monitor network security groups with
-permissive rules. Rules that are too permissive may allow unintended network access and should be
-reviewed. This blueprint assigns Azure Policy definitions that monitor unprotected endpoints,
-applications, and storage accounts. Endpoints and applications that aren't protected by a firewall,
-and storage accounts with unrestricted access can allow unintended access to information contained
-within the information system.
-- Audit unrestricted network access to storage accounts
-- Access through Internet facing endpoint should be restricted
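As a companion to the policy definitions listed above, the following Python sketch shows the kind of check they automate: flagging network security group rules that allow inbound traffic from any source. The `azure-mgmt-network` attribute names are assumptions based on current SDK models and the rule test is deliberately simplistic, so treat it as an illustration rather than a compliance tool.

```python
# Hedged sketch: flag NSG rules that allow inbound traffic from any source.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for nsg in client.network_security_groups.list_all():
    for rule in nsg.security_rules or []:
        if (
            rule.direction == "Inbound"
            and rule.access == "Allow"
            and rule.source_address_prefix in ("*", "0.0.0.0/0", "Internet")
        ):
            print(f"Review {nsg.name}: rule '{rule.name}' allows inbound from {rule.source_address_prefix}")
```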
-## 3.4.a, 4.1, 4.1.g, 4.1.h and 6.5.3 Cryptographic Protection
-
-This blueprint helps you enforce your policy with the use of cryptographic controls by assigning
-[Azure Policy](../../../policy/overview.md) definitions that enforce specific cryptographic controls
-and audit use of weak cryptographic settings. Understanding where your Azure resources may have
-non-optimal cryptographic configurations can help you take corrective actions to ensure resources
-are configured in accordance with your information security policy. Specifically, the policies
-assigned by this blueprint require transparent data encryption on SQL databases and audit missing
-encryption on storage accounts and automation account variables. There are also policies that audit
-insecure connections to storage accounts, Function Apps, Web Apps, API Apps, and Redis Cache, and
-audit unencrypted Service Fabric communication.
-- Function App should only be accessible over HTTPS
-- Web Application should only be accessible over HTTPS
-- API App should only be accessible over HTTPS
-- Transparent Data Encryption on SQL databases should be enabled
-- Disk encryption should be applied on virtual machines
-- Automation account variables should be encrypted
-- Only secure connections to your Redis Cache should be enabled
-- Secure transfer to storage accounts should be enabled
-- Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign
-- Transparent Data Encryption on SQL databases should be enabled
-- Deploy SQL DB transparent data encryption
-## 5.1, 6.2, 6.6 and 11.2.1 Vulnerability Scanning and System Updates
-
-This blueprint helps you manage information system vulnerabilities by assigning [Azure
-Policy](../../../policy/overview.md) definitions that monitor missing system updates, operating
-system vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure Security
-Center. Azure Security Center provides reporting capabilities that enable you to have real-time
-insight into the security state of deployed Azure resources.
-- Monitor missing Endpoint Protection in Azure Security Center
-- Deploy default Microsoft IaaSAntimalware extension for Windows Server
-- Deploy Threat Detection on SQL Servers
-- System updates should be installed on your machines
-- Vulnerabilities in security configuration on your machines should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-## 7.1.1, 7.1.2 and 7.1.3 Separation of Duties
-
-Having only one Azure subscription owner doesn't allow for administrative redundancy. Conversely,
-having too many Azure subscription owners can increase the potential for a breach via a compromised
-owner account. This blueprint helps you maintain an appropriate number of Azure subscription owners
-by assigning [Azure Policy](../../../policy/overview.md) definitions which audit the number of
-owners for Azure subscriptions. Managing subscription owner permissions can help you implement
-appropriate separation of duties.
-- There should be more than one owner assigned to your subscription
-- A maximum of 3 owners should be designated for your subscription
-## 3.2, 7.2.1, 8.3.1.a and 8.3.1.b Management of Privileged Access Rights
-
-This blueprint helps you restrict and control privileged access rights by assigning [Azure
-Policy](../../../policy/overview.md) definitions to audit external accounts with owner, write and/or
-read permissions and employee accounts with owner and/or write permissions that don't have
-multi-factor authentication enabled. Azure role-based access control (Azure RBAC) helps to manage
-who has access to Azure resources. Understanding where custom Azure RBAC rules are implemented can
-help you verify need and proper implementation, as custom Azure RBAC rules are error prone. This
-blueprint also assigns [Azure Policy](../../../policy/overview.md) definitions to audit use of Azure
-Active Directory authentication for SQL Servers. Using Azure Active Directory authentication
-simplifies permission management and centralizes identity management of database users and other
-Microsoft services.
-- External accounts with owner permissions should be removed from your subscription
-- External accounts with write permissions should be removed from your subscription
-- External accounts with read permissions should be removed from your subscription
-- MFA should be enabled on accounts with owner permissions on your subscription
-- MFA should be enabled accounts with write permissions on your subscription
-- MFA should be enabled on accounts with read permissions on your subscription
-- An Azure Active Directory administrator should be provisioned for SQL servers
-- Audit usage of custom RBAC rules
-## 8.1.2 and 8.1.5 Least Privilege and Review of User Access Rights
-
-Azure role-based access control (Azure RBAC) helps you manage who has access to resources in
-Azure. Using the Azure portal, you can review who has access to Azure resources and their
-permissions. This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions to audit
-accounts that should be prioritized for review, including deprecated accounts and external accounts
-with elevated permissions.
-- Deprecated accounts should be removed from your subscription
-- Deprecated accounts with owner permissions should be removed from your subscription
-- External accounts with owner permissions should be removed from your subscription
-- External accounts with write permissions should be removed from your subscription
-- External accounts with read permissions should be removed from your subscription
-## 8.1.3 Removal or Adjustment of Access Rights
-
-Azure role-based access control (Azure RBAC) helps you manage who has access to resources in Azure.
-Using Azure Active Directory and Azure RBAC, you can update user roles to reflect organizational
-changes. When needed, accounts can be blocked from signing in (or removed), which immediately
-removes access rights to Azure resources. This blueprint assigns [Azure
-Policy](../../../policy/overview.md) definitions to audit deprecated accounts that should be
-considered for removal.
-- Deprecated accounts should be removed from your subscription
-- Deprecated accounts with owner permissions should be removed from your subscription
-## 8.2.3.a,b, 8.2.4.a,b and 8.2.5 Password-based Authentication
-
-This blueprint helps you enforce strong passwords by assigning [Azure
-Policy](../../../policy/overview.md) definitions that audit Windows VMs that don't enforce minimum
-strength and other password requirements. Awareness of VMs in violation of the password strength
-policy helps you take corrective actions to ensure passwords for all VM user accounts are compliant
-with policy.
-- \[Preview\]: Audit Windows VMs that do not have a maximum password age of 70 days
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a maximum password age of
- 70 days
-- \[Preview\]: Audit Windows VMs that do not restrict the minimum password length to 14 characters
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not restrict the minimum password
- length to 14 characters
-- \[Preview\]: Audit Windows VMs that allow re-use of the previous 24 passwords
-- \[Preview\]: Deploy requirements to audit Windows VMs that allow re-use of the previous 24
- passwords
-
-## 10.3 and 10.5.4 Audit Generation
-
-This blueprint helps you ensure system events are logged by assigning [Azure
-Policy](../../../policy/overview.md) definitions that audit log settings on Azure resources.
-Diagnostic logs provide insight into operations that were performed within Azure resources. Azure
-logs rely on synchronized internal clocks to create a time-correlated record of events across
-resources.
-- Auditing should be enabled on advanced data security settings on SQL Server
-- Audit diagnostic setting
-- Audit SQL server level Auditing settings
-- Deploy Auditing on SQL servers
-- Storage accounts should be migrated to new Azure Resource Manager resources
-- Virtual machines should be migrated to new Azure Resource Manager resources
-## 12.3.6 and 12.3.7 Information Security
-
-This blueprint helps you manage and control your network by assigning [Azure
-Policy](../../../policy/overview.md) definitions that audit the acceptable network locations and the
-approved company products allowed for the environment. These are customizable by each company
-through the policy parameters within each of these policies.
-- Allowed locations
-- Allowed locations for resource groups
-## Next steps
-
-Now that you've reviewed the control mapping of the PCI-DSS v3.2.1 blueprint, visit the following
-articles to learn about the overview and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [PCI-DSS v3.2.1 blueprint - Overview](./index.md)
-> [PCI-DSS v3.2.1 blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/pci-dss-3.2.1/deploy.md
- Title: Deploy PCI-DSS v3.2.1 blueprint sample
-description: Deploy steps for the Payment Card Industry Data Security Standard v3.2.1 blueprint sample including blueprint artifact parameter details.
Previously updated : 09/08/2021--
-# Deploy the PCI-DSS v3.2.1 blueprint sample
-
-To deploy the Azure Blueprints PCI-DSS v3.2.1 blueprint sample, the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-## Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **PCI-DSS v3.2.1** blueprint sample under _Other Samples_ and select **Use
- this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the PCI-DSS v3.2.1 blueprint
- sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-## Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move
-it away from the PCI-DSS v3.2.1 standard.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
 **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the PCI-DSS
- v3.2.1 blueprint sample." Then select **Publish** at the bottom of the page.
-
-## Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are
-provided to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../../concepts/parameters.md#dynamic-parameters) since
 they're defined during the assignment of the blueprint. For a full list of artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-## Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|PCI v3.2.1:2018|Policy Assignment|List of Resource Types | Audit diagnostic setting for selected resource types. Default value is all resources are selected|
-|Allowed locations|Policy Assignment|List Of Allowed Locations|List of data center locations allowed for any resource to be deployed into. This list is customizable to the desired Azure locations globally. Select locations you wish to allow.|
-|Allowed Locations for resource groups|Policy Assignment |Allowed Location |This policy enables you to restrict the locations your organization can create resource groups in. Use to enforce your geo-compliance requirements.|
-|Deploy Auditing on SQL servers|Policy Assignment|Retention days|Data retention in number of days. Default value is 180 but PCI requires 365.|
-|Deploy Auditing on SQL servers|Policy Assignment|Resource group name for storage account|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region).|
-
-## Next steps
-
-Now that you've reviewed the steps to deploy the PCI-DSS v3.2.1 blueprint sample, visit the
-following articles to learn about the overview and control mapping:
-
-> [!div class="nextstepaction"]
-> [PCI-DSS v3.2.1 blueprint - Overview](./index.md)
-> [PCI-DSS v3.2.1 blueprint - Control mapping](./control-mapping.md)
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/pci-dss-3.2.1/index.md
- Title: PCI-DSS v3.2.1 blueprint sample overview
-description: Overview of the Payment Card Industry Data Security Standard v3.2.1 blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/08/2021--
-# Overview of the PCI-DSS v3.2.1 blueprint sample
-
-The PCI-DSS v3.2.1 blueprint sample is a set of policies that aids in achieving PCI-DSS v3.2.1
-compliance. This blueprint helps customers govern cloud-based environments with PCI-DSS workloads.
-The PCI-DSS blueprint deploys a core set of policies for any Azure-deployed architecture requiring
-this accreditation.
-
-## Control mapping
-
-The control mapping section provides details on policies included within this initiative and how
-these policies help meet various controls defined by PCI-DSS v3.2.1. When assigned to an
-architecture, resources are evaluated by Azure Policy for non-compliance with assigned policies.
-
-After assigning this blueprint, view your Azure environment's level of compliance in the Azure Policy
-Compliance Dashboard.
-
-## Next steps
-
-You've reviewed the overview of the PCI-DSS v3.2.1 blueprint sample. Next, visit the following
-articles to learn about the control mapping and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [PCI-DSS v3.2.1 blueprint - Control mapping](./control-mapping.md)
-> [PCI-DSS v3.2.1 blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/resource-graph-samples.md
Title: Azure Resource Graph sample queries for management groups description: Sample Azure Resource Graph queries for management groups showing use of resource types and tables to access management group details. Previously updated : 02/16/2022 Last updated : 03/08/2022
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 02/15/2022 Last updated : 03/08/2022
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 02/15/2022 Last updated : 03/08/2022
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **IRS1075 September 2016** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[IRS 1075 September 2016 blueprint sample](../../blueprints/samples/irs-1075-sept2016.md).
-
> [!IMPORTANT]
> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-linux.md
Title: Reference - Azure Policy guest configuration baseline for Linux description: Details of the Linux baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 02/16/2022 Last updated : 03/08/2022
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-windows.md
Title: Reference - Azure Policy guest configuration baseline for Windows description: Details of the Windows baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 02/16/2022 Last updated : 03/08/2022
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/15/2022 Last updated : 03/11/2022
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **IRS1075 September 2016** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[IRS 1075 September 2016 blueprint sample](../../blueprints/samples/irs-1075-sept2016.md).
-
> [!IMPORTANT]
> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Policy description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
Title: Supported Azure Resource Manager resource types description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History. Previously updated : 02/16/2022 Last updated : 03/08/2022
For sample queries for this table, see [Resource Graph sample queries for adviso
- microsoft.chaos/targets - microsoft.chaos/targets/capabilities
+## communitygalleryresources
+
+- microsoft.compute/locations/communitygalleries
+- microsoft.compute/locations/communitygalleries/images
+- microsoft.compute/locations/communitygalleries/images/versions
+ ## desktopvirtualizationresources - microsoft.desktopvirtualization/hostpools/sessionhosts
For sample queries for this table, see [Resource Graph sample queries for kubern
- microsoft.maintenance/applyupdates - microsoft.maintenance/configurationassignments
+- microsoft.maintenance/maintenanceconfigurations/applyupdates
- microsoft.maintenance/updates
-- microsoft.resources/subscriptions (Subscriptions)
- - Sample query: [Count of subscriptions per management group](../samples/samples-by-category.md#count-of-subscriptions-per-management-group)
- - Sample query: [Key vaults with subscription name](../samples/samples-by-category.md#key-vaults-with-subscription-name)
- - Sample query: [List all management group ancestors for a specified subscription](../samples/samples-by-category.md#list-all-management-group-ancestors-for-a-specified-subscription)
- - Sample query: [List all subscriptions under a specified management group](../samples/samples-by-category.md#list-all-subscriptions-under-a-specified-management-group)
- - Sample query: [Remove columns from results](../samples/samples-by-category.md#remove-columns-from-results)
- - Sample query: [Secure score per management group](../samples/samples-by-category.md#secure-score-per-management-group)
## patchassessmentresources
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.ApiManagement/service (API Management services) - microsoft.app/containerapps - microsoft.app/managedenvironments
+- microsoft.app/managedenvironments/certificates
- microsoft.appassessment/migrateprojects - Microsoft.AppConfiguration/configurationStores (App Configuration) - Microsoft.AppPlatform/Spring (Azure Spring Cloud)
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.Logic/workflows (Logic apps) - Microsoft.Logz/monitors (Logz main account) - Microsoft.Logz/monitors/accounts (Logz sub account)
+- Microsoft.Logz/monitors/metricsSource (Logz metrics data source)
- Microsoft.MachineLearning/commitmentPlans (Machine Learning Studio (classic) web service plans) - Microsoft.MachineLearning/webServices (Machine Learning Studio (classic) web services) - Microsoft.MachineLearning/workspaces (Machine Learning Studio (classic) workspaces)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.offazure/serversites - microsoft.offazure/vmwaresites - Microsoft.OpenEnergyPlatform/energyServices (Project Oak Forest)
+- microsoft.openlogisticsplatform/applicationmanagers
- microsoft.openlogisticsplatform/applicationworkspaces - Microsoft.OpenLogisticsPlatform/workspaces (Open Supply Chain Platform) - microsoft.operationalinsights/clusters
For sample queries for this table, see [Resource Graph sample queries for securi
- microsoft.authorization/roleassignments/providers/assessments/governanceassignments - microsoft.security/assessments - Sample query: [Count healthy, unhealthy, and not applicable resources per recommendation](../samples/samples-by-category.md#count-healthy-unhealthy-and-not-applicable-resources-per-recommendation)
- - Sample query: [List Azure Security Center recommendations](../samples/samples-by-category.md#list-azure-security-center-recommendations)
- Sample query: [List Container Registry vulnerability assessment results](../samples/samples-by-category.md#list-container-registry-vulnerability-assessment-results)
+ - Sample query: [List Microsoft Defender recommendations](../samples/samples-by-category.md)
- Sample query: [List Qualys vulnerability assessment results](../samples/samples-by-category.md#list-qualys-vulnerability-assessment-results) - microsoft.security/assessments/governanceassignments - microsoft.security/assessments/subassessments
For sample queries for this table, see [Resource Graph sample queries for securi
- Sample query: [Get specific IoT alert](../samples/samples-by-category.md#get-specific-iot-alert) - microsoft.security/locations/alerts (Security Alerts) - microsoft.security/pricings
- - Sample query: [Show Azure Defender pricing tier per subscription](../samples/samples-by-category.md#show-azure-defender-pricing-tier-per-subscription)
+ - Sample query: [Show Defender for Cloud plan pricing tier per subscription](../samples/samples-by-category.md)
- microsoft.security/regulatorycompliancestandards - Sample query: [Regulatory compliance state per compliance standard](../samples/samples-by-category.md#regulatory-compliance-state-per-compliance-standard) - microsoft.security/regulatorycompliancestandards/regulatorycompliancecontrols
For sample queries for this table, see [Resource Graph sample queries for servic
- Sample query: [All active Service Health events](../samples/samples-by-category.md#all-active-service-health-events) - Sample query: [All active service issue events](../samples/samples-by-category.md#all-active-service-issue-events)
+## spotresources
+
+- microsoft.compute/skuspotevictionrate/location
+- microsoft.compute/skuspotpricehistory/ostype/location
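The tables and resource types above are queried through Azure Resource Graph. As a quick illustration (an editorial sketch, not part of the original reference), the following Python snippet runs a Resource Graph query using the `azure-identity` and `azure-mgmt-resourcegraph` packages; the subscription ID is a placeholder and the query is only an example.

```python
# Sketch: run an Azure Resource Graph query from Python.
# Assumes `pip install azure-identity azure-mgmt-resourcegraph` and that you are
# already signed in (for example, via `az login`). The subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

request = QueryRequest(
    subscriptions=["<subscription-id>"],
    query="Resources | summarize count() by type | order by count_ desc",
)
result = client.resources(request)

print(f"Records: {result.total_records}")
print(result.data)
```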
+ ## workloadmonitorresources - microsoft.workloadmonitor/monitors
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource-Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 02/16/2022 Last updated : 03/08/2022
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [azure-resource-graph-samples-cat-azure-policy-guest-configuration](../../../../includes/resource-graph/samples/bycat/azure-policy-guest-configuration.md)]
-## Azure Security Center
-- ## Azure Service Health [!INCLUDE [azure-resource-graph-samples-cat-azure-service-health](../../../../includes/resource-graph/samples/bycat/azure-service-health.md)]
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [azure-resource-graph-samples-cat-management-groups](../../../../includes/resource-graph/samples/bycat/management-groups.md)]
+## Microsoft Defender
++ ## Networking [!INCLUDE [azure-resource-graph-samples-cat-networking](../../../../includes/resource-graph/samples/bycat/networking.md)]
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md
Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource-Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 02/16/2022 Last updated : 03/08/2022
hdinsight Apache Hadoop Hive Java Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-hive-java-udf.md
This problem may be caused by the line endings in the Python file. Many Windows
You can use the following PowerShell statements to remove the CR characters before uploading the file to HDInsight:

```PowerShell
-# Set $original_file to the python file path
+# Set $original_file to the Python file path
$text = [IO.File]::ReadAllText($original_file) -replace "`r`n", "`n"
[IO.File]::WriteAllText($original_file, $text)
```
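If you'd rather do the same normalization without PowerShell, here's a minimal Python sketch that converts CRLF line endings to LF before upload; the file name is a placeholder for your own UDF script, not something defined by the article.

```python
# Minimal sketch: normalize Windows (CRLF) line endings to Unix (LF) before upload.
# "hiveudf.py" is a placeholder path; point it at your own Python UDF file.
original_file = "hiveudf.py"

with open(original_file, "rb") as f:
    data = f.read()

with open(original_file, "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))
```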
hdinsight Python Udf Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/python-udf-hdinsight.md
In the commands below, replace `sshuser` with the actual username if different.
ssh sshuser@mycluster-ssh.azurehdinsight.net ```
-3. From the SSH session, add the python files uploaded previously to the storage for the cluster.
+3. From the SSH session, add the Python files uploaded previously to the storage for the cluster.
```bash hdfs dfs -put hiveudf.py /hiveudf.py
In the commands below, replace `sshuser` with the actual username if different.
ssh sshuser@mycluster-ssh.azurehdinsight.net ```
-3. From the SSH session, add the python files uploaded previously to the storage for the cluster.
+3. From the SSH session, add the Python files uploaded previously to the storage for the cluster.
```bash hdfs dfs -put pigudf.py /pigudf.py
hdinsight Hdinsight Analyze Twitter Data Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-analyze-twitter-data-linux.md
Twitter allows you to retrieve the data for each tweet as a JavaScript Object No
### Create a Twitter application
-1. From a web browser, sign in to [https://developer.twitter.com/apps/](https://developer.twitter.com/apps/). Select the **Sign-up now** link if you don't have a Twitter account.
+1. From a web browser, sign in to [https://developer.twitter.com](https://developer.twitter.com). Select the **Sign-up now** link if you don't have a Twitter account.
2. Select **Create New App**.
These commands store the data in a location that all nodes in the cluster can ac
You've learned how to transform an unstructured JSON dataset into a structured [Apache Hive](https://hive.apache.org/) table. To learn more about Hive on HDInsight, see the following documents: * [Get started with HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md)
-* [Analyze flight delay data using HDInsight](./interactive-query/interactive-query-tutorial-analyze-flight-data.md)
+* [Analyze flight delay data using HDInsight](./interactive-query/interactive-query-tutorial-analyze-flight-data.md)
hdinsight Hdinsight For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-for-vscode.md
Using the PySpark interactive command to submit the queries, follow these steps:
:::image type="content" source="./media/hdinsight-for-vscode/select-interpreter-to-start-jupyter-server.png" alt-text="select interpreter to start jupyter server":::
-8. Select the python option below.
+8. Select the Python option below.
:::image type="content" source="./media/hdinsight-for-vscode/choose-the-below-option.png" alt-text="choose the below option":::
hdinsight Hdinsight Hadoop Linux Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-linux-information.md
There are various ways to access data from outside the HDInsight cluster. The
If using __Azure Blob storage__, see the following links for ways that you can access your data: * [Azure CLI](/cli/azure/install-az-cli2): Command-Line interface commands for working with Azure. After installing, use the `az storage` command for help on using storage, or `az storage blob` for blob-specific commands.
-* [blobxfer.py](https://github.com/Azure/blobxfer): A python script for working with blobs in Azure Storage.
+* [blobxfer.py](https://github.com/Azure/blobxfer): A Python script for working with blobs in Azure Storage.
* Various SDKs: * [Java](https://github.com/Azure/azure-sdk-for-java)
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 12/27/2021 Last updated : 03/10/2022 # Archived release notes
Last updated 12/27/2021
Azure HDInsight is one of the most popular services among enterprise customers for open-source Apache Hadoop and Apache Spark analytics on Azure.
+## Release date: 12/27/2021
+
+This release applies to HDInsight 4.0. HDInsight releases are made available to all regions over several days. The release date here indicates the first region's release date. If you don't see the changes below, wait for the release to become live in your region over the next several days.
+
+The OS versions for this release are:
+- HDInsight 4.0: Ubuntu 18.04.5 LTS
+
+The HDInsight 4.0 image has been updated to mitigate the Log4j vulnerability as described in [Microsoft's Response to CVE-2021-44228 Apache Log4j 2](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/).
+
+> [!Note]
> * Any HDI 4.0 clusters created after 27 Dec 2021 00:00 UTC are created with an updated version of the image that mitigates the Log4j vulnerabilities. Hence, customers don't need to patch or reboot these clusters.
> * For new HDInsight 4.0 clusters created between 16 Dec 2021 01:15 UTC and 27 Dec 2021 00:00 UTC, for HDInsight 3.6, or for clusters in pinned subscriptions after 16 Dec 2021, the patch is automatically applied within the hour in which the cluster is created. However, customers must then reboot their nodes for the patching to complete (except for Kafka Management nodes, which are automatically rebooted).
+ ## Release date: 07/27/2021 This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days.
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-94901 | [HBASE-19285](https://issues.apache.org/jira/browse/HBASE-19285) | Add per-table latency histograms | | BUG-94908 | [ATLAS-1921](https://issues.apache.org/jira/browse/ATLAS-1921) | UI: Search using entity and trait attributes: UI doesn't perform range check and allows providing out of bounds values for integral and float data types. | | BUG-95086 | [RANGER-1953](https://issues.apache.org/jira/browse/RANGER-1953) | improvement on user-group page listing |
-| BUG-95193 | [SLIDER-1252](https://issues.apache.org/jira/browse/SLIDER-1252) | Slider agent fails with SSL validation errors with python 2.7.5-58 |
+| BUG-95193 | [SLIDER-1252](https://issues.apache.org/jira/browse/SLIDER-1252) | Slider agent fails with SSL validation errors with Python 2.7.5-58 |
| BUG-95314 | [YARN-7699](https://issues.apache.org/jira/browse/YARN-7699) | queueUsagePercentage is coming as INF for getApp REST api call | | BUG-95315 | [HBASE-13947](https://issues.apache.org/jira/browse/HBASE-13947), [HBASE-14517](https://issues.apache.org/jira/browse/HBASE-14517), [HBASE-17931](https://issues.apache.org/jira/browse/HBASE-17931) | Assign system tables to servers with highest version | | BUG-95392 | [ATLAS-2421](https://issues.apache.org/jira/browse/ATLAS-2421) | Notification updates to support V2 data structures |
hdinsight Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/rest-proxy.md
The steps below use the Azure portal. For an example using Azure CLI, see [Creat
## Client application sample
-You can use the python code below to interact with the REST proxy on your Kafka cluster. To use the code sample, follow these steps:
+You can use the Python code below to interact with the REST proxy on your Kafka cluster. To use the code sample, follow these steps:
1. Save the sample code on a machine with Python installed.
-1. Install required python dependencies by executing `pip3 install msal`.
+1. Install required Python dependencies by executing `pip3 install msal`.
1. Modify the code section **Configure these properties** and update the following properties for your environment:

|Property |Description |
You can use the python code below to interact with the REST proxy on your Kafka
|Client Secret|The secret for the application that you registered in the security group.|
|Kafkarest_endpoint|Get this value from the **Properties** tab in the cluster overview as described in the [deployment section](#create-a-kafka-cluster-with-rest-proxy-enabled). It should be in the following format – `https://<clustername>-kafkarest.azurehdinsight.net`|
-1. From the command line, execute the python file by executing `sudo python3 <filename.py>`
+1. From the command line, execute the Python file by executing `sudo python3 <filename.py>`
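Before running the full sample, you can sanity-check the Azure AD application values from the table above with the short, hedged sketch below (it needs `pip3 install msal requests`). The token scope and the `/v1/metadata/topics` path are assumptions made for illustration; refer to the full sample for the exact values it uses.

```python
# Hedged sketch, not the article's full sample: acquire an Azure AD token with MSAL
# and call the Kafka REST proxy once. Replace the placeholders with your own values;
# the scope and the metadata path below are illustrative assumptions.
import json

import msal
import requests

tenant_id = "<tenant-id>"
client_id = "<client-id>"
client_secret = "<client-secret>"
kafkarest_endpoint = "https://<clustername>-kafkarest.azurehdinsight.net"

app = msal.ConfidentialClientApplication(
    client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
    client_credential=client_secret,
)
token = app.acquire_token_for_client(scopes=["https://hib.azurehdinsight.net/.default"])

response = requests.get(
    f"{kafkarest_endpoint}/v1/metadata/topics",
    headers={"Authorization": "Bearer " + token["access_token"]},
)
print(json.dumps(response.json(), indent=2))
```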
The full sample code does the following actions:
This code does the following action:
For more information about getting OAuth tokens in Python, see [Python AuthenticationContext class](/python/api/adal/adal.authentication_context.authenticationcontext). You might see a delay while `topics` that aren't created or deleted through the Kafka REST proxy are reflected there. This delay is because of cache refresh. The **value** field of the Producer API has been enhanced. Now, it accepts JSON objects and any serialized form.

```python
-#Required python packages
+#Required Python packages
#pip3 install msal

import json
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
hdinsight Apache Spark Jupyter Notebook Use External Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-notebook-use-external-packages.md
In this article, you'll learn how to use the [spark-csv](https://search.maven.or
### Tools and extensions
-* [Use external python packages with Jupyter Notebooks in Apache Spark clusters on HDInsight Linux](apache-spark-python-package-installation.md)
+* [Use external Python packages with Jupyter Notebooks in Apache Spark clusters on HDInsight Linux](apache-spark-python-package-installation.md)
* [Use HDInsight Tools Plugin for IntelliJ IDEA to create and submit Spark Scala applications](apache-spark-intellij-tool-plugin.md) * [Use HDInsight Tools Plugin for IntelliJ IDEA to debug Apache Spark applications remotely](apache-spark-intellij-tool-plugin-debug-jobs-remotely.md) * [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)
hdinsight Apache Spark Python Package Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-python-package-installation.md
Title: Script action for Python packages with Jupyter on Azure HDInsight
-description: Step-by-step instructions on how to use script action to configure Jupyter Notebooks available with HDInsight Spark clusters to use external python packages.
+description: Step-by-step instructions on how to use script action to configure Jupyter Notebooks available with HDInsight Spark clusters to use external Python packages.
export PYSPARK3_PYTHON=${PYSPARK_PYTHON:-/usr/bin/miniforge/envs/py38/bin/python
HDInsight clusters depend on the built-in Python environments, both Python 2.7 and Python 3.5. Directly installing custom packages in those default built-in environments may cause unexpected library version changes and further break the cluster. To safely install custom external Python packages for your Spark applications, follow the steps below.
-1. Create Python virtual environment using conda. A virtual environment provides an isolated space for your projects without breaking others. When creating the Python virtual environment, you can specify python version that you want to use. You still need to create virtual environment even though you would like to use Python 2.7 and 3.5. This requirement is to make sure the cluster's default environment not getting broke. Run script actions on your cluster for all nodes with below script to create a Python virtual environment.
+1. Create a Python virtual environment by using conda. A virtual environment provides an isolated space for your projects without breaking others. When you create the Python virtual environment, you can specify the Python version that you want to use. You still need to create a virtual environment even if you want to use Python 2.7 or 3.5. This requirement makes sure the cluster's default environment doesn't get broken. Run script actions on your cluster for all nodes with the following script to create a Python virtual environment.
    - `--prefix` specifies a path where a conda virtual environment lives. There are several configs that need to be changed further based on the path specified here. In this example, we use py35new, because the cluster already has an existing virtual environment called py35.
    - `python=` specifies the Python version for the virtual environment. In this example, we use version 3.5, the same version that the cluster has built in. You can also use other Python versions to create the virtual environment.
hdinsight Apache Spark Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-settings.md
Spark clusters in HDInsight include a number of components by default. Each of t
|Component |Description| ||| |Spark Core|Spark Core, Spark SQL, Spark streaming APIs, GraphX, and Apache Spark MLlib.|
-|Anaconda|A python package manager.|
+|Anaconda|A Python package manager.|
|Apache Livy|The Apache Spark REST API, used to submit remote jobs to an HDInsight Spark cluster.| |Jupyter Notebooks and Apache Zeppelin Notebooks|Interactive browser-based UI for interacting with your Spark cluster.| |ODBC driver|Connects Spark clusters in HDInsight to business intelligence (BI) tools such as Microsoft Power BI and Tableau.|
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
iot-develop Concepts Azure Rtos Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-azure-rtos-security-practices.md
Using hardware-based X.509 certificates with TLS mutual authentication and a PKI
**Hardware**: No specific hardware requirements.
-**Azure RTOS**: Azure RTOS TLS provides support for mutual certificate authentication in both TLS Server and Client applications. For more information, see the [Azure RTOS NetX Secure TLS documentation](/netx-secure-tls/chapter1#netx-secure-unique-features).
+**Azure RTOS**: Azure RTOS TLS provides support for mutual certificate authentication in both TLS Server and Client applications. For more information, see the [Azure RTOS NetX Secure TLS documentation](/azure/rtos/netx-duo/netx-secure-tls/chapter1#netx-secure-unique-features).
**Application**: Applications using TLS should always default to mutual certificate authentication whenever possible. Mutual authentication requires TLS clients to have a device certificate. Mutual authentication is an optional TLS feature but is highly recommended when possible.
iot-dps Concepts Device Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-reprovision.md
Depending on the scenario, as a device moves between IoT hubs, it may also be ne
## Reprovisioning policies
-Depending on the scenario, a device usually sends a request to a provisioning service instance on reboot. It also supports a method to manually trigger provisioning on demand. The reprovisioning policy on an enrollment entry determines how the device provisioning service instance handles these provisioning requests. The policy also determines whether device state data should be migrated during reprovisioning. The same policies are available for individual enrollments and enrollment groups:
+Depending on the scenario, a device could send a request to a provisioning service instance on reboot. A device can also support a method to manually trigger provisioning on demand. The reprovisioning policy on an enrollment entry determines how the device provisioning service instance handles these provisioning requests. The policy also determines whether device state data should be migrated during reprovisioning. The same policies are available for individual enrollments and enrollment groups:
* **Re-provision and migrate data**: This policy is the default for new enrollment entries. This policy takes action when devices associated with the enrollment entry submit a new request (1). Depending on the enrollment entry configuration, the device may be reassigned to another IoT hub. If the device is changing IoT hubs, the device registration with the initial IoT hub will be removed. The updated device state information from that initial IoT hub will be migrated over to the new IoT hub (2). During migration, the device's status will be reported as **Assigning**.
Depending on the scenario, a device usually sends a request to a provisioning se
> [!NOTE] > DPS will always call the custom allocation webhook regardless of re-provisioning policy in case there is new [ReturnData](how-to-send-additional-data.md) for the device. If the re-provisioning policy is set to **never re-provision**, the webhook will be called but the device will not change its assigned hub.
+When designing your solution and defining your reprovisioning logic, there are a few things to consider. For example:
+
+* How often you expect your devices to restart
+* The [DPS quotas and limits](about-iot-dps.md#quotas-and-limits)
+* Expected deployment time for your fleet (phased rollout vs all at once)
+* Retry capability implemented in your client code, as described in the [Retry general guidance](/architecture/best-practices/transient-faults) at the Azure Architecture Center
+
+>[!TIP]
> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousand or even millions of devices at once. Instead, you should attempt to [get the device registration state](/rest/api/iot-dps/service/device-registration-state/get) and try to connect to IoT Hub with that information. If that fails, then try to reprovision, as the IoT Hub information might have changed. Keep in mind that querying for the registration state counts as a new device registration, so you should consider the [Device registration limit](about-iot-dps.md#quotas-and-limits). Also consider implementing appropriate retry logic, such as exponential back-off with randomization, as described in the [Retry general guidance](/architecture/best-practices/transient-faults).
>In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device and connect directly to IoT Hub after the first-time provisioning through DPS has occurred. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from the hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios:
+> * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors.
+> * For 429 errors, only retry after the time indicated in the Retry-After header.
+> * For 5xx errors, use exponential back-off, with the first retry at least 5 seconds after the response.
+> * On errors other than 429 and 5xx, re-register through DPS
+> * Ideally you should also support a [method](../iot-hub/iot-hub-devguide-direct-methods.md) to manually trigger provisioning on demand.
+>
> We also recommend taking into account the service limits when planning activities like pushing updates to your fleet. For example, updating the fleet all at once could cause all devices to re-register through DPS, which could easily exceed the registration quota limit. For such scenarios, consider planning device updates in phases instead of updating your entire fleet at the same time.
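To make the guidance above concrete, here is a minimal sketch of the "connect with cached hub information first, reprovision only on failure" pattern. It assumes the `azure-iot-device` Python SDK and an individual enrollment with symmetric key attestation; the environment variable names and the JSON file used as the on-device cache are illustrative assumptions, not part of the DPS service or SDK.

```python
# Sketch only: cache the DPS assignment on the device and go back through DPS only
# when connecting to the cached IoT hub fails. Assumes `pip install azure-iot-device`.
import json
import os

from azure.iot.device import IoTHubDeviceClient, ProvisioningDeviceClient

ID_SCOPE = os.environ["DPS_ID_SCOPE"]                 # placeholder configuration values
REGISTRATION_ID = os.environ["DPS_REGISTRATION_ID"]
SYMMETRIC_KEY = os.environ["DPS_SYMMETRIC_KEY"]
CACHE_FILE = "hub_assignment.json"                    # hypothetical on-device cache


def load_cached_assignment():
    """Return (assigned hub, device ID) from the local cache, or None on first boot."""
    if not os.path.exists(CACHE_FILE):
        return None
    with open(CACHE_FILE) as f:
        cached = json.load(f)
    return cached["assigned_hub"], cached["device_id"]


def reprovision():
    """Register (or re-register) through DPS and cache the resulting assignment."""
    prov_client = ProvisioningDeviceClient.create_from_symmetric_key(
        provisioning_host="global.azure-devices-provisioning.net",
        registration_id=REGISTRATION_ID,
        id_scope=ID_SCOPE,
        symmetric_key=SYMMETRIC_KEY,
    )
    result = prov_client.register()
    if result.status != "assigned":
        raise RuntimeError(f"DPS registration failed with status: {result.status}")
    assignment = (result.registration_state.assigned_hub,
                  result.registration_state.device_id)
    with open(CACHE_FILE, "w") as f:
        json.dump({"assigned_hub": assignment[0], "device_id": assignment[1]}, f)
    return assignment


def connect():
    """Connect using the cached assignment; fall back to reprovisioning once."""
    assignment = load_cached_assignment() or reprovision()
    for attempt in range(2):
        hub, device_id = assignment
        client = IoTHubDeviceClient.create_from_symmetric_key(
            symmetric_key=SYMMETRIC_KEY, hostname=hub, device_id=device_id)
        try:
            client.connect()
            return client
        except Exception:
            if attempt == 1:
                raise
            # The assigned hub may have changed; reprovision and retry once.
            assignment = reprovision()
```

A production device would layer the retry and back-off behavior described above on top of this (including honoring `Retry-After` for 429 responses) and, ideally, expose a direct method that triggers reprovisioning on demand.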
+
+>[!Note]
+> The [get device registration state API](/rest/api/iot-dps/service/device-registration-state/get) does not currently work for TPM devices (the API surface does not include enough information to authenticate the request).
++ ### Managing backwards compatibility Before September 2018, device assignments to IoT hubs had a sticky behavior. When a device went back through the provisioning process, it would only be assigned back to the same IoT hub.
iot-dps How To Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-reprovision.md
The following steps configure the allocation policy for a device's enrollment:
In order for devices to be reprovisioned based on the configuration changes made in the preceding sections, these devices must request reprovisioning.
-How often a device submits a provisioning request depends on the scenario. However, it is advised to program your devices to send a provisioning request to a provisioning service instance on reboot, and support a [method](../iot-hub/iot-hub-devguide-direct-methods.md) to manually trigger provisioning on demand. Provisioning could also be triggered by setting a [desired property](../iot-hub/iot-hub-devguide-device-twins.md#desired-property-example).
-
-The reprovisioning policy on an enrollment entry determines how the device provisioning service instance handles these provisioning requests, and if device state data should be migrated during reprovisioning. The same policies are available for individual enrollments and enrollment groups:
-
-For example code of sending provisioning requests from a device during a boot sequence, see [Auto-provisioning a simulated device](quick-create-simulated-device-tpm.md).
-
+How often a device submits a provisioning request depends on the scenario. When designing your solution and defining your reprovisioning logic, there are a few things to consider. For example:
+
+* How often you expect your devices to restart
+* The [DPS quotas and limits](about-iot-dps.md#quotas-and-limits)
+* Expected deployment time for your fleet (phased rollout vs all at once)
+* Retry capability implemented in your client code, as described in the [Retry general guidance](/architecture/best-practices/transient-faults) at the Azure Architecture Center
+
+>[!TIP]
> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousand or even millions of devices at once. Instead, you should attempt to [get the device registration state](/rest/api/iot-dps/service/device-registration-state/get) and try to connect to IoT Hub with that information. If that fails, then try to reprovision, as the IoT Hub information might have changed. Keep in mind that querying for the registration state counts as a new device registration, so you should consider the [Device registration limit](about-iot-dps.md#quotas-and-limits). Also consider implementing appropriate retry logic, such as exponential back-off with randomization, as described in the [Retry general guidance](/architecture/best-practices/transient-faults).
>In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device and connect directly to IoT Hub after the first-time provisioning through DPS has occurred. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from the hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios:
+> * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors.
+> * For 429 errors, only retry after the time indicated in the Retry-After header.
+> * For 5xx errors, use exponential back-off, with the first retry at least 5 seconds after the response.
+> * On errors other than 429 and 5xx, re-register through DPS
+> * Ideally you should also support a [method](../iot-hub/iot-hub-devguide-direct-methods.md) to manually trigger provisioning on demand.
+>
> We also recommend taking into account the service limits when planning activities like pushing updates to your fleet. For example, updating the fleet all at once could cause all devices to re-register through DPS, which could easily exceed the registration quota limit. For such scenarios, consider planning device updates in phases instead of updating your entire fleet at the same time.
+
+>[!Note]
+> The [get device registration state API](/rest/api/iot-dps/service/device-registration-state/get) does not currently work for TPM devices (the API surface does not include enough information to authenticate the request).
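As a small illustration of the retry guidance above, the following sketch computes a delay that honors a `Retry-After` header for 429 responses and otherwise applies exponential back-off with randomization, starting at roughly five seconds; the function name and default values are illustrative assumptions.

```python
import random
import time


def backoff_delay(attempt, retry_after=None, base=5.0, cap=300.0):
    """Return how many seconds to wait before the next IoT Hub retry.

    attempt: zero-based count of retries already made.
    retry_after: the Retry-After header value in seconds, if the service sent one
        (429 Too Many Requests).
    """
    if retry_after is not None:
        return float(retry_after)                # honor the service's hint for 429
    delay = min(cap, base * (2 ** attempt))      # exponential growth, capped
    return delay * random.uniform(0.8, 1.2)      # jitter so devices don't retry in lockstep


# Example: third retry after a 503 response that carried no Retry-After header.
time.sleep(backoff_delay(attempt=2))
```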
## Next steps
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
To update and run the provisioning sample with your device information:
pip install azure-iot-device ```
-6. Run the python sample code in *_provision_symmetric_key.py_*.
+6. Run the Python sample code in *_provision_symmetric_key.py_*.
```cmd python provision_symmetric_key.py
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
In this section, you'll configure sample code to use the [Advanced Message Queui
cd azure-iot-sdk-python/provisioning_device_client/samples ```
-2. Using your Python IDE, edit the python script named **provisioning\_device\_client\_sample.py** (replace `{globalServiceEndpoint}` and `{idScope}` to the values that you previously copied). Also, make sure *SECURITY\_DEVICE\_TYPE* is set to `ProvisioningSecurityDeviceType.TPM`.
+2. Using your Python IDE, edit the Python script named **provisioning\_device\_client\_sample.py** (replace `{globalServiceEndpoint}` and `{idScope}` to the values that you previously copied). Also, make sure *SECURITY\_DEVICE\_TYPE* is set to `ProvisioningSecurityDeviceType.TPM`.
```python GLOBAL_PROV_URI = "{globalServiceEndpoint}"
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
Make sure that the user **iotedge** has read permissions for the directory holdi
```bash sudo update-ca-certificates ```
+ This command should output that one certificate was added to /etc/ssl/certs.
+ * **IoT Edge for Linux on Windows (EFLOW)** ```bash
Make sure that the user **iotedge** has read permissions for the directory holdi
For more information, check [CBL-Mariner SSL CA certificates management](https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/security/ca-certificates.md).
- This command should output that one certificate was added to /etc/ssl/certs.
- 1. Open the IoT Edge configuration file. ```bash
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Unless you're developing your module in C, you also need the Python-based [Azure
> [!NOTE] >
-> If you have multiple Python including pre-installed python 2.7 (for example, on Ubuntu or macOS), make sure you are using the correct `pip` or `pip3` to install **iotedgehubdev**
+> If you have multiple Python including pre-installed Python 2.7 (for example, on Ubuntu or macOS), make sure you are using the correct `pip` or `pip3` to install **iotedgehubdev**
To test your module on a device, you'll need an active IoT hub with at least one IoT Edge device. To use your computer as an IoT Edge device, follow the steps in the quickstart for [Linux](quickstart-linux.md) or [Windows](quickstart.md). If you are running IoT Edge daemon on your development machine, you might need to stop EdgeHub and EdgeAgent before you move to next step.
iot-edge Tutorial Machine Learning Edge 01 Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-01-intro.md
In this document, we use the following set of tools:
* An Azure IoT hub for data capture
-* Azure Notebooks as our main front end for data preparation and machine learning experimentation. Running python code in a notebook on a subset of the sample data is a great way to get fast iterative and interactive turnaround during data preparation. Jupyter notebooks can also be used to prepare scripts to run at scale in a compute backend.
+* Azure Notebooks as our main front end for data preparation and machine learning experimentation. Running Python code in a notebook on a subset of the sample data is a great way to get fast iterative and interactive turnaround during data preparation. Jupyter notebooks can also be used to prepare scripts to run at scale in a compute backend.
* Azure Machine Learning as a backend for machine learning at scale and for machine learning image generation. We drive the Azure Machine Learning backend using scripts prepared and tested in Jupyter notebooks.
iot-hub Iot Hub Device Management Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-device-management-visual-studio.md
Title: Azure IoT device management w/ Visual Studio Cloud Explorer description: Use the Cloud Explorer for Visual Studio for Azure IoT Hub device management, featuring the Direct methods and the Twin's desired properties management options.-+ Last updated 08/20/2019-++ # Use Cloud Explorer for Visual Studio for Azure IoT Hub device management
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
The possible status codes are:
| 429 | Too many requests (throttled), as per [IoT Hub throttling](iot-hub-devguide-quotas-throttling.md) | | 5** | Server errors |
-The python code snippet below, demonstrates the twin reported properties update process over MQTT (using Paho MQTT client):
+The Python code snippet below demonstrates the twin reported properties update process over MQTT (using the Paho MQTT client):
```python
from paho.mqtt import client as mqtt
iot-hub Iot Hub Visual Studio Cloud Device Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-visual-studio-cloud-device-messaging.md
Title: Use VS Cloud Explorer to manage Azure IoT Hub device messaging description: Learn how to use Cloud Explorer for Visual Studio to monitor device to cloud messages and send cloud to device messages in Azure IoT Hub.-+ Last updated 08/20/2019-++ # Use Cloud Explorer for Visual Studio to send and receive messages between your device and IoT Hub
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
iot-hub Quickstart Control Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-control-device.md
Previously updated : 07/26/2021 Last updated : 02/25/2022 zone_pivot_groups: iot-hub-set1 #Customer intent: As a developer new to IoT Hub, I need to see how to use a service application to control a device connected to the hub.
iot-hub Quickstart Send Telemetry Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-send-telemetry-cli.md
Previously updated : 03/08/2022 Last updated : 02/23/2022 # Quickstart: Send telemetry from a device to an IoT hub and monitor it with the Azure CLI
Last updated 03/08/2022
IoT Hub is an Azure service that enables you to ingest high volumes of telemetry from your IoT devices into the cloud for storage or processing. In this quickstart, you use the Azure CLI to create an IoT Hub and a simulated device, send device telemetry to the hub, and send a cloud-to-device message. You also use the Azure portal to visualize device metrics. This is a basic workflow for developers who use the CLI to interact with an IoT Hub application. ## Prerequisites+ - If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Azure CLI. You can run all commands in this quickstart using the Azure Cloud Shell, an interactive CLI shell that runs in your browser. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this quickstart requires Azure CLI version 2.0.76 or later. Run az --version to find the version. To install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+- Azure CLI. You can run all commands in this quickstart using the Azure Cloud Shell, an interactive CLI shell that runs in your browser. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this quickstart requires Azure CLI version 2.0.76 or later. Run `az --version` to find the version. To install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Sign in to the Azure portal
-Sign in to the Azure portal at https://portal.azure.com.
-Regardless whether you run the CLI locally or in the Cloud Shell, keep the portal open in your browser. You use it later in this quickstart.
+Sign in to the [Azure portal](https://portal.azure.com).
+
+Regardless of whether you run the CLI locally or in the Cloud Shell, keep the portal open in your browser. You use it later in this quickstart.
## Launch the Cloud Shell+ In this section, you launch an instance of the Azure Cloud Shell. If you use the CLI locally, skip to the section [Prepare two CLI sessions](#prepare-two-cli-sessions). To launch the Cloud Shell:
-1. Select the **Cloud Shell** button on the top-right menu bar in the Azure portal.
+1. Select the **Cloud Shell** button on the top-right menu bar in the Azure portal.
![Azure portal Cloud Shell button](media/quickstart-send-telemetry-cli/cloud-shell-button.png) > [!NOTE]
- > If this is the first time you've used the Cloud Shell, it prompts you to create storage, which is required to use the Cloud Shell. Select a subscription to create a storage account and Microsoft Azure Files share.
+ > If this is the first time you've used the Cloud Shell, it prompts you to create storage, which is required to use the Cloud Shell. Select a subscription to create a storage account and Microsoft Azure Files share.
-2. Select your preferred CLI environment in the **Select environment** dropdown. This quickstart uses the **Bash** environment. All the following CLI commands work in the PowerShell environment too.
+2. Select your preferred CLI environment in the **Select environment** dropdown. This quickstart uses the **Bash** environment. All the following CLI commands work in the PowerShell environment too.
![Select CLI environment](media/quickstart-send-telemetry-cli/cloud-shell-environment.png)
In this section, you prepare two Azure CLI sessions. If you're using the Cloud S
Azure CLI requires you to be logged into your Azure account. All communication between your Azure CLI shell session and your IoT hub is authenticated and encrypted. As a result, this quickstart does not need additional authentication that you'd use with a real device, such as a connection string.
-* Run the [az extension add](/cli/azure/extension#az_extension_add) command to add the Microsoft Azure IoT Extension for Azure CLI to your CLI shell. The IOT Extension adds IoT Hub, IoT Edge, and IoT Device Provisioning Service (DPS) specific commands to Azure CLI.
+- Run the [az extension add](/cli/azure/extension#az_extension_add) command to add the Microsoft Azure IoT Extension for Azure CLI to your CLI shell. The IoT extension adds IoT Hub, IoT Edge, and IoT Device Provisioning Service (DPS)-specific commands to Azure CLI.
```azurecli az extension add --name azure-iot ```
-
- After you install the Azure IOT extension, you don't need to install it again in any Cloud Shell session.
+
+   After you install the Azure IoT extension, you don't need to install it again in any Cloud Shell session.
[!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)]
-* Open a second CLI session. If you're using the Cloud Shell, select **Open new session**. If you're using the CLI locally, open a second instance.
+- Open a second CLI session. If you're using the Cloud Shell, select **Open new session**. If you're using the CLI locally, open a second instance.
>[!div class="mx-imgBorder"] >![Open new Cloud Shell session](media/quickstart-send-telemetry-cli/cloud-shell-new-session.png)
-## Create an IoT Hub
-In this section, you use the Azure CLI to create a resource group and an IoT Hub. An Azure resource group is a logical container into which Azure resources are deployed and managed. An IoT Hub acts as a central message hub for bi-directional communication between your IoT application and the devices.
+## Create an IoT hub
+
+In this section, you use the Azure CLI to create a resource group and an IoT hub. An Azure resource group is a logical container into which Azure resources are deployed and managed. An IoT hub acts as a central message hub for bi-directional communication between your IoT application and the devices.
> [!TIP]
-> Optionally, you can create an Azure resource group, an IoT Hub, and other resources by using the [Azure portal](iot-hub-create-through-portal.md), [Visual Studio Code](iot-hub-create-use-iot-toolkit.md), or other programmatic methods.
+> Optionally, you can create an Azure resource group, an IoT hub, and other resources by using the [Azure portal](iot-hub-create-through-portal.md), [Visual Studio Code](iot-hub-create-use-iot-toolkit.md), or other programmatic methods.
-1. Run the [az group create](/cli/azure/group#az_group_create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *eastus* location.
+1. Run the [az group create](/cli/azure/group#az_group_create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *eastus* location.
```azurecli az group create --name MyResourceGroup --location eastus ```
-1. Run the [az iot hub create](/cli/azure/iot/hub#az_iot_hub_create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
+1. Run the [az iot hub create](/cli/azure/iot/hub#az_iot_hub_create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
- *YourIotHubName*. Replace this placeholder name and the curly brackets with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your IoT hub name.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your IoT hub name.
```azurecli az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} ``` ## Create and monitor a device+ In this section, you create a simulated device in the first CLI session. The simulated device sends device telemetry to your IoT hub. In the second CLI session, you monitor events and telemetry, and send a cloud-to-device message to the simulated device. To create and start a simulated device:
-1. Run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az_iot_hub_device_identity_create) command in the first CLI session. This creates the simulated device identity.
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+1. Run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az_iot_hub_device_identity_create) command in the first CLI session. This creates the simulated device identity.
- *simDevice*. You can use this name directly for the simulated device in the rest of this quickstart. Optionally, use a different name.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+
+ *simDevice*. You can use this name directly for the simulated device in the rest of this quickstart. Optionally, use a different name.
```azurecli az iot hub device-identity create --device-id simDevice --hub-name {YourIoTHubName}
To create and start a simulated device:
1. Run the [az iot device simulate](/cli/azure/iot/device#az_iot_device_simulate) command in the first CLI session. This starts the simulated device. The device sends telemetry to your IoT hub and receives messages from it.
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
```azurecli az iot device simulate -d simDevice -n {YourIoTHubName} ``` To monitor a device:+ 1. In the second CLI session, run the [az iot hub monitor-events](/cli/azure/iot/hub#az_iot_hub_monitor_events) command. This starts monitoring the simulated device. The output shows telemetry that the simulated device sends to the IoT hub.
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
```azurecli az iot hub monitor-events --output table --hub-name {YourIoTHubName}
To monitor a device:
![Cloud Shell monitor events](media/quickstart-send-telemetry-cli/cloud-shell-monitor.png)
-1. After you monitor the simulated device in the second CLI session, press Ctrl+C to stop monitoring.
+1. After you monitor the simulated device in the second CLI session, press Ctrl+C to stop monitoring.
## Use the CLI to send a message+ In this section, you use the second CLI session to send a message to the simulated device. 1. In the first CLI session, confirm that the simulated device is running. If the device has stopped, run the following command to start it:
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
```azurecli az iot device simulate -d simDevice -n {YourIoTHubName}
In this section, you use the second CLI session to send a message to the simulat
1. In the second CLI session, run the [az iot device c2d-message send](/cli/azure/iot/device/c2d-message#az_iot_device_c2d-message-send) command. This sends a cloud-to-device message from your IoT hub to the simulated device. The message includes a string and two key-value pairs.
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
```azurecli az iot device c2d-message send -d simDevice --data "Hello World" --props "key0=value0;key1=value1" -n {YourIoTHubName} ```
- Optionally, you can send cloud-to-device messages by using the Azure portal. To do this, browse to the overview page for your IoT Hub, select **IoT Devices**, select the simulated device, and select **Message to Device**.
-1. In the first CLI session, confirm that the simulated device received the message.
+ Optionally, you can send cloud-to-device messages by using the Azure portal. To do this, browse to the overview page for your IoT hub, select **IoT Devices**, select the simulated device, and select **Message to Device**.
+
+1. In the first CLI session, confirm that the simulated device received the message.
![Cloud Shell cloud-to-device message](media/quickstart-send-telemetry-cli/cloud-shell-receive-message.png) 1. After you view the message, close the second CLI session. Keep the first CLI session open. You use it to clean up resources in a later step. ## View messaging metrics in the portal
-The Azure portal enables you to manage all aspects of your IoT Hub and devices. In a typical IoT Hub application that ingests telemetry from devices, you might want to monitor devices or view metrics on device telemetry.
+
+The Azure portal enables you to manage all aspects of your IoT hub and devices. In a typical IoT Hub application that ingests telemetry from devices, you might want to monitor devices or view metrics on device telemetry.
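+
+You can also query the same metrics from the Azure CLI. The following is a minimal sketch, assuming the resource group and IoT hub names used earlier in this quickstart; the metric ID `dailyMessageQuotaUsed` (the ID behind *Total number of messages used*) is an assumption that you can confirm with `az monitor metrics list-definitions`.
+
+```azurecli
+# Look up the resource ID of your IoT hub.
+hubId=$(az iot hub show --name {YourIoTHubName} --resource-group MyResourceGroup --query id --output tsv)
+
+# List the metric definitions available for the hub.
+az monitor metrics list-definitions --resource $hubId --output table
+
+# Query the total number of messages used.
+az monitor metrics list --resource $hubId --metric dailyMessageQuotaUsed --output table
+```
+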
To visualize messaging metrics in the Azure portal:
-1. In the left navigation menu on the portal, select **All Resources**. This lists all resources in your subscription, including the IoT hub you created.
+
+1. In the left navigation menu on the portal, select **All Resources**. This lists all resources in your subscription, including the IoT hub you created.
1. Select the link on the IoT hub you created. The portal displays the overview page for the hub.
-1. Select **Metrics** in the left pane of your IoT Hub.
+1. Select **Metrics** in the left pane of your IoT hub.
![IoT Hub messaging metrics](media/quickstart-send-telemetry-cli/iot-hub-portal-metrics.png)
-1. Enter your IoT hub name in **Scope**.
+1. In the **Scope** field, enter your IoT hub name.
-2. Select *Iot Hub Standard Metrics* in **Metric Namespace**.
+1. In the **Metric Namespace** field, select *Iot Hub Standard Metrics*.
-3. Select *Total number of messages used* in **Metric**.
+1. In the **Metric** field, select *Total number of messages used*.
-4. Hover your mouse pointer over the area of the timeline in which your device sent messages. The total number of messages at a point in time appears in the lower left corner of the timeline.
+1. Hover your mouse pointer over the area of the timeline in which your device sent messages. The total number of messages at a point in time appears in the lower left corner of the timeline.
![View Azure IoT Hub metrics](media/quickstart-send-telemetry-cli/iot-hub-portal-view-metrics.png)
-5. Optionally, use the **Metric** dropdown to display other metrics on your simulated device. For example, *C2d message deliveries completed* or *Total devices (preview)*.
+1. Optionally, use the **Metric** dropdown to display other metrics on your simulated device. For example, *C2d message deliveries completed* or *Total devices (preview)*.
## Clean up resources+ If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete them.
-If you continue to the next recommended article, you can keep the resources you've already created and reuse them.
+If you continue to the next recommended article, you can keep the resources you've already created and reuse them.
> [!IMPORTANT]
-> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
To delete a resource group by name:+ 1. Run the [az group delete](/cli/azure/group#az_group_delete) command. This removes the resource group, the IoT Hub, and the device registration you created. ```azurecli az group delete --name MyResourceGroup ```+ 1. Run the [az group list](/cli/azure/group#az_group_list) command to confirm the resource group is deleted. ```azurecli
To delete a resource group by name:
``` ## Next steps+ In this quickstart, you used the Azure CLI to create an IoT hub, create a simulated device, send telemetry, monitor telemetry, send a cloud-to-device message, and clean up resources. You used the Azure portal to visualize messaging metrics on your device. If you are a device developer, the suggested next step is to see the telemetry quickstart that uses the Azure IoT Device SDK for C. Optionally, see one of the available Azure IoT Hub telemetry quickstart articles in your preferred language or SDK.
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md
Aliases: <your-key-vault-name>.vault.azure.net
## Limitations and Design Considerations
-> [!NOTE]
-> The number of key vaults with private endpoints enabled per subscription is an adjustable limit. The limit shown below is the default limit. If you would like to request a limit increase for your service, please send an email to azurekeyvault@microsoft.com. We will approve these requests on a case by case basis.
+**Limits**: See [Azure Private Link limits](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#private-link-limits)
-**Pricing**: For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+**Pricing**: See [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-**Limitations**: Private Endpoint for Azure Key Vault is only available in Azure public regions.
-
-**Maximum Number of Private Endpoints per Key Vault**: 64.
-
-**Default Number of Key Vaults with Private Endpoints per Subscription**: 400.
-
-For more, see [Azure Private Link service: Limitations](../../private-link/private-link-service-overview.md#limitations)
+**Limitations**: See [Azure Private Link service: Limitations](../../private-link/private-link-service-overview.md#limitations)
## Next Steps
key-vault Hsm Protected Keys Ncipher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-ncipher.md
The toolset includes:
* A Key Exchange Key (KEK) package that has a name beginning with **BYOK-KEK-pkg-.** * A Security World package that has a name beginning with **BYOK-SecurityWorld-pkg-.**
-* A python script named **verifykeypackage.py.**
+* A Python script named **verifykeypackage.py.**
* A command-line executable file named **KeyTransferRemote.exe** and associated DLLs. * A Visual C++ Redistributable Package, named **vcredist_x64.exe.**
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
key-vault Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Key Vault description: Sample Azure Resource Graph queries for Azure Key Vault showing use of resource types and tables to access Azure Key Vault related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
+
+ Title: Manage inbound NAT rules for Azure Load Balancer
+description: In this article, you'll learn how to add and remove an inbound NAT rule in the Azure portal.
++++ Last updated : 03/10/2022++
+# Manage inbound NAT rules for Azure Load Balancer using the Azure portal
+
+An inbound NAT rule is used to forward traffic from a load balancer frontend to one or more instances in the backend pool.
+
+There are two types of inbound NAT rule:
+
+* Single virtual machine - An inbound NAT rule that targets a single machine in the backend pool of the load balancer
+
+* Multiple virtual machines - An inbound NAT rule that targets multiple virtual machines in the backend pool of the load balancer
+
+In this article, you'll learn how to add and remove an inbound NAT rule for both types. You'll also learn how to change the frontend port allocation in a multiple instance inbound NAT rule.
+++
+- This article requires version 2.0.28 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
+
+- A standard public load balancer in your subscription. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md). The load balancer name for the examples in this article is **myLoadBalancer**.
+
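+Optionally, before you add or change rules, you can list the inbound NAT rules that already exist on the load balancer. A minimal sketch, assuming the example names used in this article (**myLoadBalancer** in resource group **myResourceGroup**):
+
+```azurecli
+az network lb inbound-nat-rule list \
+    --resource-group myResourceGroup \
+    --lb-name myLoadBalancer \
+    --output table
+```
+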
+## Add a single VM inbound NAT rule
+
+# [**Portal**](#tab/inbound-nat-rule-portal)
+
+In this example, you'll create an inbound NAT rule to forward port 500 to backend port 443.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Inbound NAT rules** in **Settings**.
+
+5. Select **+ Add** in **Inbound NAT rules** to add the rule.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/add-rule.png" alt-text="Screenshot of the inbound NAT rules page for Azure Load Balancer":::
+
+6. Enter or select the following information in **Add inbound NAT rule**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myInboundNATrule**. |
+ | Type | Select **Azure Virtual Machine**. |
+ | Target virtual machine | Select the virtual machine that you wish to forward the port to. In this example, it's **myVM1**. |
+ | Network IP configuration | Select the IP configuration of the virtual machine. In this example, it's **ipconfig1(10.1.0.4)**. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Frontend Port | Enter **500**. |
+ | Service Tag | Leave the default of **Custom**. |
+ | Backend port | Enter **443**. |
+ | Protocol | Select **TCP**. |
+
+7. Leave the rest of the settings at the defaults and select **Add**.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/add-single-instance-rule.png" alt-text="Screenshot of the create inbound NAT rule page":::
+
+# [**CLI**](#tab/inbound-nat-rule-cli)
+
+In this example, you'll create an inbound NAT rule to forward port 500 to backend port 443.
+
+Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-create) to create the NAT rule.
+
+```azurecli
+ az network lb inbound-nat-rule create \
+ --backend-port 443 \
+ --lb-name myLoadBalancer \
+ --name myInboundNATrule \
+ --protocol Tcp \
+ --resource-group myResourceGroup \
+ --backend-pool-name myBackendPool \
+ --frontend-ip-name myFrontend \
+ --frontend-port 500
+```
++
+## Add a multiple VMs inbound NAT rule
+
+# [**Portal**](#tab/inbound-nat-rule-portal)
+
+In this example, you'll create an inbound NAT rule to forward a range of ports starting at port 500 to backend port 443.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Inbound NAT rules** in **Settings**.
+
+5. Select **+ Add** in **Inbound NAT rules** to add the rule.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/add-rule.png" alt-text="Screenshot of the inbound NAT rules page for Azure Load Balancer":::
+
+6. Enter or select the following information in **Add inbound NAT rule**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myInboundNATrule**. |
+ | Type | Select **Backend pool**. |
+ | Target backend pool | Select your backend pool. In this example, it's **myBackendPool**. |
+ | Frontend IP address | Select your frontend IP address. In this example, it's **myFrontend**. |
+ | Frontend port range start | Enter **500**. |
+ | Maximum number of machines in backend pool | Enter **1000**. |
+ | Backend port | Enter **443**. |
+ | Protocol | Select **TCP**. |
+
+7. Leave the rest at the defaults and select **Add**.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/add-inbound-nat-rule.png" alt-text="Screenshot of the add inbound NAT rules page":::
+
+# [**CLI**](#tab/inbound-nat-rule-cli)
+
+In this example, you'll create an inbound NAT rule to forward a range of ports starting at port 500 to backend port 443.
+
+Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-create) to create the NAT rule.
+
+```azurecli
+ az network lb inbound-nat-rule create \
+ --backend-port 443 \
+ --lb-name myLoadBalancer \
+ --name myInboundNATrule \
+ --protocol Tcp \
+ --resource-group myResourceGroup \
+ --backend-pool-name myBackendPool \
+ --frontend-ip-name myFrontend \
+ --frontend-port-range-end 1000 \
+ --frontend-port-range-start 500
+
+```
+++
+## Change frontend port allocation for a multiple VM rule
+
+# [**Portal**](#tab/inbound-nat-rule-portal)
+
+To accommodate more virtual machines in the backend pool in a multiple instance rule, change the frontend port allocation in the inbound NAT rule. In this example, you'll change the frontend port allocation from 500 to 1000.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Inbound NAT rules** in **Settings**.
+
+5. Select the inbound NAT rule you wish to change. In this example, it's **myInboundNATrule**.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/select-inbound-nat-rule.png" alt-text="Screenshot of inbound NAT rule overview.":::
+
+6. In the properties of the inbound NAT rule, change the value in **Frontend port range start** to **1000**.
+
+7. Select **Save**.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/change-frontend-ports.png" alt-text="Screenshot of inbound NAT rule properties page.":::
+
+# [**CLI**](#tab/inbound-nat-rule-cli)
+
+To accommodate more virtual machines in the backend pool, change the frontend port allocation in the inbound NAT rule. In this example, you'll change the frontend port allocation from 500 to 1000.
+
+Use [az network lb inbound-nat-rule update](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-update) to change the frontend port allocation.
+
+```azurecli
+ az network lb inbound-nat-rule update \
+ --frontend-port-range-start 1000 \
+ --lb-name myLoadBalancer \
+ --name myInboundNATrule \
+ --resource-group myResourceGroup
+
+```
+++
+## Remove an inbound NAT rule
+
+# [**Portal**](#tab/inbound-nat-rule-portal)
+
+In this example, you'll remove an inbound NAT rule.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Inbound NAT rules** in **Settings**.
+
+5. Select the three dots next to the rule you want to remove.
+
+6. Select **Delete**.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/remove-inbound-nat-rule.png" alt-text="Screenshot of inbound NAT rule removal.":::
+
+# [**CLI**](#tab/inbound-nat-rule-cli)
+
+In this example, you'll remove an inbound NAT rule.
+
+Use [az network lb inbound-nat-rule delete](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-delete) to remove the NAT rule.
+
+```azurecli
+ az network lb inbound-nat-rule delete \
+ --lb-name myLoadBalancer \
+ --name myInboundNATrule \
+ --resource-group myResourceGroup
+```
+++
+## Next steps
+
+In this article, you learned how to manage inbound NAT rules for an Azure Load Balancer.
+
+For more information about Azure Load Balancer, see:
+- [What is Azure Load Balancer?](load-balancer-overview.md)
+- [Frequently asked questions - Azure Load Balancer](load-balancer-faqs.yml)
load-balancer Load Balancer Linux Cli Load Balance Multiple Websites Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-load-balance-multiple-websites-vm.md
ms.devlang: azurecli Previously updated : 04/20/2018 Last updated : 03/04/2022
This Azure CLI script sample creates a virtual network with two virtual machines (VMs) that are members of an availability set. A load balancer directs traffic for two separate IP addresses to the two VMs. After running the script, you could deploy web server software to the VMs and host multiple web sites, each with its own IP address. - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script +
+### Run the script
-[!code-azurecli-interactive[main](../../../cli_scripts/load-balancer/load-balance-multiple-web-sites-vm/load-balance-multiple-web-sites-vm.sh "Load balance multiple web sites")]
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the resource group, VM, and all related resources.
```azurecli
-az group delete --name myResourceGroup --yes
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual network, load balancer, and all related resources. Each command in the table links to command specific documentation.
This script uses the following commands to create a resource group, virtual netw
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional networking CLI script samples can be found in the [Azure Networking Overview documentation](../cli-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
+Additional networking CLI script samples can be found in the [Azure Networking Overview documentation](../cli-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
load-balancer Load Balancer Linux Cli Sample Nlb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-nlb.md
ms.devlang: azurecli Previously updated : 04/20/2018 Last updated : 03/04/2022 # Azure CLI script example: Load balance traffic to VMs for high availability
-This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration. After running the script, you will have three virtual machines, joined to an Azure Availability Set, and accessible through an Azure Load Balancer.
-
+This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration. After running the script, you will have three virtual machines, joined to an Azure Availability Set, and accessible through an Azure Load Balancer.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-## Clean up deployment
+
+### Run the script
++
+## Clean up resources
-Run the following command to remove the resource group, VM, and all related resources.
```azurecli
-az group delete --name myResourceGroup
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command specific documentation.
This script uses the following commands to create a resource group, virtual mach
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
load-balancer Load Balancer Linux Cli Sample Zonal Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-zonal-frontend.md
ms.devlang: azurecli
Previously updated : 06/14/2018 Last updated : 03/04/2022 # Azure CLI script example: Load balance traffic to VMs within a specific availability zone
-This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration within a specific availability zone. After running the script, you will have three virtual machines in a single availability zones within a region that are accessible through an Azure Standard Load Balancer.
-
+This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration within a specific availability zone. After running the script, you will have three virtual machines in a single availability zone within a region that are accessible through an Azure Standard Load Balancer.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-```azurecli-interactive
- #!/bin/bash
-
- # Create a resource group.
- az group create \
- --name myResourceGroup \
- --location westeurope
-
- # Create a virtual network.
- az network vnet create \
- --resource-group myResourceGroup \
- --location westeurope \
- --name myVnet \
- --subnet-name mySubnet
-
- # Create a zonal Standard public IP address.
- az network public-ip create \
- --resource-group myResourceGroup \
- --name myPublicIP \
- --sku Standard
- --zone 1
-
- # Create an Azure Load Balancer.
- az network lb create \
- --resource-group myResourceGroup \
- --name myLoadBalancer \
- --public-ip-address myPublicIP \
- --frontend-ip-name myFrontEndPool \
- --backend-pool-name myBackEndPool \
- --sku Standard
-
- # Creates an LB probe on port 80.
- az network lb probe create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myHealthProbe \
- --protocol tcp \
- --port 80
-
- # Creates an LB rule for port 80.
- az network lb rule create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myLoadBalancerRuleWeb \
- --protocol tcp \
- --frontend-port 80 \
- --backend-port 80 \
- --frontend-ip-name myFrontEndPool \
- --backend-pool-name myBackEndPool \
- --probe-name myHealthProbe
-
- # Create three NAT rules for port 22.
- for i in `seq 1 3`; do
- az network lb inbound-nat-rule create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myLoadBalancerRuleSSH$i \
- --protocol tcp \
- --frontend-port 422$i \
- --backend-port 22 \
- --frontend-ip-name myFrontEndPool
- done
-
- # Create a network security group
- az network nsg create \
- --resource-group myResourceGroup \
- --name myNetworkSecurityGroup
-
- # Create a network security group rule for port 22.
- az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNetworkSecurityGroup \
- --name myNetworkSecurityGroupRuleSSH \
- --protocol tcp \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 22 \
- --access allow \
- --priority 1000
-
- # Create a network security group rule for port 80.
- az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNetworkSecurityGroup \
- --name myNetworkSecurityGroupRuleHTTP \
- --protocol tcp \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 80 \
- --access allow \
- --priority 2000
-
- # Create three virtual network cards and associate with public IP address and NSG.
- for i in `seq 1 3`; do
- az network nic create \
- --resource-group myResourceGroup \
- --name myNic$i \
- --vnet-name myVnet \
- --subnet mySubnet \
- --network-security-group myNetworkSecurityGroup \
- --lb-name myLoadBalancer \
- --lb-address-pools myBackEndPool \
- --lb-inbound-nat-rules myLoadBalancerRuleSSH$i
- done
-
-# Create three virtual machines, this creates SSH keys if not present.
-for i in `seq 1 3`; do
- az vm create \
- --resource-group myResourceGroup \
- --name myVM$i \
- --zone 1 \
- --nics myNic$i \
- --image UbuntuLTS \
- --generate-ssh-keys \
- --no-wait
-done
-```
+### Run the script
+
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the resource group, VM, and all related resources.
```azurecli
-az group delete --name myResourceGroup
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command specific documentation.
This script uses the following commands to create a resource group, virtual mach
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
load-balancer Load Balancer Linux Cli Sample Zone Redundant Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-zone-redundant-frontend.md
# Azure CLI script example: Load balance VMs across availability zones
-This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration. After running the script, you will have three virtual machines across all availability zones within a region that are accessible through an Azure Standard Load Balancer.
-
+This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration. After running the script, you will have three virtual machines across all availability zones within a region that are accessible through an Azure Standard Load Balancer.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-```azurecli-interactive
- #!/bin/bash
-
- # Create a resource group.
- az group create \
- --name myResourceGroup \
- --location westeurope
-
- # Create a virtual network.
- az network vnet create \
- --resource-group myResourceGroup \
- --location westeurope \
- --name myVnet \
- --subnet-name mySubnet
-
- # Create a zonal Standard public IP address.
- az network public-ip create \
- --resource-group myResourceGroup \
- --name myPublicIP \
- --sku Standard
-
- # Create an Azure Load Balancer.
- az network lb create \
- --resource-group myResourceGroup \
- --name myLoadBalancer \
- --public-ip-address myPublicIP \
- --frontend-ip-name myFrontEndPool \
- --backend-pool-name myBackEndPool \
- --sku Standard
-
- # Creates an LB probe on port 80.
- az network lb probe create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myHealthProbe \
- --protocol tcp \
- --port 80
-
- # Creates an LB rule for port 80.
- az network lb rule create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myLoadBalancerRuleWeb \
- --protocol tcp \
- --frontend-port 80 \
- --backend-port 80 \
- --frontend-ip-name myFrontEndPool \
- --backend-pool-name myBackEndPool \
- --probe-name myHealthProbe
-
- # Create three NAT rules for port 22.
- for i in `seq 1 3`; do
- az network lb inbound-nat-rule create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myLoadBalancerRuleSSH$i \
- --protocol tcp \
- --frontend-port 422$i \
- --backend-port 22 \
- --frontend-ip-name myFrontEndPool
- done
-
- # Create a network security group
- az network nsg create \
- --resource-group myResourceGroup \
- --name myNetworkSecurityGroup
-
- # Create a network security group rule for port 22.
- az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNetworkSecurityGroup \
- --name myNetworkSecurityGroupRuleSSH \
- --protocol tcp \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 22 \
- --access allow \
- --priority 1000
-
- # Create a network security group rule for port 80.
- az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNetworkSecurityGroup \
- --name myNetworkSecurityGroupRuleHTTP \
- --protocol tcp \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 80 \
- --access allow \
- --priority 2000
-
- # Create three virtual network cards and associate with load balancer and NSG.
- for i in `seq 1 3`; do
- az network nic create \
- --resource-group myResourceGroup \
- --name myNic$i \
- --vnet-name myVnet \
- --subnet mySubnet \
- --network-security-group myNetworkSecurityGroup \
- --lb-name myLoadBalancer \
- --lb-address-pools myBackEndPool \
- --lb-inbound-nat-rules myLoadBalancerRuleSSH$i
- done
-
-# Create three virtual machines, this creates SSH keys if not present.
-for i in `seq 1 3`; do
- az vm create \
- --resource-group myResourceGroup \
- --name myVM$i \
- --zone $i \
- --nics myNic$i \
- --image UbuntuLTS \
- --generate-ssh-keys \
- --no-wait
-done
-```
+### Run the script
+
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the resource group, VM, and all related resources.
```azurecli
-az group delete --name myResourceGroup
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command specific documentation.
This script uses the following commands to create a resource group, virtual mach
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
load-balancer Tutorial Nat Rule Multi Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-nat-rule-multi-instance-portal.md
+
+ Title: "Tutorial: Create a multiple instance inbound NAT rule - Azure portal"
+
+description: This tutorial shows how to configure port forwarding using Azure Load Balancer to create a connection to multiple virtual machines in an Azure virtual network.
++++ Last updated : 03/10/2022+++
+# Tutorial: Create a multiple instance inbound NAT rule using the Azure portal
+
+Inbound NAT rules allow you to connect to virtual machines (VMs) in an Azure virtual network by using an Azure Load Balancer public IP address and port number.
+
+For more information about Azure Load Balancer rules, see [Manage rules for Azure Load Balancer using the Azure portal](manage-rules-how-to.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a virtual network and virtual machines
+> * Create a standard SKU public load balancer with frontend IP, health probe, backend configuration, and load-balancing rule
+> * Create a multiple instance inbound NAT rule
+> * Create a NAT gateway for outbound internet access for the backend pool
+> * Install and configure a web server on the VMs to demonstrate the port forwarding and load-balancing rules
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create virtual network and virtual machines
+
+A virtual network and subnet are required for the resources in the tutorial. In this section, you'll create a virtual network and virtual machines for the later steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+3. In **Virtual machines**, select **+ Create** > **+ Virtual machine**.
+
+4. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> Enter **TutorialLBPF-rg**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM1**. |
+ | Region | Enter **(US) West US 2**. |
+ | Availability options | Select **Availability zone**. |
+ | Availability zone | Enter **1**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Ubuntu Server 20.04 LTS - Gen2**. |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Select a VM size. |
+ | **Administrator account** | |
+ | Authentication type | Select **SSH public key**. |
+ | Username | Enter **azureuser**. |
+ | SSH public key source | Select **Generate new key pair**. |
+ | Key pair name | Enter **myKey**. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+5. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+6. In the **Networking** tab, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **Create new**. </br> Enter **myVNet** in **Name**. </br> In **Address space**, under **Address range**, enter **10.1.0.0/16**. </br> In **Subnets**, under **Subnet name**, enter **myBackendSubnet**. </br> In **Address range**, enter **10.1.0.0/24**. </br> Select **OK**. |
+ | Subnet | Select **myBackendSubnet**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**. |
+ | Configure network security group | Select **Create new**. </br> Enter **myNSG** in **Name**. </br> Select **+ Add an inbound rule** under **Inbound rules**. </br> In **Service**, select **HTTP**. </br> Enter **100** in **Priority**. </br> Enter **myNSGRule** for **Name**. </br> Select **Add**. </br> Select **OK**. |
+
+7. Select the **Review + create** tab, or select the **Review + create** button at the bottom of the page.
+
+8. Select **Create**.
+
+9. At the **Generate new key pair** prompt, select **Download private key and create resource**. Your key file will be downloaded as myKey.pem. Ensure you know where the .pem file was downloaded; you'll need the path to the key file in later steps.
+
+10. Follow steps 1 through 8 to create another VM with the following values, keeping all other settings the same as **myVM1**:
+
+ | Setting | VM 2 |
+ | - | -- |
+ | **Basics** | |
+ | **Instance details** | |
+ | Virtual machine name | **myVM2** |
+ | Availability zone | **2** |
+ | **Administrator account** | |
+ | Authentication type | **SSH public key** |
+ | SSH public key source | Select **Use existing key stored in Azure**. |
+ | Stored Keys | Select **myKey**. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+ | **Networking** | |
+ | **Network interface** | |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**. |
+ | Configure network security group | Select the existing **myNSG** |
+
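+If you'd rather script this step, the following Azure CLI sketch creates a VM comparable to **myVM1**. It's an approximation of the portal values above, not a full reproduction; for example, it omits the network security group rule, and it assumes the `UbuntuLTS` image alias:
+
+```azurecli
+az vm create \
+    --resource-group TutorialLBPF-rg \
+    --name myVM1 \
+    --image UbuntuLTS \
+    --zone 1 \
+    --vnet-name myVNet \
+    --subnet myBackendSubnet \
+    --public-ip-address "" \
+    --admin-username azureuser \
+    --generate-ssh-keys
+```
+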
+## Create load balancer
+
+You'll create a load balancer in this section. The frontend IP, backend pool, health probe, and load-balancing rule are configured as part of the creation.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. In the **Load balancer** page, select **Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialLBPF-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **West US 2**. |
+ | SKU | Leave the default **Standard**. |
+ | Type | Select **Public**. |
+ | Tier | Leave the default **Regional**. |
+
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
+
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+
+6. Enter **myFrontend** in **Name**.
+
+7. Select **IPv4** or **IPv6** for the **IP version**.
+
+ > [!NOTE]
+ > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
+
+8. Select **IP address** for the **IP type**.
+
+ > [!NOTE]
+ > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md).
+
+9. Select **Create new** in **Public IP address**.
+
+10. In **Add a public IP address**, enter **myPublicIP** for **Name**.
+
+11. Select **Zone-redundant** in **Availability zone**.
+
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+
+12. Leave the default of **Microsoft Network** for **Routing preference**.
+
+13. Select **OK**.
+
+14. Select **Add**.
+
+15. Select **Next: Backend pools** at the bottom of the page.
+
+16. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+17. Enter or select the following information in **Add backend pool**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myBackendPool**. |
+ | Virtual network | Select **myVNet (TutorialLBPF-rg)**. |
+ | Backend Pool Configuration | Select **NIC**. |
+ | IP version | Select **IPv4**. |
+
+18. Select **+ Add** in **Virtual machines**.
+
+19. Select the checkboxes next to **myVM1** and **myVM2** in **Add virtual machines to backend pool**.
+
+20. Select **Add**.
+
+21. Select **Add**.
+
+22. Select the **Next: Inbound rules** button at the bottom of the page.
+
+23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+
+24. In **Add load balancing rule**, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Backend pool | Select **myBackendPool**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
+ | TCP reset | Select **Enabled**. |
+ | Floating IP | Select **Disabled**. |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+
+ For more information about load-balancing rules, see [Load-balancing rules](manage-rules-how-to.md#load-balancing-rules).
+
+25. Select **Add**.
+
+26. Select the blue **Review + create** button at the bottom of the page.
+
+27. Select **Create**.
+
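+If you prefer the Azure CLI, the following sketch creates roughly the same public IP, load balancer, health probe, and load-balancing rule. The values mirror the portal steps above, and the sketch doesn't add the VM network interfaces to the backend pool:
+
+```azurecli
+# Zone-redundant Standard public IP for the frontend.
+az network public-ip create \
+    --resource-group TutorialLBPF-rg \
+    --name myPublicIP \
+    --sku Standard \
+    --zone 1 2 3
+
+# Standard load balancer with a frontend and an empty backend pool.
+az network lb create \
+    --resource-group TutorialLBPF-rg \
+    --name myLoadBalancer \
+    --sku Standard \
+    --public-ip-address myPublicIP \
+    --frontend-ip-name myFrontend \
+    --backend-pool-name myBackendPool
+
+# TCP health probe on port 80.
+az network lb probe create \
+    --resource-group TutorialLBPF-rg \
+    --lb-name myLoadBalancer \
+    --name myHealthProbe \
+    --protocol tcp \
+    --port 80
+
+# Load-balancing rule for HTTP traffic on port 80.
+az network lb rule create \
+    --resource-group TutorialLBPF-rg \
+    --lb-name myLoadBalancer \
+    --name myHTTPRule \
+    --protocol tcp \
+    --frontend-port 80 \
+    --backend-port 80 \
+    --frontend-ip-name myFrontend \
+    --backend-pool-name myBackendPool \
+    --probe-name myHealthProbe
+```
+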
+## Create multiple instance inbound NAT rule
+
+In this section, you'll create a multiple instance inbound NAT rule to the backend pool of the load balancer.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. Select **myLoadBalancer**.
+
+3. In **myLoadBalancer**, select **Inbound NAT rules** in settings.
+
+4. Select **+ Add** in **Inbound NAT rules**.
+
+5. Enter or select the following information in **Add inbound NAT rule**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myNATRule-SSH**. |
+ | Type | Select **Backend pool**. |
+ | Target backend pool | Select **myBackendPool**. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Frontend port range start | Enter **221**. |
+ | Maximum number of machines in backend pool | Enter **500**. |
+ | Backend port | Enter **22**. |
+ | Protocol | Select **TCP**. |
+
+6. Leave the rest at the default and select **Add**.
+
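+The same rule can also be created with the Azure CLI. A minimal sketch, assuming the names used in this tutorial; the frontend port range end of **720** is an assumption derived from the start port of 221 and the 500-machine maximum:
+
+```azurecli
+az network lb inbound-nat-rule create \
+    --resource-group TutorialLBPF-rg \
+    --lb-name myLoadBalancer \
+    --name myNATRule-SSH \
+    --protocol Tcp \
+    --backend-pool-name myBackendPool \
+    --frontend-ip-name myFrontend \
+    --frontend-port-range-start 221 \
+    --frontend-port-range-end 720 \
+    --backend-port 22
+```
+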
+## Create NAT gateway
+
+In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
+
+For more information about outbound connections and Azure Virtual Network NAT, see [Using Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md) and [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+
+2. In **NAT gateways**, select **+ Create**.
+
+3. In **Create network address translation (NAT) gateway**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialLBPF-rg**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **West US 2**. |
+ | Availability zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **15**. |
+
+4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
+
+5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
+
+6. Enter **myNATGatewayIP** in **Name** in **Add a public IP address**.
+
+7. Select **OK**.
+
+8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
+
+9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
+
+10. Select **myBackendSubnet** under **Subnet name**.
+
+11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
+
+12. Select **Create**.
+
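+If you'd rather create the NAT gateway from the CLI, the following is a minimal sketch that mirrors the portal values above. Treat the parameter names, such as `--idle-timeout` and `--nat-gateway`, as assumptions to confirm against the `az network` reference:
+
+```azurecli
+# Public IP for the NAT gateway.
+az network public-ip create \
+    --resource-group TutorialLBPF-rg \
+    --name myNATGatewayIP \
+    --sku Standard
+
+# NAT gateway with a 15-minute idle timeout.
+az network nat gateway create \
+    --resource-group TutorialLBPF-rg \
+    --name myNATgateway \
+    --public-ip-addresses myNATGatewayIP \
+    --idle-timeout 15
+
+# Associate the NAT gateway with the backend subnet.
+az network vnet subnet update \
+    --resource-group TutorialLBPF-rg \
+    --vnet-name myVNet \
+    --name myBackendSubnet \
+    --nat-gateway myNATgateway
+```
+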
+## Install web server
+
+In this section, you'll SSH to the virtual machines through the inbound NAT rule and install a web server.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. Select **myLoadBalancer**.
+
+3. Select **Frontend IP configuration** in **Settings**.
+
+3. In the **Frontend IP configuration**, make note of the **IP address** for **myFrontend**. In this example, it's **20.99.165.176**.
+
+ :::image type="content" source="./media/tutorial-nat-rule-multi-instance-portal/get-public-ip.png" alt-text="Screenshot of public IP in Azure portal.":::
+
+4. If you're using a Mac or Linux computer, open a Bash prompt. If you're using a Windows computer, open a PowerShell prompt.
+
+5. At your prompt, open an SSH connection to **myVM1**. Replace the IP address with the address you retrieved in the previous step, and use port **221**, the frontend port that the inbound NAT rule maps to myVM1. Replace the path to the .pem file with the path to the key file you downloaded.
+
+ ```console
+ ssh -i .\Downloads\myKey.pem azureuser@20.99.165.176 -p 221
+ ```
+
+ > [!TIP]
+ > The SSH key you created can be used the next time you create a VM in Azure. Just select **Use a key stored in Azure** for **SSH public key source** the next time you create a VM. You already have the private key on your computer, so you won't need to download anything.
+
+6. From your SSH session, update your package sources and then install the latest NGINX package.
+
+ ```bash
+ sudo apt-get -y update
+ sudo apt-get -y install nginx
+ ```
+
+7. Enter `exit` to leave the SSH session.
+
+8. At your prompt, open an SSH connection to **myVM2**. Replace the IP address with the address you retrieved earlier, and use port **222**, the frontend port that the inbound NAT rule maps to myVM2. Replace the path to the .pem file with the path to the key file you downloaded.
+
+ ```console
+ ssh -i .\Downloads\myKey.pem azureuser@20.99.165.176 -p 222
+ ```
+
+9. From your SSH session, update your package sources and then install the latest NGINX package.
+
+ ```bash
+ sudo apt-get -y update
+ sudo apt-get -y install nginx
+ ```
+
+10. Enter `exit` to leave the SSH session.
+
+## Test the web server
+
+You'll open your web browser in this section and enter the IP address for the load balancer that you retrieved in the previous section.
+
+1. Open your web browser.
+
+2. In the address bar, enter the IP address for the load balancer. In this example, it's **20.99.165.176**.
+
+3. The default NGINX website is displayed.
+
+ :::image type="content" source="./media/tutorial-nat-rule-multi-instance-portal/web-server-test.png" alt-text="Screenshot of testing the NGINX web server.":::
+
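+You can also check the response from the command line. A quick request against the load balancer's frontend IP address (using the example address above) should return the default NGINX page:
+
+```console
+curl http://20.99.165.176
+```
+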
+## Clean up resources
+
+If you're not going to continue to use this application, delete
+the virtual machines and load balancer with the following steps:
+
+1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results.
+
+2. Select **TutorialLBPF-rg** in **Resource groups**.
+
+3. Select **Delete resource group**.
+
+4. Enter **TutorialLBPF-rg** in **TYPE THE RESOURCE GROUP NAME:**. Select **Delete**.
+
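+Alternatively, a single Azure CLI command removes the resource group and everything in it (assuming the resource group name used in this tutorial):
+
+```azurecli
+az group delete --name TutorialLBPF-rg
+```
+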
+## Next steps
+
+Advance to the next article to learn how to create a cross-region load balancer:
+
+> [!div class="nextstepaction"]
+> [Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md)
load-balancer Upgrade Basicinternal Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basicInternal-standard.md
This article introduces a PowerShell script which creates a Standard Load Balanc
An Azure PowerShell script is available that does the following:
-* Creates a Standard Internal SKU Load Balancer in the location that you specify. Note that no [outbound connection](./load-balancer-outbound-connections.md) will not be provided by the Standard Internal Load Balancer.
+* Creates a Standard Internal SKU Load Balancer in the location that you specify. Note that the Standard Internal Load Balancer doesn't provide [outbound connections](./load-balancer-outbound-connections.md).
* Seamlessly copies the configurations of the Basic SKU Load Balancer to the newly created Standard Load Balancer. * Seamlessly moves the private IPs from the Basic Load Balancer to the newly created Standard Load Balancer. * Seamlessly moves the VMs from the backend pool of the Basic Load Balancer to the backend pool of the Standard Load Balancer.
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
Use the client-side metrics, such as requests per second or response time, on th
## Identify the root cause
-When there's a performance issue, you can use the server-side metrics to analyze what the root cause of the problem is. Azure Load Testing can [capture server-side resource metrics](./how-to-update-rerun-test.md) for Azure-hosted applications.
+When there's a performance issue, you can use the server-side metrics to identify the root cause of the problem. Azure Load Testing can [capture server-side resource metrics](./how-to-monitor-server-side-metrics.md) for Azure-hosted applications.
1. Hover over the server-side metrics graphs to compare the values across the different test runs.
load-testing How To Find Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-find-download-logs.md
In this section, you retrieve and download the Azure Load Testing logs from the
## Next steps -- Learn how to [Monitor server-side application metrics](./how-to-update-rerun-test.md).
+- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
- Learn how to [Get detailed insights for Azure App Service based applications](./how-to-appservice-insights.md).
load-testing How To High Scale Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-high-scale-load.md
In the Apache JMeter script, you define the number of parallel threads. This num
For example, to simulate 1,000 threads (or virtual users), set the number of threads in the Apache JMeter script to 250. Then configure the test with four test engine instances (that is, 4 x 250 threads).
+The location of the Azure Load Testing resource determines the location of the test engine instances. All test engine instances within a Load Testing resource are hosted in the same Azure region.
+ > [!IMPORTANT] > For preview release, Azure Load Testing supports up to 45 engine instances for a test run.
In this section, you configure the scaling settings of your load test.
:::image type="content" source="media/how-to-high-scale-load/configure-test.png" alt-text="Screenshot that shows the 'Configure' and 'Test' buttons on the test details page.":::
-1. On the **Edit test** page, select the **Load** tab. In the **Engine instances** box, enter the number of test engines required to run your test.
+1. On the **Edit test** page, select the **Load** tab. Use the **Engine instances** slider control to update the number of test engine instances, or enter the value directly in the input box.
:::image type="content" source="media/how-to-high-scale-load/edit-test-load.png" alt-text="Screenshot of the 'Load' tab on the 'Edit test' pane.":::
load-testing How To Monitor Server Side Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-monitor-server-side-metrics.md
+
+ Title: Monitor server-side application metrics for load testing
+
+description: Learn how to configure a load test to monitor server-side application metrics by using Azure Load Testing.
++++ Last updated : 02/08/2022+++
+# Monitor server-side application metrics by using Azure Load Testing Preview
+
+You can monitor server-side application metrics for Azure-hosted applications when running a load test with Azure Load Testing Preview. In this article, you'll learn how to configure app components and metrics for your load test.
+
+To capture metrics during your load test, you'll first [select the Azure components](#select-azure-application-components) that make up your application. Optionally, you can then [configure the list of server-side metrics](#select-server-side-resource-metrics) for each Azure component.
+
+Azure Load Testing integrates with Azure Monitor to capture server-side resource metrics for Azure-hosted applications. Read more about the [Azure resource types that Azure Load Testing supports](./resource-supported-azure-resource-types.md).
+
+> [!IMPORTANT]
+> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure Load Testing resource with at least one completed test run. If you need to create an Azure Load Testing resource, see [Tutorial: Run a load test to identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md).
+
+## Select Azure application components
+
+To monitor resource metrics for an Azure-hosted application, you need to specify the list of Azure application components in your load test. Azure Load Testing automatically captures a set of relevant resource metrics for each selected component. When your load test finishes, you can view the server-side metrics in the dashboard.
+
+For the list of Azure components that Azure Load Testing supports, see [Supported Azure resource types](./resource-supported-azure-resource-types.md).
+
+Use the following steps to configure the Azure components for your load test:
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+
+1. On the left pane, select **Tests**, and then select your load test from the list.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/select-test.png" alt-text="Screenshot that shows a list of load tests to select from.":::
+
+1. On the test runs page, select **Configure**, and then select **App Components** to add or remove Azure resources to monitor during the load test.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/configure-app-components.png" alt-text="Screenshot that shows the 'App Components' button for displaying app components to configure for a load test.":::
+
+1. Select or clear the checkboxes next to the Azure resources you want to add or remove, and then select **Apply**.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/modify-app-components.png" alt-text="Screenshot that shows how to add or remove app components from a load test configuration.":::
+
+ When you run the load test, Azure Load Testing will display the default resource metrics in the test run dashboard.
+
+You can change the list of resource metrics at any time. In the next section, you'll view and configure the list of resource metrics.
+
+## Select server-side resource metrics
+
+For each Azure application component, you can select the resource metrics to monitor during your load test.
+
+Use the following steps to view and update the list of resource metrics:
+
+1. On the test runs page, select **Configure**, and then select **Metrics** to select the specific resource metrics to capture during the load test.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/configure-metrics.png" alt-text="Screenshot that shows the 'Metrics' button to configure metrics for a load test.":::
+
+1. Update the list of metrics you want to capture, and then select **Apply**.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/modify-metrics.png" alt-text="Screenshot that shows a list of resource metrics to configure for a load test.":::
+
+ Alternatively, you can update the app components and metrics from the page that shows test result details.
+
+1. Select **Run** to run the load test with the new configuration settings.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/run-load-test.png" alt-text="Screenshot that shows the 'Run' button for running the load test from the test runs page.":::
+
+ Notice that the test result dashboard now shows the updated server-side metrics.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/dashboard-updated-metrics.png" alt-text="Screenshot that shows the updated server-side metrics on the test result dashboard.":::
+
+When you update the configuration of a load test, all future test runs will use that configuration. On the other hand, if you update a test run, the new configuration will only apply to that test run.
+
+## Next steps
+
+- Learn how you can [identify performance problems by comparing metrics across multiple test runs](./how-to-compare-multiple-test-runs.md).
+
+- Learn how to [set up a high-scale load test](./how-to-high-scale-load.md).
+
+- Learn how to [configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
load-testing Resource Supported Azure Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-supported-azure-resource-types.md
Last updated 01/04/2022
Learn which Azure resource types Azure Load Testing Preview supports for server-side monitoring. You can select specific metrics for each resource type to track and report on for a load test.
-To learn how to configure your load test, see [Monitor server-side application metrics](./how-to-update-rerun-test.md).
+To learn how to configure your load test, see [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
This section lists the Azure resource types that Azure Load Testing supports for
## Next steps
-* Learn how to [Monitor server-side application metrics](./how-to-update-rerun-test.md).
+* Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
* Learn how to [Get more insights from App Service diagnostics](./how-to-appservice-insights.md). * Learn how to [Compare multiple test runs](./how-to-compare-multiple-test-runs.md).
load-testing Tutorial Cicd Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-azure-pipelines.md
# Tutorial: Identify performance regressions with Azure Load Testing Preview and Azure Pipelines
-This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and Azure Pipelines. You'll configure an Azure Pipelines continuous integration and continuous delivery (CI/CD) workflow to run a load test for a sample web application. You'll then use the test results to identify performance regressions.
+This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and Azure Pipelines. You'll configure an Azure Pipelines CI/CD workflow with the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops) to run a load test for a sample web application. You'll then use the test results to identify performance regressions.
If you're using GitHub Actions for your CI/CD workflows, see the corresponding [GitHub Actions tutorial](./tutorial-cicd-github-actions.md).
You'll learn how to:
* An Azure DevOps organization and project. If you don't have an Azure DevOps organization, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up?view=azure-devops&preserve-view=true). If you need help with getting started with Azure Pipelines, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?preserve-view=true&view=azure-devops&tabs=java%2Ctfs-2018-2%2Cbrowser). * A GitHub account, where you can create a repository. If you don't have one, you can [create one for free](https://github.com/).
-## Set up your repository
+## Set up the sample application repository
To get started, you need a GitHub repository with the sample web application. You'll use this repository to configure an Azure Pipelines workflow to run the load test.
To access Azure resources, create a service connection in Azure DevOps and use r
```azurecli az role assignment create --assignee "<sp-object-id>" \ --role "Load Test Contributor" \
+ --scope /subscriptions/<subscription-name-or-id>/resourceGroups/<resource-group-name> \
--subscription "<subscription-name-or-id>" ``` ## Configure the Azure Pipelines workflow to run a load test
-In this section, you'll set up an Azure Pipelines workflow that triggers the load test. The sample application repository contains a pipelines definition file. The pipeline first deploys the sample web application to Azure App Service, and then invokes the load test. The pipeline uses an environment variable to pass the URL of the web application to the Apache JMeter script.
+In this section, you'll set up an Azure Pipelines workflow that triggers the load test.
-First, you'll install the Azure Load Testing extension from the Azure DevOps Marketplace, create a new pipeline, and then connect it to the sample application's forked repository.
+The sample application repository already contains a pipelines definition file. This pipeline first deploys the sample web application to Azure App Service, and then invokes the load test by using the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops). The pipeline uses an environment variable to pass the URL of the web application to the Apache JMeter script.
-1. Install the Azure Load Testing task extension from the Azure DevOps Marketplace.
+1. Install the **Azure Load Testing** task extension from the Azure DevOps Marketplace.
:::image type="content" source="./media/tutorial-cicd-azure-pipelines/browse-marketplace.png" alt-text="Screenshot that shows how to browse the Visual Studio Marketplace for extensions.":::
First, you'll install the Azure Load Testing extension from the Azure DevOps Mar
:::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-select-repo.png" alt-text="Screenshot that shows how to select the sample application's GitHub repository.":::
- The repository contains an *azure-pipeline.yml* pipeline definition file. You'll now modify this definition to connect to your Azure Load Testing service.
+ The repository contains an *azure-pipeline.yml* pipeline definition file. The following snippet shows how to use the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops) in Azure Pipelines:
+
+ ```yml
+ - task: AzureLoadTest@1
+ inputs:
+ azureSubscription: $(serviceConnection)
+ loadTestConfigFile: 'SampleApp.yaml'
+ resourceGroup: $(loadTestResourceGroup)
+ loadTestResource: $(loadTestResource)
+ env: |
+ [
+ {
+ "name": "webapp",
+ "value": "$(webAppName).azurewebsites.net"
+ }
+ ]
+ ```
+
+ You'll now modify the pipeline to connect to your Azure Load Testing service.
1. On the **Review** tab, replace the following placeholder text in the YAML code:
In this tutorial, you'll reconfigure the sample application to accept only secur
The Azure Load Testing task securely passes the secret from the pipeline to the test engine. The secret parameter is used only while you're running the load test, and then the value is discarded from memory.
-## Configure and use the Azure Load Testing task
-
-This section describes the Azure Load Testing task for Azure Pipelines. The task is cross-platform and runs on Windows, Linux, or Mac agents.
-
-You can use the following parameters to configure the Azure Load Testing task:
-
-|Parameter |Description |
-|||
-|`azureSubscription` | *Required*. Name of the Azure Resource Manager service connection. |
-|`loadTestConfigFile` | *Required*. Path to the YAML configuration file for the load test. The path is fully qualified or relative to the default working directory. |
-|`resourceGroup` | *Required*. Name of the resource group that contains the Azure Load Testing resource. |
-|`loadTestResource` | *Required*. Name of an existing Azure Load Testing resource. |
-|`secrets` | Array of JSON objects that consist of the name and value for each secret. The name should match the secret name used in the Apache JMeter test script. |
-|`env` | Array of JSON objects that consist of the name and value for each environment variable. The name should match the variable name used in the Apache JMeter test script. |
-
-The following YAML code snippet describes how to use the task in an Azure Pipelines CI/CD workflow:
-
-```yaml
-- task: AzureLoadTest@1
- inputs:
- azureSubscription: '<Azure service connection>'
- loadTestConfigFile: '< YAML File path>'
- loadTestResource: '<name of the load test resource>'
- resourceGroup: '<name of the resource group of your load test resource>'
- secrets: |
- [
- {
- "name": "<Name of the secret>",
- "value": "$(mySecret1)"
- },
- {
- "name": "<Name of the secret>",
- "value": "$(mySecret1)"
- }
- ]
- env: |
- [
- {
- "name": "<Name of the variable>",
- "value": "<Value of the variable>"
- },
- {
- "name": "<Name of the variable>",
- "value": "<Value of the variable>"
- }
- ]
-```
- ## Clean up resources [!INCLUDE [alt-delete-resource-group](../../includes/alt-delete-resource-group.md)]
The following YAML code snippet describes how to use the task in an Azure Pipeli
You've now created an Azure Pipelines CI/CD workflow that uses Azure Load Testing for automatically running load tests. By using pass/fail criteria, you can set the status of the CI/CD workflow. With parameters, you can make the running of load tests configurable.
-* For more information about parameterizing load tests, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
-* For more information about defining test pass/fail criteria, see [Define test criteria](./how-to-define-test-criteria.md).
+* Learn more about the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops).
+* Learn more about [Parameterizing a load test](./how-to-parameterize-load-tests.md).
+* Learn more about [Defining test pass/fail criteria](./how-to-define-test-criteria.md).
+* Learn more about [Configuring server-side monitoring](./how-to-monitor-server-side-metrics.md).
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-github-actions.md
First, you'll create an Azure Active Directory [service principal](../active-dir
```azurecli az role assignment create --assignee "<sp-object-id>" \ --role "Load Test Contributor" \
+ --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name> \
--subscription "<subscription-id>" ```
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
Before you can create your logic app, create a local project so that you can man
![Screenshot that shows the "Create new Stateful Workflow (3/4)" box and "Fabrikam-Stateful-Workflow" as the workflow name.](./media/create-single-tenant-workflows-visual-studio-code/name-your-workflow.png)
+ > [!NOTE]
+ > You might get an error named **azureLogicAppsStandard.createNewProject** with the error message,
+ > **Unable to write to Workspace Settings because azureFunctions.suppressProject is not a registered configuration**.
+ > If you do, try installing the [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions), either directly from the Visual Studio Marketplace or from inside Visual Studio Code.
+ Visual Studio Code finishes creating your project, and opens the **workflow.json** file for your workflow in the code editor. > [!NOTE]
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022 ms.suite: integration
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ **azureml-synapse** + Fix the issue that magic widget is disappeared. + **azureml-train-automl-runtime**
- + Updating AutoML dependencies to support python 3.8. This change will break compatibility with models trained with SDK 1.37 or below due to newer Pandas interfaces being saved in the model.
+ + Updating AutoML dependencies to support Python 3.8. This change will break compatibility with models trained with SDK 1.37 or below due to newer Pandas interfaces being saved in the model.
+ Automl training now supports numpy version 1.19 + Fix automl reset index logic for ensemble models in automl_setup_model_explanations API + In automl, use lightgbm surrogate model instead of linear surrogate model for sparse case after latest lightgbm version upgrade
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ Throw exception and clean up workspace and dependent resources if workspace private endpoint creation fails. + Support workspace sku upgrade in workspace update method. + **azureml-datadrift**
- + Update matplotlib version from 3.0.2 to 3.2.1 to support python 3.8.
+ + Update matplotlib version from 3.0.2 to 3.2.1 to support Python 3.8.
+ **azureml-dataprep** + Added support of web url data sources with `Range` or `Head` request. + Improved stability for file dataset mount and download.
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ **azureml-datadrift** + Data Drift results query from the SDK had a bug that didn't differentiate the minimum, maximum, and mean feature metrics, resulting in duplicate values. We have fixed this bug by prefixing target or baseline to the metric names. Before: duplicate min, max, mean. After: target_min, target_max, target_mean, baseline_min, baseline_max, baseline_mean. + **azureml-dataprep**
- + Improve handling of write restricted python environments when ensuring .NET Dependencies required for data delivery.
+ + Improve handling of write restricted Python environments when ensuring .NET Dependencies required for data delivery.
+ Fixed Dataflow creation on file with leading empty records. + Added error handling options for `to_partition_iterator` similar to `to_pandas_dataframe`. + **azureml-interpret**
Access the following web-based authoring tools from the studio:
### Azure Machine Learning SDK for Python v1.2.0 + **Breaking changes**
- + Drop support for python 2.7
+ + Drop support for Python 2.7
+ **Bug fixes and improvements** + **azure-cli-ml**
Access the following web-based authoring tools from the studio:
+ **Feature deprecation** + **Python 2.7**
- + Last version to support python 2.7
+ + Last version to support Python 2.7
+ **Breaking changes** + **Semantic Versioning 2.0.0**
Access the following web-based authoring tools from the studio:
+ **azureml-contrib-interpret** + Removed text explainers from azureml-contrib-interpret as text explanation has been moved to the interpret-text repo that will be released soon. + **azureml-core**
- + Dataset: usages for file dataset no longer depend on numpy and pandas to be installed in the python env.
+ + Dataset: usages for file dataset no longer depend on numpy and pandas to be installed in the Python env.
+ Changed LocalWebservice.wait_for_deployment() to check the status of the local Docker container before trying to ping its health endpoint, greatly reducing the amount of time it takes to report a failed deployment. + Fixed the initialization of an internal property used in LocalWebservice.reload() when the service object is created from an existing deployment using the LocalWebservice() constructor. + Edited error message for clarification.
The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ Exception will be thrown out when either coarse grain or fine grained timestamp column is not included in keep columns list with indication for user that keeping can be done after either including timestamp column in keep column list or call with_time_stamp with None value to release timestamp columns. + Added logging for the size of a registered model. + **azureml-explain-model**
- + Fixed warning printed to console when "packaging" python package is not installed: "Using older than supported version of lightgbm, please upgrade to version greater than 2.2.1"
+ + Fixed warning printed to console when "packaging" Python package is not installed: "Using older than supported version of lightgbm, please upgrade to version greater than 2.2.1"
+ Fixed download model explanation with sharding for global explanations with many features + Fixed mimic explainer missing initialization examples on output explanation + Fixed immutable error on set properties when uploading with explanation client using two different types of models
At the time, of this release, the following browsers are supported: Chrome, Fire
+ **azureml-pipeline-core** + Added support to create, update, and use PipelineDrafts - can be used to maintain mutable pipeline definitions and use them interactively to run + **azureml-train-automl**
- + Created feature to install specific versions of gpu-capable pytorch v1.1.0, :::no-loc text="cuda"::: toolkit 9.0, pytorch-transformers, which is required to enable BERT/ XLNet in the remote python runtime environment.
+ + Created feature to install specific versions of gpu-capable pytorch v1.1.0, :::no-loc text="cuda"::: toolkit 9.0, pytorch-transformers, which is required to enable BERT/ XLNet in the remote Python runtime environment.
+ **azureml-train-core** + Early failure of some hyperparameter space definition errors directly in the sdk instead of server side.
At the time, of this release, the following browsers are supported: Chrome, Fire
+ Improve reliability of API calls be expanding retries to common requests library exceptions. + Add support for submitting runs from a submitted run. + Fixed expiring SAS token issue in FileWatcher, which caused files to stop being uploaded after their initial token had expired.
- + Supported importing HTTP csv/tsv files in dataset python SDK.
+ + Supported importing HTTP csv/tsv files in dataset Python SDK.
+ Deprecated the Workspace.setup() method. Warning message shown to users suggests using create() or get()/from_config() instead.
- + Added Environment.add_private_pip_wheel(), which enables uploading private custom python packages `whl`to the workspace and securely using them to build/materialize the environment.
+ + Added Environment.add_private_pip_wheel(), which enables uploading private custom Python packages (`whl`) to the workspace and securely using them to build/materialize the environment.
+ You can now update the TLS/SSL certificate for the scoring endpoint deployed on AKS cluster both for Microsoft generated and customer certificate. + **azureml-explain-model** + Added parameter to add a model ID to explanations on upload.
machine-learning Designer Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/designer-error-codes.md
To get more help, we recommend that you post the detailed message that accompani
## Execute Python Script component
-Search **in azureml_main** in **70_driver_logs** of **Execute Python Script component** and you could find which line occurred error. For example, "File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main" indicates that the error occurred in the 17 line of your python script.
+Search for **in azureml_main** in **70_driver_logs** of the **Execute Python Script component** to find which line raised the error. For example, "File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main" indicates that the error occurred on line 17 of your Python script.
## Distributed training
machine-learning Execute Python Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/execute-python-script.md
The Execute Python Script component contains sample Python code that you can use
> [!IMPORTANT] > Please use unique and meaningful names for files in the script bundle, since some common words (like `test`, `app`, and so on) are reserved for built-in services.
- Following is a script bundle example, which contains a python script file and a txt file:
+ Following is a script bundle example, which contains a Python script file and a txt file:
> [!div class="mx-imgBorder"] > ![Script bundle example](media/module/python-script-bundle.png)
The Execute Python Script component contains sample Python code that you can use
# Execution logic goes here print(f'Input pandas.DataFrame #1: {dataframe1}')
- # Test the custom defined python function
+ # Test the custom defined Python function
dataframe1 = my_func(dataframe1) # Test to read custom uploaded files by relative path
The Execute Python Script component contains sample Python code that you can use
If the component is completed, check whether the output is as expected.
- If the component is failed, you need to do some troubleshooting. Select the component, and open **Outputs+logs** in the right pane. Open **70_driver_log.txt** and search **in azureml_main**, then you could find which line caused the error. For example, "File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main" indicates that the error occurred in the 17 line of your python script.
+ If the component failed, you need to do some troubleshooting. Select the component, and open **Outputs+logs** in the right pane. Open **70_driver_log.txt** and search for **in azureml_main** to find which line caused the error. For example, "File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main" indicates that the error occurred on line 17 of your Python script.
## Results
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-register-datasets.md
partition_keys = new_dataset.partition_keys # ['country']
After you're done wrangling your data, you can [register](#register-datasets) your dataset, and then load it into your notebook for data exploration prior to model training.
-For FileDatasets, you can either **mount** or **download** your dataset, and apply the python libraries you'd normally use for data exploration. [Learn more about mount vs download](how-to-train-with-datasets.md#mount-vs-download).
+For FileDatasets, you can either **mount** or **download** your dataset, and apply the Python libraries you'd normally use for data exploration. [Learn more about mount vs download](how-to-train-with-datasets.md#mount-vs-download).
```python # download the dataset
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
The following table provides an overview of scenarios to help you choose what wo
| Scenario | Inference HTTP Server | Local endpoint | |--|--|--|
-| Update local python environment, **without** Docker image rebuild | Yes | No |
+| Update local Python environment, **without** Docker image rebuild | Yes | No |
| Update scoring script | Yes | Yes | | Update deployment configurations (deployment, environment, code, model) | No | Yes | | VS Code Debugger integration | Yes | Yes |
machine-learning How To Debug Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-pipelines.md
run.log("scalar_value", 0.95)
# Python print statement print("I am a python print statement, I will be sent to the driver logs.")
-# Initialize python logger
+# Initialize Python logger
logger = logging.getLogger(__name__) logger.setLevel(args.log_level)
-# Plain python logging statements
+# Plain Python logging statements
logger.debug("I am a plain debug statement, I will be sent to the driver logs.") logger.info("I am a plain info statement, I will be sent to the driver logs.")
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-inferencing-gpus.md
The conda environment file specifies the dependencies for the service. It includ
```yaml name: project_environment dependencies:
- # The python interpreter version.
+ # The Python interpreter version.
# Currently Azure ML only supports 3.5.2 and later. - python=3.6.2
machine-learning How To Designer Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-designer-python.md
This article uses the sample dataset, **Automobile price data (Raw)**.
1. Connect the output port of the dataset to the top-left input port of the **Execute Python Script** component. The designer exposes the input as a parameter to the entry point script.
- The right input port is reserved for zipped python libraries.
+ The right input port is reserved for zipped Python libraries.
![Connect datasets](media/how-to-designer-python/connect-dataset.png)
machine-learning How To Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-differential-privacy.md
Use pip to install the [SmartNoise Python packages](https://pypi.org/project/ope
`pip install opendp-smartnoise`
-To verify that the packages are installed, launch a python prompt and type:
+To verify that the packages are installed, launch a Python prompt and type:
```python import opendp.smartnoise.core
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
The following table contains the parameters accepted by the server:
The following steps explain how the Azure Machine Learning inference HTTP server works handles incoming requests:
-1. A python CLI wrapper sits around the server's network stack and is used to start the server.
+1. A Python CLI wrapper sits around the server's network stack and is used to start the server.
1. A client sends a request to the server. 1. When a request is received, it goes through the [WSGI](https://www.fullstackpython.com/wsgi-servers.html) server and is then dispatched to one of the workers. - [Gunicorn](https://docs.gunicorn.org/) is used on __Linux__.
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-environments.md
The examples in this article show how to:
* Use an environment for training. * Use an environment for web service deployment.
-For a high-level overview of how environments work in Azure Machine Learning, see [What are ML environments?](concept-environments.md) For information about managing environments in the Azure ML studio, see [Manage environments in the studio](how-to-manage-environments-in-studio.md). For information about configuring development environments, see [here](how-to-configure-environment.md).
+For a high-level overview of how environments work in Azure Machine Learning, see [What are ML environments?](concept-environments.md) For information about managing environments in the Azure ML studio, see [Manage environments in the studio](how-to-manage-environments-in-studio.md). For information about configuring development environments, see [Set up a Python development environment for Azure ML](how-to-configure-environment.md).
## Prerequisites
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Title: Built-in policy definitions for Azure Database for MariaDB description: Lists Azure Policy built-in policy definitions for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
media-services Limits Quotas Constraints Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/limits-quotas-constraints-reference.md
This article lists some of the most common Microsoft Azure Media Services limits
## Storage limits
-Azure Storage block blog limits apply to storage accounts used with Media Services. See [Azure Blob Storage limits](/azure/azure-resource-manager/management/azure-subscription-service-limits.md#azure-blob-storage-limits).
+Azure Storage block blob limits apply to storage accounts used with Media Services. See [Azure Blob Storage limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-blob-storage-limits).
This limit includes the total storage size of the files that you upload for encoding and the file sizes of the encoded files. The limit on the size of an individual file for encoding is separate. See [File size for encoding](#file-size-for-encoding-limit).
media-services Migrate V 2 V 3 Migration Scenario Based Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-publishing.md
After migration, you should avoid making any calls to the v2 API to modify strea
### How to guides - [Manage streaming endpoints with Media Services v3](stream-manage-streaming-endpoints-how-to.md)-- [CLI example: Publish an asset](cli-publish-asset.md) - [Create a streaming locator and build URLs](create-streaming-locator-build-url.md) - [Download the results of a job](job-download-results-how-to.md) - [Signal descriptive audio tracks](signal-descriptive-audio-howto.md)
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md
For an Azure VM Assessment, the assessment reviews the following properties of a
Property | Details | Azure readiness status | |
-**Boot type** | Azure supports VMs with a boot type of BIOS, not UEFI. | Conditionally ready if the boot type is UEFI
+**Boot type** | Azure supports the UEFI boot type for the operating systems listed [here](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure). | Not ready if the boot type is UEFI and the operating system running on the VM is Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, or Windows Server 2008 R2
**Cores** | Each server must have no more than 128 cores, which is the maximum number an Azure VM supports.<br/><br/> If performance history is available, Azure Migrate considers the utilized cores for comparison. If the assessment settings specify a comfort factor, the number of utilized cores is multiplied by the comfort factor.<br/><br/> If there's no performance history, Azure Migrate uses the allocated cores to apply the comfort factor. | Ready if the number of cores is within the limit **RAM** | Each server must have no more than 3,892 GB of RAM, which is the maximum size an Azure M-series Standard_M128m&nbsp;<sup>2</sup> VM supports. [Learn more](../virtual-machines/sizes.md).<br/><br/> If performance history is available, Azure Migrate considers the utilized RAM for comparison. If a comfort factor is specified, the utilized RAM is multiplied by the comfort factor.<br/><br/> If there's no history, the allocated RAM is used to apply a comfort factor.<br/><br/> | Ready if the amount of RAM is within the limit **Storage disk** | The allocated size of a disk must be no more than 64 TB.<br/><br/> The number of disks attached to the server, including the OS disk, must be 65 or fewer. | Ready if the disk size and number are within the limits
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment.md
This table lists help for fixing the following assessment readiness issues.
**Issue** | **Fix** |
-Unsupported boot type | Azure doesn't support VMs with an EFI boot type. Convert the boot type to BIOS before you run a migration. <br/><br/>You can use Azure Migrate Server Migration to handle the migration of such VMs. It will convert the boot type of the VM to BIOS during the migration.
+Unsupported boot type | Azure doesn't support the UEFI boot type for VMs running Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, or Windows Server 2008 R2. For the list of operating systems that support UEFI-based machines, see [this article](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure).
Conditionally supported Windows operating system | The operating system has passed its end-of-support date and needs a Custom Support Agreement for [support in Azure](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading before you migrate to Azure. Review information about [preparing servers running Windows Server 2003](prepare-windows-server-2003-migration.md) for migration to Azure. Unsupported Windows operating system | Azure supports only [selected Windows OS versions](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading the server before you migrate to Azure. Conditionally endorsed Linux OS | Azure endorses only [selected Linux OS versions](../virtual-machines/linux/endorsed-distros.md). Consider upgrading the server before you migrate to Azure. For more information, see [this website](#linux-vms-are-conditionally-ready-in-an-azure-vm-assessment).
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-cli.md
[[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-> [!IMPORTANT]
-> Read replicas in Azure Database for MySQL - Flexible Server is in preview.
- In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL flexible server using the Azure CLI. To learn more about read replicas, see the [overview](concepts-read-replicas.md). > [!Note]
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-portal.md
Last updated 06/17/2021
[[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-> [!IMPORTANT]
-> Read replicas in Azure Database for MySQL - Flexible Server is in preview.
- In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL flexible server using the Azure portal. > [!Note]
mysql Howto Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-common-errors.md
Last updated 5/21/2021
-# Commonly encountered errors during or post migration to Azure Database for MySQL
+# Troubleshoot errors commonly encountered during or after migration to Azure Database for MySQL
[!INCLUDE[applies-to-mysql-single-flexible-server](includes/applies-to-mysql-single-flexible-server.md)]
mysql Howto Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-query-performance.md
Title: Troubleshoot query performance - Azure Database for MySQL
-description: Learn how to use EXPLAIN to troubleshoot query performance in Azure Database for MySQL.
+ Title: Profile query performance - Azure Database for MySQL
+description: Learn how to profile query performance in Azure Database for MySQL by using EXPLAIN.
Previously updated : 3/18/2020 Last updated : 3/10/2022
-# How to use EXPLAIN to profile query performance in Azure Database for MySQL
+# Profile query performance in Azure Database for MySQL using EXPLAIN
[!INCLUDE[applies-to-mysql-single-flexible-server](includes/applies-to-mysql-single-flexible-server.md)]
possible_keys: NULL
Extra: Using where ```
-As can be seen from this example, the value of *key* is NULL. This output means MySQL cannot find any indexes optimized for the query and it performs a full table scan. Let's optimize this query by adding an index on the **ID** column.
+As this example shows, the value of *key* is NULL. This means MySQL can't find any indexes optimized for the query, so it performs a full table scan. Let's optimize this query by adding an index on the **ID** column.
```sql mysql> ALTER TABLE tb1 ADD KEY (id);
possible_keys: NULL
Extra: Using where; Using temporary; Using filesort ```
-As can be seen from the output, MySQL does not use any indexes because no proper indexes are available. It also shows *Using temporary; Using file sort*, which means MySQL creates a temporary table to satisfy the **GROUP BY** clause.
+As the output shows, MySQL doesn't use any indexes because no suitable indexes are available. The output also shows *Using temporary; Using filesort*, which means MySQL creates a temporary table to satisfy the **GROUP BY** clause.
Creating an index on column **c2** alone makes no difference, and MySQL still needs to create a temporary table:
possible_keys: NULL
Extra: Using where; Using index ```
-The EXPLAIN now shows that MySQL is able to use combined index to avoid additional sorting since the index is already sorted.
+The EXPLAIN output now shows that MySQL can use a combined index to avoid additional sorting, since the index is already sorted.
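As a rough illustration only: the combined index implied by this output might be created as in the following sketch. Only table `tb1` and column `c2` appear in this excerpt, so the second column is a placeholder you'd replace with the column(s) your query actually selects or filters on.

```sql
-- Hypothetical sketch: add a combined (covering) index on tb1.
-- Replace other_column with the column(s) your query selects or filters on.
ALTER TABLE tb1 ADD KEY combined_idx (c2, other_column);

-- Re-run EXPLAIN to confirm the new index is used and the temporary table is gone.
EXPLAIN SELECT MAX(other_column), c2 FROM tb1 GROUP BY c2\G
```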
## Conclusion
-Using EXPLAIN and different type of Indexes can increase performance significantly. Having an index on the table does not necessarily mean MySQL would be able to use it for your queries. Always validate your assumptions using EXPLAIN and optimize your queries using indexes.
+Using EXPLAIN and different types of indexes can increase performance significantly. Having an index on a table doesn't necessarily mean MySQL can use it for your queries. Always validate your assumptions by using EXPLAIN, and optimize your queries with indexes.
## Next steps
mysql Howto Troubleshoot Sys Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-sys-schema.md
Title: Utilize sys_schema - Azure Database for MySQL
-description: Learn how to use sys_schema to find performance issues and maintain database in Azure Database for MySQL.
+ Title: Use the sys_schema - Azure Database for MySQL
+description: Learn how to use the sys_schema to find performance issues and maintain databases in Azure Database for MySQL.
Previously updated : 3/30/2020 Last updated : 3/10/2022
-# How to use sys_schema for performance tuning and database maintenance in Azure Database for MySQL
+# Tune performance and maintain databases in Azure Database for MySQL using the sys_schema
[!INCLUDE[applies-to-mysql-single-flexible-server](includes/applies-to-mysql-single-flexible-server.md)]
-The MySQL performance_schema, first available in MySQL 5.5, provides instrumentation for many vital server resources such as memory allocation, stored programs, metadata locking, etc. However, the performance_schema contains more than 80 tables, and getting the necessary information often requires joining tables within the performance_schema, as well as tables from the information_schema. Building on both performance_schema and information_schema, the sys_schema provides a powerful collection of [user-friendly views](https://dev.mysql.com/doc/refman/5.7/en/sys-schema-views.html) in a read-only database and is fully enabled in Azure Database for MySQL version 5.7.
+The MySQL performance_schema, first available in MySQL 5.5, provides instrumentation for many vital server resources such as memory allocation, stored programs, metadata locking, etc. However, the performance_schema contains more than 80 tables, and getting the necessary information often requires joining tables within the performance_schema with tables from the information_schema. Building on both performance_schema and information_schema, the sys_schema provides a powerful collection of [user-friendly views](https://dev.mysql.com/doc/refman/5.7/en/sys-schema-views.html) in a read-only database and is fully enabled in Azure Database for MySQL version 5.7.
:::image type="content" source="./media/howto-troubleshoot-sys-schema/sys-schema-views.png" alt-text="views of sys_schema":::
To troubleshoot database performance issues, it may be beneficial to identify th
:::image type="content" source="./media/howto-troubleshoot-sys-schema/summary-by-statement.png" alt-text="summary by statement":::
-In this example Azure Database for MySQL spent 53 minutes flushing the slog query log 44579 times. That's a long time and many IOs. You can reduce this activity by either disable your slow query log or decrease the frequency of slow query login Azure portal.
+In this example, Azure Database for MySQL spent 53 minutes flushing the slow query log 44,579 times. That's a long time and many IOs. You can reduce this activity by either disabling the slow query log or decreasing the frequency of slow query logging in the Azure portal.
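The exact query behind the screenshot above isn't shown in this excerpt, but as a minimal sketch, the `sys.user_summary_by_statement_type` view in the MySQL 5.7 sys schema surfaces this kind of per-statement-type summary:

```sql
-- Statement-type summary per user; expensive activity such as slow query
-- log flushes shows up as a high execution count and total latency.
SELECT user, statement, total, total_latency
FROM sys.user_summary_by_statement_type
ORDER BY total DESC
LIMIT 10;
```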
## Database maintenance
In this example Azure Database for MySQL spent 53 minutes flushing the slog quer
[!IMPORTANT] > Querying this view can impact performance. It is recommended to perform this troubleshooting during off-peak business hours.
-The InnoDB buffer pool resides in memory and is the main cache mechanism between the DBMS and storage. The size of the InnoDB buffer pool is tied to the performance tier and cannot be changed unless a different product SKU is chosen. As with memory in your operating system, old pages are swapped out to make room for fresher data. To find out which tables consume most of the InnoDB buffer pool memory, you can query the *sys.innodb_buffer_stats_by_table* view.
+The InnoDB buffer pool resides in memory and is the main cache mechanism between the DBMS and storage. The size of the InnoDB buffer pool is tied to the performance tier and can't be changed unless a different product SKU is chosen. As with memory in your operating system, old pages are swapped out to make room for fresher data. To find out which tables consume most of the InnoDB buffer pool memory, you can query the *sys.innodb_buffer_stats_by_table* view.
:::image type="content" source="./media/howto-troubleshoot-sys-schema/innodb-buffer-status.png" alt-text="InnoDB buffer status":::
-In the graphic above, it is apparent that other than system tables and views, each table in the mysqldatabase033 database, which hosts one of my WordPress sites, occupies 16 KB, or 1 page, of data in memory.
+In the graphic above, it's apparent that other than system tables and views, each table in the mysqldatabase033 database, which hosts one of my WordPress sites, occupies 16 KB, or 1 page, of data in memory.
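As a minimal sketch, a query against that view could look like the following; the column names come from the standard MySQL 5.7 sys schema definition of `sys.innodb_buffer_stats_by_table`.

```sql
-- Top tables by InnoDB buffer pool usage. As noted above, querying this
-- view can impact performance, so run it during off-peak hours.
SELECT object_schema, object_name, allocated, data, pages
FROM sys.innodb_buffer_stats_by_table
ORDER BY pages DESC
LIMIT 10;
```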
### *Sys.schema_unused_indexes* & *sys.schema_redundant_indexes*
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/policy-reference.md
Title: Built-in policy definitions for Azure Database for MySQL description: Lists Azure Policy built-in policy definitions for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
networking Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure networking description: Sample Azure Resource Graph queries for Azure networking showing use of resource types and tables to access Azure networking related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
openshift Howto Create A Storageclass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-storageclass.md
ARO_RESOURCE_GROUP=aro-rg
CLUSTER=cluster ARO_SERVICE_PRINCIPAL_ID=$(az aro show -g $ARO_RESOURCE_GROUP -n $CLUSTER --query servicePrincipalProfile.clientId -o tsv)
-az role assignment create --role Contributor --assignee $ARO_SERVICE_PRINCIPAL_ID -g $AZURE_FILES_RESOURCE_GROUP
+az role assignment create --role Contributor --scope /subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName --assignee $ARO_SERVICE_PRINCIPAL_ID -g $AZURE_FILES_RESOURCE_GROUP
``` ### Set ARO cluster permissions
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/policy-reference.md
Title: Built-in policy definitions for Azure Database for PostgreSQL description: Lists Azure Policy built-in policy definitions for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-lineage-user-guide.md
This article provides an overview of the data lineage features in Azure Purview
One of the platform features of Azure Purview is the ability to show the lineage between datasets created by data processes. Systems like Data Factory, Data Share, and Power BI capture the lineage of data as it moves. Custom lineage reporting is also supported via Atlas hooks and REST API. ## Lineage collection
- Metadata collected in Azure Purview from enterprise data systems are stitched across to show an end to end data lineage. Data systems that collect lineage into Azure Purview are broadly categorized into following three types.
+ Metadata collected in Azure Purview from enterprise data systems is stitched together to show end-to-end data lineage. Data systems that collect lineage into Azure Purview are broadly categorized into the following three types:
+
+ - [Data processing systems](#data-processing-systems)
+ - [Data storage systems](#data-storage-systems)
+ - [Data analytics and reporting systems](#data-analytics-and-reporting-systems)
+
+Each system supports a different level of lineage scope. Check the sections below, or your system's individual lineage article, to confirm the scope of lineage currently available.
-### Data processing system
+### Data processing systems
Data integration and ETL tools can push lineage in to Azure Purview at execution time. Tools such as Data Factory, Data Share, Synapse, Azure Databricks, and so on, belong to this category of data systems. The data processing systems reference datasets as source from different databases and storage solutions to create target datasets. The list of data processing systems currently integrated with Azure Purview for lineage are listed in below table. | Data processing system | Supported scope | | - | | | Azure Data Factory | [Copy activity](how-to-link-azure-data-factory.md#copy-activity-support) <br> [Data flow activity](how-to-link-azure-data-factory.md#data-flow-support) <br> [Execute SSIS package activity](how-to-link-azure-data-factory.md#execute-ssis-package-support) | | Azure Synapse Analytics | [Copy activity](how-to-lineage-azure-synapse-analytics.md#copy-activity-support) <br> [Data flow activity](how-to-lineage-azure-synapse-analytics.md#data-flow-support) |
+| Azure SQL Database (Preview) | [Lineage extraction](register-scan-azure-sql-database.md?tabs=sql-authentication#lineagepreview) |
| Azure Data Share | [Share snapshot](how-to-link-azure-data-share.md) | ### Data storage systems
Databases & storage solutions such as Oracle, Teradata, and SAP have query engin
|| [SAP ECC](register-scan-sapecc-source.md)| || [SAP S/4HANA](register-scan-saps4hana-source.md) |
-### Data analytics & reporting systems
+### Data analytics and reporting systems
Data systems like Azure ML and Power BI report lineage into Azure Purview. These systems will use the datasets from storage systems and process through their meta model to create BI Dashboard, ML experiments and so on. | Data analytics & reporting system | Supported scope |
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md
Currently, the following data sources are supported to have a managed private en
- Azure Blob Storage - Azure Data Lake Storage Gen 2 - Azure SQL Database -- Azure SQL Database Managed Instances
+- Azure SQL Database Managed Instance
- Azure Cosmos DB - Azure Synapse Analytics - Azure Files
purview How To Lineage Spark Atlas Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-spark-atlas-connector.md
The connectors require a version of Spark 2.4.0+. But Spark version 3 is not sup
| spark.sql.queryExecutionListeners | 2.3.0 | | spark.sql.streaming.streamingQueryListeners | 2.4.0 |
-If the Spark cluster version is below 2.4.0, Stream query lineage and most of the query lineage will not be captured.
+>[!IMPORTANT]
+> * If the Spark cluster version is below 2.4.0, Stream query lineage and most of the query lineage will not be captured.
+>
+> * Spark version 3 is not supported.
### Step 1. Prepare Spark Atlas connector package The following steps are documented based on DataBricks as an example:
purview Register Scan Azure Sql Database Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database-managed-instance.md
This article outlines how to register and Azure SQL Database Managed Instance, a
* [Configure public endpoint in Azure SQL Managed Instance](../azure-sql/managed-instance/public-endpoint-configure.md) > [!Note]
- > We now support scanning Azure SQL Database Managed Instances that are configured with private endpoints using Azure Purview ingestion private endpoints and a self-hosted integration runtime VM.
+ > We now support scanning Azure SQL Database Managed Instances over the private connection using Azure Purview ingestion private endpoints and a self-hosted integration runtime VM.
> For more information related to prerequisites, see [Connect to your Azure Purview and scan data sources privately and securely](./catalog-private-link-end-to-end.md) ## Register
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Select your method of authentication from the tabs below for steps to authentica
> [!Note] > Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. It takes about **15 minutes** after granting permission, the Azure Purview account should have the appropriate permissions to be able to scan the resource(s).
-You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a sign in for Azure SQL Database. You'll' need **username** and **password** for the next steps.
+1. You'll need a SQL login with at least `db_datareader` permissions to be able to access the information Azure Purview needs to scan the database. You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a login for Azure SQL Database; an illustrative T-SQL sketch also follows these steps. You'll need to save the **username** and **password** for the next steps.
-1. Navigate to your key vault in the Azure portal
+1. Navigate to your key vault in the Azure portal.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-key-vault.png" alt-text="Screenshot that shows the key vault.":::
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret.png" alt-text="Screenshot that shows the key vault option to generate a secret.":::
-1. Enter the **Name** and **Value** as the *password* from your Azure SQL Database
+1. Enter the **Name** and **Value** as the *password* from your Azure SQL Database.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret-sql.png" alt-text="Screenshot that shows the key vault option to enter the sql secret values.":::
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
1. If your key vault isn't connected to Azure Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
-1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan
+1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-credentials.png" alt-text="Screenshot that shows the key vault option to set up credentials.":::
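For reference, here's a minimal T-SQL sketch of step 1. The login name is hypothetical and the password is a placeholder; run the first statement in the **master** database and the remaining statements in the database you want Azure Purview to scan.

```sql
-- Run in the master database: create the server-level login (hypothetical name).
CREATE LOGIN purview_reader WITH PASSWORD = '<strong-password>';

-- Run in the database to be scanned: map a database user to the login
-- and grant the minimum db_datareader access described in step 1.
CREATE USER purview_reader FOR LOGIN purview_reader;
ALTER ROLE db_datareader ADD MEMBER purview_reader;
```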
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-resource-group.md
The limit for Azure Purview policies that can be enforced by Storage accounts is
Check blog, demo and related tutorials * [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
-* [Demo of data owner access policies for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Demo of data owner access policies for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4).
* [Fine-grain data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 02/14/2022 Last updated : 03/10/2022
The following table provides a brief description of each built-in role. Click th
> | [Log Analytics Reader](#log-analytics-reader) | Log Analytics Reader can view and search all monitoring data as well as and view monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources. | 73c42c96-874c-492b-b04d-ab87d138a893 | > | [Schema Registry Contributor (Preview)](#schema-registry-contributor-preview) | Read, write, and delete Schema Registry groups and schemas. | 5dffeca3-4936-4216-b2bc-10343a5abb25 | > | [Schema Registry Reader (Preview)](#schema-registry-reader-preview) | Read and list Schema Registry groups and schemas. | 2c56ea50-c6b3-40a6-83c0-9d98858bc7d2 |
+> | [Stream Analytics Query Tester](#stream-analytics-query-tester) | Lets you perform query testing without creating a stream analytics job first | 1ec5b3c1-b17e-4e25-8312-2acb3c3c5abf |
> | **AI + machine learning** | | | > | [AzureML Data Scientist](#azureml-data-scientist) | Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself. | f6c7c914-8db3-469d-8ca1-694a8f32e121 | > | [Cognitive Services Contributor](#cognitive-services-contributor) | Lets you create, read, update, delete and manage keys of Cognitive Services. | 25fbc0a9-bd7c-42a3-aa1a-3b75d497ee68 |
The following table provides a brief description of each built-in role. Click th
> | [Logic App Contributor](#logic-app-contributor) | Lets you manage logic apps, but not change access to them. | 87a39d53-fc1b-424a-814c-f7e04687dc9e | > | [Logic App Operator](#logic-app-operator) | Lets you read, enable, and disable logic apps, but not edit or update them. | 515c2055-d9d4-4321-b1b9-bd0c9a0f79fe | > | **Identity** | | |
+> | [Domain Services Contributor](#domain-services-contributor) | Can manage Azure AD Domain Services and related network configurations | eeaeda52-9324-47f6-8069-5d5bade478b2 |
+> | [Domain Services Reader](#domain-services-reader) | Can view Azure AD Domain Services and related network configurations | 361898ef-9ed1-48c2-849c-a832951106bb |
> | [Managed Identity Contributor](#managed-identity-contributor) | Create, Read, Update, and Delete User Assigned Identity | e40ec5ca-96e0-45a2-b4ff-59039f2c2b59 | > | [Managed Identity Operator](#managed-identity-operator) | Read and Assign User Assigned Identity | f1a07417-d97a-45cb-824c-7a7467783830 | > | **Security** | | |
Read and list Schema Registry groups and schemas.
} ```
+### Stream Analytics Query Tester
+
+Lets you perform query testing without creating a stream analytics job first
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.StreamAnalytics](resource-provider-operations.md#microsoftstreamanalytics)/locations/TestQuery/action | Test Query for Stream Analytics Resource Provider |
+> | [Microsoft.StreamAnalytics](resource-provider-operations.md#microsoftstreamanalytics)/locations/OperationResults/read | Read Stream Analytics Operation Result |
+> | [Microsoft.StreamAnalytics](resource-provider-operations.md#microsoftstreamanalytics)/locations/SampleInput/action | Sample Input for Stream Analytics Resource Provider |
+> | [Microsoft.StreamAnalytics](resource-provider-operations.md#microsoftstreamanalytics)/locations/CompileQuery/action | Compile Query for Stream Analytics Resource Provider |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Lets you perform query testing without creating a stream analytics job first",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/1ec5b3c1-b17e-4e25-8312-2acb3c3c5abf",
+ "name": "1ec5b3c1-b17e-4e25-8312-2acb3c3c5abf",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.StreamAnalytics/locations/TestQuery/action",
+ "Microsoft.StreamAnalytics/locations/OperationResults/read",
+ "Microsoft.StreamAnalytics/locations/SampleInput/action",
+ "Microsoft.StreamAnalytics/locations/CompileQuery/action"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Stream Analytics Query Tester",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
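
As an illustrative example (the assignee and scope below are placeholders, not values from the original article), you could grant this role with the Azure CLI:

```azurecli
# Assign the Stream Analytics Query Tester role at resource group scope.
# The assignee and scope are example values.
az role assignment create \
  --role "Stream Analytics Query Tester" \
  --assignee "user1@contoso.com" \
  --scope "/subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName"
```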
+ ## AI + machine learning
Lets you read, enable, and disable logic apps, but not edit or update them. [Lea
## Identity
+### Domain Services Contributor
+
+Can manage Azure AD Domain Services and related network configurations [Learn more](../active-directory-domain-services/tutorial-create-instance.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/read | Gets or lists deployments. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/write | Creates or updates a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/delete | Deletes a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/cancel/action | Cancels a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/validate/action | Validates a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/whatIf/action | Predicts template deployment changes. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/exportTemplate/action | Export template for a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operationstatuses/read | Gets or lists deployment operation statuses. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Write | Create or update a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Delete | Delete a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Read | Read a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Activated/Action | Classic metric alert activated |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Resolved/Action | Classic metric alert resolved |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Throttled/Action | Classic metric alert rule throttled |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Incidents/Read | Read a classic metric alert incident |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/register/action | Register Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/unregister/action | Unregister Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/read | Read Domain Services |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/write | Write Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/delete | Delete Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the Domain Service resource |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/read | Read Ou Containers |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/write | Write Ou Container |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/delete | Delete Ou Container |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/register/action | Registers the subscription |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/unregister/action | Unregisters the subscription |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/read | Get the virtual network definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/write | Creates a virtual network or updates an existing virtual network |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/delete | Deletes a virtual network |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/peer/action | Peers a virtual network with another virtual network |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/join/action | Joins a virtual network. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/read | Gets a virtual network subnet definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/write | Creates a virtual network subnet or updates an existing virtual network subnet |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/delete | Deletes a virtual network subnet |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/join/action | Joins a virtual network. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/virtualNetworkPeerings/read | Gets a virtual network peering definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/virtualNetworkPeerings/write | Creates a virtual network peering or updates an existing virtual network peering |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/virtualNetworkPeerings/delete | Deletes a virtual network peering |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings of Virtual Network |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/providers/Microsoft.Insights/metricDefinitions/read | Gets available metrics for the PingMesh |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/azureFirewalls/read | Get Azure Firewall |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/ddosProtectionPlans/read | Gets a DDoS Protection Plan |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/ddosProtectionPlans/join/action | Joins a DDoS Protection Plan. Not alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/read | Gets a load balancer definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/delete | Deletes a load balancer |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/*/read | |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/backendAddressPools/join/action | Joins a load balancer backend address pool. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/inboundNatRules/join/action | Joins a load balancer inbound nat rule. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/natGateways/join/action | Joins a NAT Gateway |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/read | Gets a network interface definition. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/write | Creates a network interface or updates an existing network interface. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/delete | Deletes a network interface |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/join/action | Joins a Virtual Machine to a network interface. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/defaultSecurityRules/read | Gets a default security rule definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/read | Gets a network security group definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/write | Creates a network security group or updates an existing network security group |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/delete | Deletes a network security group |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/join/action | Joins a network security group. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/read | Gets a security rule definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/write | Creates a security rule or updates an existing security rule |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/delete | Deletes a security rule |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/read | Gets a route table definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/write | Creates a route table or Updates an existing route table |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/delete | Deletes a route table definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/join/action | Joins a route table. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/routes/read | Gets a route definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/routes/write | Creates a route or Updates an existing route |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/routes/delete | Deletes a route definition |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can manage Azure AD Domain Services and related network configurations",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/eeaeda52-9324-47f6-8069-5d5bade478b2",
+ "name": "eeaeda52-9324-47f6-8069-5d5bade478b2",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/read",
+ "Microsoft.Resources/deployments/write",
+ "Microsoft.Resources/deployments/delete",
+ "Microsoft.Resources/deployments/cancel/action",
+ "Microsoft.Resources/deployments/validate/action",
+ "Microsoft.Resources/deployments/whatIf/action",
+ "Microsoft.Resources/deployments/exportTemplate/action",
+ "Microsoft.Resources/deployments/operations/read",
+ "Microsoft.Resources/deployments/operationstatuses/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Insights/AlertRules/Write",
+ "Microsoft.Insights/AlertRules/Delete",
+ "Microsoft.Insights/AlertRules/Read",
+ "Microsoft.Insights/AlertRules/Activated/Action",
+ "Microsoft.Insights/AlertRules/Resolved/Action",
+ "Microsoft.Insights/AlertRules/Throttled/Action",
+ "Microsoft.Insights/AlertRules/Incidents/Read",
+ "Microsoft.AAD/register/action",
+ "Microsoft.AAD/unregister/action",
+ "Microsoft.AAD/domainServices/read",
+ "Microsoft.AAD/domainServices/write",
+ "Microsoft.AAD/domainServices/delete",
+ "Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/read",
+ "Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/write",
+ "Microsoft.AAD/domainServices/providers/Microsoft.Insights/logDefinitions/read",
+ "Microsoft.AAD/domainServices/oucontainer/read",
+ "Microsoft.AAD/domainServices/oucontainer/write",
+ "Microsoft.AAD/domainServices/oucontainer/delete",
+ "Microsoft.Network/register/action",
+ "Microsoft.Network/unregister/action",
+ "Microsoft.Network/virtualNetworks/read",
+ "Microsoft.Network/virtualNetworks/write",
+ "Microsoft.Network/virtualNetworks/delete",
+ "Microsoft.Network/virtualNetworks/peer/action",
+ "Microsoft.Network/virtualNetworks/join/action",
+ "Microsoft.Network/virtualNetworks/subnets/read",
+ "Microsoft.Network/virtualNetworks/subnets/write",
+ "Microsoft.Network/virtualNetworks/subnets/delete",
+ "Microsoft.Network/virtualNetworks/subnets/join/action",
+ "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read",
+ "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write",
+ "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/delete",
+ "Microsoft.Network/virtualNetworks/providers/Microsoft.Insights/diagnosticSettings/read",
+ "Microsoft.Network/virtualNetworks/providers/Microsoft.Insights/metricDefinitions/read",
+ "Microsoft.Network/azureFirewalls/read",
+ "Microsoft.Network/ddosProtectionPlans/read",
+ "Microsoft.Network/ddosProtectionPlans/join/action",
+ "Microsoft.Network/loadBalancers/read",
+ "Microsoft.Network/loadBalancers/delete",
+ "Microsoft.Network/loadBalancers/*/read",
+ "Microsoft.Network/loadBalancers/backendAddressPools/join/action",
+ "Microsoft.Network/loadBalancers/inboundNatRules/join/action",
+ "Microsoft.Network/natGateways/join/action",
+ "Microsoft.Network/networkInterfaces/read",
+ "Microsoft.Network/networkInterfaces/write",
+ "Microsoft.Network/networkInterfaces/delete",
+ "Microsoft.Network/networkInterfaces/join/action",
+ "Microsoft.Network/networkSecurityGroups/defaultSecurityRules/read",
+ "Microsoft.Network/networkSecurityGroups/read",
+ "Microsoft.Network/networkSecurityGroups/write",
+ "Microsoft.Network/networkSecurityGroups/delete",
+ "Microsoft.Network/networkSecurityGroups/join/action",
+ "Microsoft.Network/networkSecurityGroups/securityRules/read",
+ "Microsoft.Network/networkSecurityGroups/securityRules/write",
+ "Microsoft.Network/networkSecurityGroups/securityRules/delete",
+ "Microsoft.Network/routeTables/read",
+ "Microsoft.Network/routeTables/write",
+ "Microsoft.Network/routeTables/delete",
+ "Microsoft.Network/routeTables/join/action",
+ "Microsoft.Network/routeTables/routes/read",
+ "Microsoft.Network/routeTables/routes/write",
+ "Microsoft.Network/routeTables/routes/delete"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Domain Services Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
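
To confirm the actions listed above against your own environment, you can retrieve the built-in definition with the Azure CLI; this is an illustrative sketch rather than part of the original article.

```azurecli
# Print the Domain Services Contributor definition, including its Actions list.
az role definition list \
  --name "Domain Services Contributor" \
  --output json
```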
+
+### Domain Services Reader
+
+Can view Azure AD Domain Services and related network configurations
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/read | Gets or lists deployments. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operationstatuses/read | Gets or lists deployment operation statuses. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Read | Read a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Incidents/Read | Read a classic metric alert incident |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/read | Read Domain Services |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/read | Read Ou Containers |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/OutboundNetworkDependenciesEndpoints/read | Get the network endpoints of all outbound dependencies |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for Domain Service |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/read | Get the virtual network definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/read | Gets a virtual network subnet definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/virtualNetworkPeerings/read | Gets a virtual network peering definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings of Virtual Network |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/providers/Microsoft.Insights/metricDefinitions/read | Gets available metrics for the PingMesh |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/azureFirewalls/read | Get Azure Firewall |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/ddosProtectionPlans/read | Gets a DDoS Protection Plan |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/read | Gets a load balancer definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/*/read | |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/natGateways/read | Gets a Nat Gateway Definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/read | Gets a network interface definition. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/defaultSecurityRules/read | Gets a default security rule definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/read | Gets a network security group definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/read | Gets a security rule definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/read | Gets a route table definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/routes/read | Gets a route definition |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can view Azure AD Domain Services and related network configurations",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/361898ef-9ed1-48c2-849c-a832951106bb",
+ "name": "361898ef-9ed1-48c2-849c-a832951106bb",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/read",
+ "Microsoft.Resources/deployments/operations/read",
+ "Microsoft.Resources/deployments/operationstatuses/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Insights/AlertRules/Read",
+ "Microsoft.Insights/AlertRules/Incidents/Read",
+ "Microsoft.AAD/domainServices/read",
+ "Microsoft.AAD/domainServices/oucontainer/read",
+ "Microsoft.AAD/domainServices/OutboundNetworkDependenciesEndpoints/read",
+ "Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/read",
+ "Microsoft.AAD/domainServices/providers/Microsoft.Insights/logDefinitions/read",
+ "Microsoft.Network/virtualNetworks/read",
+ "Microsoft.Network/virtualNetworks/subnets/read",
+ "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read",
+ "Microsoft.Network/virtualNetworks/providers/Microsoft.Insights/diagnosticSettings/read",
+ "Microsoft.Network/virtualNetworks/providers/Microsoft.Insights/metricDefinitions/read",
+ "Microsoft.Network/azureFirewalls/read",
+ "Microsoft.Network/ddosProtectionPlans/read",
+ "Microsoft.Network/loadBalancers/read",
+ "Microsoft.Network/loadBalancers/*/read",
+ "Microsoft.Network/natGateways/read",
+ "Microsoft.Network/networkInterfaces/read",
+ "Microsoft.Network/networkSecurityGroups/defaultSecurityRules/read",
+ "Microsoft.Network/networkSecurityGroups/read",
+ "Microsoft.Network/networkSecurityGroups/securityRules/read",
+ "Microsoft.Network/routeTables/read",
+ "Microsoft.Network/routeTables/routes/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Domain Services Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
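
As a hedged sketch (the group object ID and resource group name are example values, not from the original article), a read-only assignment might look like this:

```azurecli
# Grant read-only access to a managed domain and its related network resources.
# The object ID and scope below are placeholders.
az role assignment create \
  --role "Domain Services Reader" \
  --assignee-object-id "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
  --assignee-principal-type "Group" \
  --scope "/subscriptions/mySubscriptionID/resourceGroups/myAaddsResourceGroup"
```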
+ ### Managed Identity Contributor Create, Read, Update, and Delete User Assigned Identity [Learn more](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md)
Lets you manage managed HSM pools, but not access to them. [Learn more](../key-v
> | Actions | Description | > | | | > | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/managedHSMs/* | |
+> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/deletedManagedHsms/read | View the properties of a deleted managed hsm |
+> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/locations/deletedManagedHsms/read | View the properties of a deleted managed hsm |
+> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/locations/deletedManagedHsms/purge/action | Purge a soft deleted managed hsm |
+> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/locations/managedHsmOperationResults/read | Check the result of a long run operation |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Lets you manage managed HSM pools, but not access to them. [Learn more](../key-v
"permissions": [ { "actions": [
- "Microsoft.KeyVault/managedHSMs/*"
+ "Microsoft.KeyVault/managedHSMs/*",
+ "Microsoft.KeyVault/deletedManagedHsms/read",
+ "Microsoft.KeyVault/locations/deletedManagedHsms/read",
+ "Microsoft.KeyVault/locations/deletedManagedHsms/purge/action",
+ "Microsoft.KeyVault/locations/managedHsmOperationResults/read"
], "notActions": [], "dataActions": [],
Can read all monitoring data and edit monitoring settings. See also [Get started
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/activityLogAlerts/* | | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/* | Create and manage a classic metric alert | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/components/* | Create and manage Insights components |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/createNotifications/* | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/dataCollectionEndpoints/* | | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/dataCollectionRules/* | | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/dataCollectionRuleAssociations/* | |
Can read all monitoring data and edit monitoring settings. See also [Get started
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/metricalerts/* | | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/MetricDefinitions/* | Read metric definitions (list of available metric types for a resource). | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/Metrics/* | Read metrics for a resource. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/notificationStatus/* | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/Register/Action | Register the Microsoft Insights provider | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/scheduledqueryrules/* | | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/webtests/* | Create and manage Insights web tests |
Can read all monitoring data and edit monitoring settings. See also [Get started
> | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/smartDetectorAlertRules/* | | > | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/actionRules/* | | > | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/smartGroups/* | |
+> | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/migrateFromSmartDetection/* | |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Can read all monitoring data and edit monitoring settings. See also [Get started
"Microsoft.Insights/activityLogAlerts/*", "Microsoft.Insights/AlertRules/*", "Microsoft.Insights/components/*",
+ "Microsoft.Insights/createNotifications/*",
"Microsoft.Insights/dataCollectionEndpoints/*", "Microsoft.Insights/dataCollectionRules/*", "Microsoft.Insights/dataCollectionRuleAssociations/*",
Can read all monitoring data and edit monitoring settings. See also [Get started
"Microsoft.Insights/metricalerts/*", "Microsoft.Insights/MetricDefinitions/*", "Microsoft.Insights/Metrics/*",
+ "Microsoft.Insights/notificationStatus/*",
"Microsoft.Insights/Register/Action", "Microsoft.Insights/scheduledqueryrules/*", "Microsoft.Insights/webtests/*",
Can read all monitoring data and edit monitoring settings. See also [Get started
"Microsoft.WorkloadMonitor/monitors/*", "Microsoft.AlertsManagement/smartDetectorAlertRules/*", "Microsoft.AlertsManagement/actionRules/*",
- "Microsoft.AlertsManagement/smartGroups/*"
+ "Microsoft.AlertsManagement/smartGroups/*",
+ "Microsoft.AlertsManagement/migrateFromSmartDetection/*"
], "notActions": [], "dataActions": [],
role-based-access-control Conditions Role Assignments Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-cli.md
To add a role assignment condition, use [az role assignment create](/cli/azure/r
The following example shows how to assign the [Storage Blob Data Reader](built-in-roles.md#storage-blob-data-reader) role with a condition. The condition checks whether container name equals 'blobs-example-container'. ```azurecli
-az role assignment create --role "Storage Blob Data Reader" --assignee "user1@contoso.com" --resource-group {resourceGroup} \
+az role assignment create --role "Storage Blob Data Reader" --scope /subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName --assignee "user1@contoso.com" \
--description "Read access if container name equals blobs-example-container" \ --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'))" \ --condition-version "2.0"
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 02/14/2022 Last updated : 03/10/2022
Azure service: [Virtual Machines](../virtual-machines/index.yml), [Virtual Machi
> | Microsoft.Compute/capacityReservationGroups/read | Get the properties of a capacity reservation group | > | Microsoft.Compute/capacityReservationGroups/write | Creates a new capacity reservation group or updates an existing capacity reservation group | > | Microsoft.Compute/capacityReservationGroups/delete | Deletes the capacity reservation group |
+> | Microsoft.Compute/capacityReservationGroups/deploy/action | Deploy a new VM/VMSS using Capacity Reservation Group |
> | Microsoft.Compute/capacityReservationGroups/capacityReservations/read | Get the properties of a capacity reservation | > | Microsoft.Compute/capacityReservationGroups/capacityReservations/write | Creates a new capacity reservation or updates an existing capacity reservation | > | Microsoft.Compute/capacityReservationGroups/capacityReservations/delete | Deletes the capacity reservation |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/virtualHubs/read | Get a Virtual Hub | > | Microsoft.Network/virtualHubs/write | Create or update a Virtual Hub | > | Microsoft.Network/virtualHubs/effectiveRoutes/action | Gets effective route configured on Virtual Hub |
-> | Microsoft.Network/virtualHubs/migrateRouteService/action | Migrate the route service of a virtual hub from traditional CloudService To Virtual Machine Scale Set |
+> | Microsoft.Network/virtualHubs/migrateRouteService/action | Validate or execute the hub router migration |
> | Microsoft.Network/virtualHubs/inboundRoutes/action | Gets routes learnt from a virtual wan connection | > | Microsoft.Network/virtualHubs/outboundRoutes/action | Get Routes advertised by a virtual wan connection | > | Microsoft.Network/virtualHubs/bgpConnections/read | Gets a Hub Bgp Connection child resource of Virtual Hub |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/virtualNetworks/peer/action | Peers a virtual network with another virtual network | > | Microsoft.Network/virtualNetworks/join/action | Joins a virtual network. Not Alertable. | > | Microsoft.Network/virtualNetworks/BastionHosts/action | Gets Bastion Host references in a Virtual Network. |
+> | Microsoft.Network/virtualNetworks/listNetworkManagerEffectiveConnectivityConfigurations/action | List Network Manager Effective Connectivity Configurations |
+> | Microsoft.Network/virtualNetworks/listNetworkManagerEffectiveSecurityAdminRules/action | List Network Manager Effective Security Admin Rules |
> | Microsoft.Network/virtualNetworks/bastionHosts/default/action | Gets Bastion Host references in a Virtual Network. | > | Microsoft.Network/virtualNetworks/checkIpAddressAvailability/read | Check if Ip Address is available at the specified virtual network | > | Microsoft.Network/virtualNetworks/customViews/read | Get definition of a custom view of Virtual Network |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/vpnGateways/write | Puts a VpnGateway. | > | Microsoft.Network/vpnGateways/delete | Deletes a VpnGateway. | > | microsoft.network/vpngateways/reset/action | Resets a VpnGateway |
+> | microsoft.network/vpngateways/getbgppeerstatus/action | Gets bgp peer status of a VpnGateway |
+> | microsoft.network/vpngateways/getlearnedroutes/action | Gets learned routes of a VpnGateway |
+> | microsoft.network/vpngateways/getadvertisedroutes/action | Gets advertised routes of a VpnGateway |
> | microsoft.network/vpngateways/startpacketcapture/action | Start Vpn gateway Packet Capture with according resource | > | microsoft.network/vpngateways/stoppacketcapture/action | Stop Vpn gateway Packet Capture with sasURL | > | microsoft.network/vpngateways/listvpnconnectionshealth/action | Gets connection health for all or a subset of connections on a VpnGateway |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/locations/checkinventory/action | Checks ReservedCapacity inventory. | > | Microsoft.NetApp/locations/operationresults/read | Reads an operation result resource. | > | Microsoft.NetApp/locations/quotaLimits/read | Reads a Quotalimit resource type. |
+> | Microsoft.NetApp/locations/RegionInfo/read | Reads a regionInfo resource. |
> | Microsoft.NetApp/netAppAccounts/read | Reads an account resource. | > | Microsoft.NetApp/netAppAccounts/write | Writes an account resource. | > | Microsoft.NetApp/netAppAccounts/delete | Deletes an account resource. |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/subvolumes/write | Write a subvolume resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/subvolumes/delete | | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/subvolumes/GetMetadata/action | Read subvolume metadata resource. |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/volumeQuotaRules/read | Reads a Volume quota rule resource. |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/volumeQuotaRules/write | Writes Volume quota rule resource. |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/volumeQuotaRules/delete | Deletes a Volume quota rule resource. |
> | Microsoft.NetApp/netAppAccounts/snapshotPolicies/read | Reads a snapshot policy resource. | > | Microsoft.NetApp/netAppAccounts/snapshotPolicies/write | Writes a snapshot policy resource. | > | Microsoft.NetApp/netAppAccounts/snapshotPolicies/delete | Deletes a snapshot policy resource. |
Azure service: [Azure Spring Cloud](../spring-cloud/index.yml)
> | Microsoft.AppPlatform/Spring/stop/action | Stop a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/start/action | Start a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/configServers/action | Validate the config server settings for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/read | Get the API portal for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/write | Create or update the API portal for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/delete | Delete the API portal for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/validateDomain/action | Validate the API portal domain for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/domains/read | Get the API portal domain for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/domains/write | Create or update the API portal domain for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/domains/delete | Delete the API portal domain for a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/apps/write | Create or update the application for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/apps/delete | Delete the application for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/apps/read | Get the applications for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action | Get the resource upload URL of a specific Microsoft Azure Spring Cloud application | > | Microsoft.AppPlatform/Spring/apps/validateDomain/action | Validate the custom domain for a specific application |
+> | Microsoft.AppPlatform/Spring/apps/setActiveDeployments/action | Set active deployments for a specific Microsoft Azure Spring Cloud application |
> | Microsoft.AppPlatform/Spring/apps/bindings/write | Create or update the binding for a specific application | > | Microsoft.AppPlatform/Spring/apps/bindings/delete | Delete the binding for a specific application | > | Microsoft.AppPlatform/Spring/apps/bindings/read | Get the bindings for a specific application |
Azure service: [Azure Spring Cloud](../spring-cloud/index.yml)
> | Microsoft.AppPlatform/Spring/apps/domains/write | Create or update the custom domain for a specific application | > | Microsoft.AppPlatform/Spring/apps/domains/delete | Delete the custom domain for a specific application | > | Microsoft.AppPlatform/Spring/apps/domains/read | Get the custom domains for a specific application |
+> | Microsoft.AppPlatform/Spring/buildServices/read | Get the Build Services for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/getResourceUploadUrl/action | Get the Upload URL of a specific Microsoft Azure Spring Cloud build |
+> | Microsoft.AppPlatform/Spring/buildServices/agentPools/read | Get the Agent Pools for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/agentPools/write | Create or update the Agent Pools for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builders/read | Get the Builders for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builders/write | Create or update the Builders for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builders/delete | Delete the Builders for a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/read | Get the BuildpackBinding for a specific Azure Spring Cloud service instance Builder | > | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/write | Create or update the BuildpackBinding for a specific Azure Spring Cloud service instance Builder | > | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/delete | Delete the BuildpackBinding for a specific Azure Spring Cloud service instance Builder |
+> | Microsoft.AppPlatform/Spring/buildServices/builds/read | Get the Builds for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builds/write | Create or update the Builds for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builds/results/read | Get the Build Results for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builds/results/getLogFileUrl/action | Get the Log File URL of a specific Microsoft Azure Spring Cloud build result |
+> | Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks/read | Get the Supported Buildpacks for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/supportedStacks/read | Get the Supported Stacks for a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/certificates/write | Create or update the certificate for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/certificates/delete | Delete the certificate for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/certificates/read | Get the certificates for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/configServers/read | Get the config server for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/configServers/write | Create or update the config server for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/configurationServices/read | Get the Application Configuration Services for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/configurationServices/write | Create or update the Application Configuration Service for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/configurationServices/delete | Delete the Application Configuration Service for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/configurationServices/validate/action | Validate the settings for a specific Application Configuration Service |
> | Microsoft.AppPlatform/Spring/deployments/read | Get the deployments for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/detectors/read | Get the detectors for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/gateways/read | Get the Spring Cloud Gateways for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/gateways/write | Create or update the Spring Cloud Gateway for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/gateways/delete | Delete the Spring Cloud Gateway for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/gateways/validateDomain/action | Validate the Spring Cloud Gateway domain for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/gateways/domains/read | Get the Spring Cloud Gateways domain for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/gateways/domains/write | Create or update the Spring Cloud Gateway domain for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/gateways/domains/delete | Delete the Spring Cloud Gateway domain for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/gateways/routeConfigs/read | Get the Spring Cloud Gateway route config for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/gateways/routeConfigs/write | Create or update the Spring Cloud Gateway route config for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/gateways/routeConfigs/delete | Delete the Spring Cloud Gateway route config for a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/monitoringSettings/read | Get the monitoring setting for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/monitoringSettings/write | Create or update the monitoring setting for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/diagnosticSettings/write | Create or update the diagnostic settings for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/logDefinitions/read | Get definitions of logs from Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/metricDefinitions/read | Get definitions of metrics from Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/serviceRegistries/read | Get the Service Registries for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/serviceRegistries/write | Create or update the Service Registry for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/serviceRegistries/delete | Delete the Service Registry for a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/storages/write | Create or update the storage for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/storages/delete | Delete the storage for a specific Azure Spring Cloud service instance | > | Microsoft.AppPlatform/Spring/storages/read | Get storage for a specific Azure Spring Cloud service instance |
Azure service: [Media Services](../media-services/index.yml)
> | Microsoft.Media/unregister/action | Unregisters the subscription for the Media Services resource provider | > | Microsoft.Media/checknameavailability/action | Checks if a Media Services account name is available | > | Microsoft.Media/locations/checkNameAvailability/action | Checks if a Media Services account name is available |
+> | Microsoft.Media/locations/mediaServiceOperationResults/read | Read any Media Services Operation Result |
+> | Microsoft.Media/locations/mediaserviceOperationStatuses/read | Read Any Media Service Operation Status |
+> | Microsoft.Media/locations/videoAnalyzerOperationResults/read | Read any Video Analyzer Operation Result |
+> | Microsoft.Media/locations/videoAnalyzerOperationStatuses/read | Read any Video Analyzer Operation Status |
> | Microsoft.Media/mediaservices/read | Read any Media Services Account | > | Microsoft.Media/mediaservices/write | Create or Update any Media Services Account | > | Microsoft.Media/mediaservices/delete | Delete any Media Services Account |
Azure service: [Container Registry](../container-registry/index.yml)
> | Microsoft.ContainerRegistry/registries/queueBuild/action | Creates a new build based on the request parameters and add it to the build queue. | > | Microsoft.ContainerRegistry/registries/listBuildSourceUploadUrl/action | Get source upload url location for a container registry. | > | Microsoft.ContainerRegistry/registries/scheduleRun/action | Schedule a run against a container registry. |
+> | Microsoft.ContainerRegistry/registries/privateEndpointConnectionsApproval/action | Auto Approves a Private Endpoint Connection |
> | Microsoft.ContainerRegistry/registries/agentpools/read | Get a agentpool for a container registry or list all agentpools. | > | Microsoft.ContainerRegistry/registries/agentpools/write | Create or Update an agentpool for a container registry. | > | Microsoft.ContainerRegistry/registries/agentpools/delete | Delete an agentpool for a container registry. |
Azure service: [Azure Kubernetes Service (AKS)](../aks/index.yml)
> | Microsoft.ContainerService/managedClusters/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for Managed Cluster | > | Microsoft.ContainerService/managedClusters/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for Managed Cluster | > | Microsoft.ContainerService/managedClusters/upgradeProfiles/read | Gets the upgrade profile of the cluster |
+> | Microsoft.ContainerService/managedclustersnapshots/read | Get a managed cluster snapshot |
+> | Microsoft.ContainerService/managedclustersnapshots/write | Creates a new managed cluster snapshot |
+> | Microsoft.ContainerService/managedclustersnapshots/delete | Deletes a managed cluster snapshot |
> | Microsoft.ContainerService/openShiftClusters/read | Get an Open Shift Cluster | > | Microsoft.ContainerService/openShiftClusters/write | Creates a new Open Shift Cluster or updates an existing one | > | Microsoft.ContainerService/openShiftClusters/delete | Delete an Open Shift Cluster |
Azure service: [Azure Database Migration Service](../dms/index.yml)
> [!div class="mx-tableFixed"] > | Action | Description | > | | |
+> | Microsoft.DataMigration/register/action | Registers the subscription with the Azure Database Migration Service provider |
+> | Microsoft.DataMigration/locations/operationResults/read | Get the status of a long-running operation related to a 202 Accepted response |
+> | Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response |
> | Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve Service Operation Results | > | Microsoft.DataMigration/operations/read | Get all REST Operations |
+> | Microsoft.DataMigration/services/read | Read information about resources |
+> | Microsoft.DataMigration/services/write | Create or update resources and their properties |
+> | Microsoft.DataMigration/services/delete | Deletes a resource and all of its children |
+> | Microsoft.DataMigration/services/stop/action | Stop the Azure Database Migration Service to minimize its cost |
+> | Microsoft.DataMigration/services/start/action | Start the Azure Database Migration Service to allow it to process migrations again |
+> | Microsoft.DataMigration/services/checkStatus/action | Check whether the service is deployed and running |
+> | Microsoft.DataMigration/services/configureWorker/action | Configures an Azure Database Migration Service worker to the Service's available workers |
+> | Microsoft.DataMigration/services/addWorker/action | Adds an Azure Database Migration Service worker to the Service's available workers |
+> | Microsoft.DataMigration/services/removeWorker/action | Removes an Azure Database Migration Service worker from the Service's available workers |
+> | Microsoft.DataMigration/services/updateAgentConfig/action | Updates Azure Database Migration Service agent configuration with provided values. |
+> | Microsoft.DataMigration/services/getHybridDownloadLink/action | Gets a Azure Database Migration Service worker package download link from RP Blob Storage. |
+> | Microsoft.DataMigration/services/projects/read | Read information about resources |
+> | Microsoft.DataMigration/services/projects/write | Run Azure Database Migration Service tasks |
+> | Microsoft.DataMigration/services/projects/delete | Deletes a resource and all of its children |
+> | Microsoft.DataMigration/services/projects/accessArtifacts/action | Generate a URL that can be used to GET or PUT project artifacts |
+> | Microsoft.DataMigration/services/projects/tasks/read | Read information about resources |
+> | Microsoft.DataMigration/services/projects/tasks/write | Run Azure Database Migration Service tasks |
+> | Microsoft.DataMigration/services/projects/tasks/delete | Deletes a resource and all of its children |
+> | Microsoft.DataMigration/services/projects/tasks/cancel/action | Cancel the task if it's currently running |
+> | Microsoft.DataMigration/services/serviceTasks/read | Read information about resources |
+> | Microsoft.DataMigration/services/serviceTasks/write | Run Azure Database Migration Service tasks |
+> | Microsoft.DataMigration/services/serviceTasks/delete | Deletes a resource and all of its children |
+> | Microsoft.DataMigration/services/serviceTasks/cancel/action | Cancel the task if it's currently running |
+> | Microsoft.DataMigration/services/slots/read | Read information about resources |
+> | Microsoft.DataMigration/services/slots/write | Create or update resources and their properties |
+> | Microsoft.DataMigration/services/slots/delete | Deletes a resource and all of its children |
+> | Microsoft.DataMigration/skus/read | Get a list of SKUs supported by Azure Database Migration Service resources. |
> | Microsoft.DataMigration/sqlMigrationServices/write | Create a new or change properties of existing Service | > | Microsoft.DataMigration/sqlMigrationServices/delete | Delete existing Service | > | Microsoft.DataMigration/sqlMigrationServices/read | Retrieve details of Migration Service |
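The Microsoft.DataMigration operations listed above are control-plane action strings, so they can be referenced from the `Actions` array of a custom Azure role. Below is a minimal sketch in Python that writes such a role definition to disk; the role name, description, subscription ID, and the particular selection of read actions are placeholders chosen only for illustration, not a recommended role.

```python
import json

# Minimal sketch of a custom role that grants read-only visibility into
# Azure Database Migration Service resources. The role name, description,
# subscription ID, and the chosen actions are placeholders.
dms_reader_role = {
    "Name": "DMS Reader (example)",
    "IsCustom": True,
    "Description": "Read-only access to Azure Database Migration Service resources.",
    "Actions": [
        "Microsoft.DataMigration/services/read",
        "Microsoft.DataMigration/services/projects/read",
        "Microsoft.DataMigration/services/projects/tasks/read",
        "Microsoft.DataMigration/skus/read",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"],
}

with open("dms-reader-role.json", "w") as handle:
    json.dump(dms_reader_role, handle, indent=2)
```

Assuming the Azure CLI is available and you are signed in, a file like this could then be passed to `az role definition create --role-definition @dms-reader-role.json`.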
Azure service: [Azure Cosmos DB](../cosmos-db/index.yml)
> | Microsoft.DocumentDB/databaseAccounts/failoverPriorityChange/action | Change failover priorities of regions of a database account. This is used to perform a manual failover operation | > | Microsoft.DocumentDB/databaseAccounts/offlineRegion/action | Offline a region of a database account. | > | Microsoft.DocumentDB/databaseAccounts/onlineRegion/action | Online a region of a database account. |
+> | Microsoft.DocumentDB/databaseAccounts/refreshDelegatedResourceIdentity/action | Update existing delegate resources on database account. |
> | Microsoft.DocumentDB/databaseAccounts/delete | Deletes the database accounts. | > | Microsoft.DocumentDB/databaseAccounts/getBackupPolicy/action | Get the backup policy of database account | > | Microsoft.DocumentDB/databaseAccounts/PrivateEndpointConnectionsApproval/action | Manage a private endpoint connection of Database Account |
Azure service: [Azure Cosmos DB](../cosmos-db/index.yml)
> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/write | Create or update a MongoDB collection. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/read | Read a MongoDB collection or list all the MongoDB collections. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/delete | Delete a MongoDB collection. |
+> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/partitionMerge/action | Merge the physical partitions of a MongoDB collection |
> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/operationResults/read | Read status of the asynchronous operation. |
+> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/partitionMerge/operationResults/read | Read status of the asynchronous operation. |
> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/throughputSettings/write | Update a MongoDB collection throughput. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/throughputSettings/read | Read a MongoDB collection throughput. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/throughputSettings/migrateToAutoscale/action | Migrate MongoDB collection offer to autoscale. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/throughputSettings/migrateToManualThroughput/action | Migrate MongoDB collection offer to manual throughput. |
+> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/throughputSettings/redistributeThroughput/action | Redistribute throughput for the specified physical partitions of the MongoDB collection. |
+> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/throughputSettings/retrieveThroughputDistribution/action | Retrieve throughput for the specified physical partitions of the MongoDB collection. |
> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/throughputSettings/migrateToAutoscale/operationResults/read | Read status of the asynchronous operation. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/throughputSettings/migrateToManualThroughput/operationResults/read | Read status of the asynchronous operation. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections/throughputSettings/operationResults/read | Read status of the asynchronous operation. |
Azure service: [Azure Cosmos DB](../cosmos-db/index.yml)
> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/throughputSettings/read | Read a MongoDB database throughput. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/throughputSettings/migrateToAutoscale/action | Migrate MongoDB database offer to autoscale. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/throughputSettings/migrateToManualThroughput/action | Migrate MongoDB database offer to manual throughput. |
+> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/throughputSettings/redistributeThroughput/action | Redistribute throughput for the specified physical partitions of the MongoDB database. |
+> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/throughputSettings/retrieveThroughputDistribution/action | Retrieve throughput for the specified physical partitions of the MongoDB database. |
> | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/throughputSettings/migrateToAutoscale/operationResults/read | Read status of the asynchronous operation. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/throughputSettings/migrateToManualThroughput/operationResults/read | Read status of the asynchronous operation. | > | Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/throughputSettings/operationResults/read | Read status of the asynchronous operation. |
Azure service: [Azure Cosmos DB](../cosmos-db/index.yml)
> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/write | Create or update a SQL container. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/read | Read a SQL container or list all the SQL containers. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/delete | Delete a SQL container. |
+> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/partitionMerge/action | Merge the physical partitions of a SQL container. |
> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/operationResults/read | Read status of the asynchronous operation. |
+> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/partitionMerge/operationResults/read | Read status of the asynchronous operation. |
> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/storedProcedures/write | Create or update a SQL stored procedure. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/storedProcedures/read | Read a SQL stored procedure or list all the SQL stored procedures. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/storedProcedures/delete | Delete a SQL stored procedure. |
Azure service: [Azure Cosmos DB](../cosmos-db/index.yml)
> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings/write | Update a SQL container throughput. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings/read | Read a SQL container throughput. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings/migrateToAutoscale/action | Migrate SQL container offer to autoscale. |
-> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings/migrateToManualThroughput/action | Migrate a SQL database throughput offer to manual throughput. |
+> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings/migrateToManualThroughput/action | Migrate a SQL container throughput offer to manual throughput. |
+> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings/redistributeThroughput/action | Redistribute throughput for the specified physical partitions of the SQL container. |
+> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings/retrieveThroughputDistribution/action | Retrieve throughput information for each physical partition of the SQL container. |
> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings/migrateToAutoscale/operationResults/read | Read status of the asynchronous operation. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings/migrateToManualThroughput/operationResults/read | Read status of the asynchronous operation. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings/operationResults/read | Read status of the asynchronous operation. |
Azure service: [Azure Cosmos DB](../cosmos-db/index.yml)
> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings/read | Read a SQL database throughput. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings/migrateToAutoscale/action | Migrate SQL database offer to autoscale. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings/migrateToManualThroughput/action | Migrate a SQL database throughput offer to manual throughput. |
+> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings/redistributeThroughput/action | Redistribute throughput for the specified physical partitions of the database. |
+> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings/retrieveThroughputDistribution/action | Retrieve throughput information for each physical partition of the database. |
> | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings/migrateToAutoscale/operationResults/read | Read status of the asynchronous operation. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings/migrateToManualThroughput/operationResults/read | Read status of the asynchronous operation. | > | Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings/operationResults/read | Read status of the asynchronous operation. |
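Newly added operations such as `.../throughputSettings/redistributeThroughput/action` only take effect for principals whose role definitions cover them, either explicitly or through a wildcard in `Actions`. The sketch below approximates wildcard coverage with Python's `fnmatch` purely for illustration; treating `*` as a match for any remaining characters is an assumption, so confirm the exact Azure RBAC wildcard semantics before relying on it.

```python
from fnmatch import fnmatch

# Operation strings added in the table above.
new_operations = [
    "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings/redistributeThroughput/action",
    "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings/retrieveThroughputDistribution/action",
]

# Hypothetical Actions entries from an existing custom role definition.
role_actions = [
    "Microsoft.DocumentDB/databaseAccounts/*/throughputSettings/*",
    "Microsoft.DocumentDB/databaseAccounts/readonlykeys/action",
]

# Report, for each new operation, whether any role entry appears to cover it.
for operation in new_operations:
    covered = any(fnmatch(operation, pattern) for pattern in role_actions)
    print(f"{operation} -> covered: {covered}")
```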
Azure service: [Azure SQL Database](../azure-sql/database/index.yml), [Azure SQL
> | Microsoft.Sql/instancePools/write | Creates or updates an instance pool | > | Microsoft.Sql/instancePools/delete | Deletes an instance pool | > | Microsoft.Sql/instancePools/usages/read | Gets an instance pool's usage info |
+> | Microsoft.Sql/locations/notifyNetworkSecurityPerimeterUpdatesAvailable/action | Notify of NSP Update |
> | Microsoft.Sql/locations/deleteVirtualNetworkOrSubnets/action | Deletes Virtual network rules associated to a virtual network or subnet | > | Microsoft.Sql/locations/read | Gets the available locations for a given subscription | > | Microsoft.Sql/locations/administratorAzureAsyncOperation/read | Gets the Managed instance azure async administrator operations result. |
Azure service: [Azure SQL Database](../azure-sql/database/index.yml), [Azure SQL
> | Microsoft.Sql/locations/managedTransparentDataEncryptionOperationResults/read | Gets in-progress operations on managed database transparent data encryption | > | Microsoft.Sql/locations/networkSecurityPerimeterAssociationProxyAzureAsyncOperation/read | Get network security perimeter proxy azure async operation | > | Microsoft.Sql/locations/networkSecurityPerimeterAssociationProxyOperationResults/read | Get network security perimeter operation result |
-> | Microsoft.Sql/locations/networkSecurityPerimeterProxies/read | Get NSP Proxy |
-> | Microsoft.Sql/locations/networkSecurityPerimeterProxies/write | Create or Update NSP Proxy |
-> | Microsoft.Sql/locations/networkSecurityPerimeterProxies/delete | Delete NSP Proxy |
-> | Microsoft.Sql/locations/networkSecurityPerimeterProxies/profileProxies/read | Get NSP Profile Proxy |
-> | Microsoft.Sql/locations/networkSecurityPerimeterProxies/profileProxies/write | Create or Update NSP Profile Proxy |
-> | Microsoft.Sql/locations/networkSecurityPerimeterProxies/profileProxies/delete | Delete NSP Proxy |
-> | Microsoft.Sql/locations/networkSecurityPerimeterProxies/profileProxies/accessRuleProxies/read | Get NSP Access Rule Proxy |
-> | Microsoft.Sql/locations/networkSecurityPerimeterProxies/profileProxies/accessRuleProxies/write | Create or Update NSP Access Rule Proxy |
-> | Microsoft.Sql/locations/networkSecurityPerimeterProxies/profileProxies/accessRuleProxies/delete | Delete NSP Access Rule Proxy |
+> | Microsoft.Sql/locations/networkSecurityPerimeterUpdatesAvailableAzureAsyncOperation/read | Get network security perimeter updates available azure async operation |
> | Microsoft.Sql/locations/operationsHealth/read | Gets health status of the service operation in a location | > | Microsoft.Sql/locations/privateEndpointConnectionAzureAsyncOperation/read | Gets the result for a private endpoint connection operation | > | Microsoft.Sql/locations/privateEndpointConnectionOperationResults/read | Gets the result for a private endpoint connection operation |
Azure service: [Azure SQL Database](../azure-sql/database/index.yml), [Azure SQL
> | Microsoft.Sql/managedInstances/databases/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for managed instance databases | > | Microsoft.Sql/managedInstances/databases/queries/read | Get query text by query id | > | Microsoft.Sql/managedInstances/databases/queries/statistics/read | Get query execution statistics by query id |
-> | Microsoft.Sql/managedInstances/databases/recommendedSensitivityLabels/read | List sensitivity labels of a given database |
+> | Microsoft.Sql/managedInstances/databases/recommendedSensitivityLabels/read | List the recommended sensitivity labels for a given database |
> | Microsoft.Sql/managedInstances/databases/recommendedSensitivityLabels/write | Batch update recommended sensitivity labels | > | Microsoft.Sql/managedInstances/databases/restoreDetails/read | Returns managed database restore details while restore is in progress. | > | Microsoft.Sql/managedInstances/databases/schemas/read | Get a managed database schema. |
Azure service: [Azure SQL Database](../azure-sql/database/index.yml), [Azure SQL
> | Microsoft.Sql/managedInstances/distributedAvailabilityGroups/read | Return the list of distributed availability groups or gets the properties for the specified distributed availability group. | > | Microsoft.Sql/managedInstances/distributedAvailabilityGroups/write | Creates distributed availability groups with a specified parameters. | > | Microsoft.Sql/managedInstances/distributedAvailabilityGroups/delete | Deletes a distributed availability group. |
+> | Microsoft.Sql/managedInstances/dnsAliases/read | Return the list of Azure SQL Managed Instance Dns Aliases for the specified instance. |
+> | Microsoft.Sql/managedInstances/dnsAliases/write | Creates an Azure SQL Managed Instance Dns Alias with the specified parameters or updates the properties for the specified Azure SQL Managed Instance Dns Alias. |
+> | Microsoft.Sql/managedInstances/dnsAliases/delete | Deletes an existing Azure SQL Managed Instance Dns Alias. |
+> | Microsoft.Sql/managedInstances/dnsAliases/acquire/action | Acquire Azure SQL Managed Instance Dns Alias from another Managed Instance. |
> | Microsoft.Sql/managedInstances/encryptionProtector/revalidate/action | Update the properties for the specified Server Encryption Protector. | > | Microsoft.Sql/managedInstances/encryptionProtector/read | Returns a list of server encryption protectors or gets the properties for the specified server encryption protector. | > | Microsoft.Sql/managedInstances/encryptionProtector/write | Update the properties for the specified Server Encryption Protector. |
Azure service: [Azure SQL Database](../azure-sql/database/index.yml), [Azure SQL
> | Microsoft.Sql/managedInstances/serverTrustCertificates/delete | Delete server trust certificate with a given name | > | Microsoft.Sql/managedInstances/serverTrustCertificates/read | Return the list of server trust certificates. | > | Microsoft.Sql/managedInstances/serverTrustGroups/read | Returns the existing SQL Server Trust Groups by Managed Instance name |
-> | Microsoft.Sql/managedInstances/startStopSchedule/write | Creates Azure SQL Managed Instance's Start/Stop schedule with the specified parameters or updates the properties of the schedule for the specified instance. |
-> | Microsoft.Sql/managedInstances/startStopSchedule/delete | Deletes Azure SQL Managed Instance's Start/Stop schedule. |
-> | Microsoft.Sql/managedInstances/startStopSchedule/read | Gets Azure SQL Managed Instance's Start/Stop schedule. |
-> | Microsoft.Sql/managedInstances/startStopSchedules/read | Lists Azure SQL Managed Instance's Start/Stop schedules. |
+> | Microsoft.Sql/managedInstances/startStopSchedules/write | Creates Azure SQL Managed Instance's Start/Stop schedule with the specified parameters or updates the properties of the schedule for the specified instance. |
+> | Microsoft.Sql/managedInstances/startStopSchedules/delete | Deletes Azure SQL Managed Instance's Start/Stop schedule. |
+> | Microsoft.Sql/managedInstances/startStopSchedules/read | Get properties for specified Start/Stop schedule for the Azure SQL Managed Instance or a List of all Start/Stop schedules. |
> | Microsoft.Sql/managedInstances/topqueries/read | Get top resource consuming queries of a managed instance | > | Microsoft.Sql/managedInstances/vulnerabilityAssessments/write | Change the vulnerability assessment for a given managed instance | > | Microsoft.Sql/managedInstances/vulnerabilityAssessments/delete | Remove the vulnerability assessment for a given managed instance |
Azure service: [Azure SQL Database](../azure-sql/database/index.yml), [Azure SQL
> | Microsoft.Sql/servers/databases/queryStore/read | Returns current values of Query Store settings for the database. | > | Microsoft.Sql/servers/databases/queryStore/write | Updates Query Store setting for the database | > | Microsoft.Sql/servers/databases/queryStore/queryTexts/read | Returns the collection of query texts that correspond to the specified parameters. |
-> | Microsoft.Sql/servers/databases/recommendedSensitivityLabels/read | List sensitivity labels of a given database |
+> | Microsoft.Sql/servers/databases/recommendedSensitivityLabels/read | List the recommended sensitivity labels for a given database |
> | Microsoft.Sql/servers/databases/recommendedSensitivityLabels/write | Batch update recommended sensitivity labels | > | Microsoft.Sql/servers/databases/replicationLinks/read | Return the list of replication links or gets the properties for the specified replication links. | > | Microsoft.Sql/servers/databases/replicationLinks/delete | Terminate the replication relationship forcefully and with potential data loss |
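When entries such as the updated sensitivity-label descriptions above leave you unsure which operation strings Microsoft.Sql currently publishes, one option is to dump the provider's operations from the Azure CLI. A rough sketch, assuming the Azure CLI is installed and you are signed in; the search string is only an example.

```python
import json
import subprocess

# Dump the operations that the Microsoft.Sql provider currently publishes.
# Requires an installed Azure CLI and an authenticated session.
result = subprocess.run(
    ["az", "provider", "operation", "show",
     "--namespace", "Microsoft.Sql", "--output", "json"],
    check=True, capture_output=True, text=True,
)

def collect(node, needle, hits):
    """Recursively gather operation names that contain the search string."""
    if isinstance(node, dict):
        name = node.get("name")
        if isinstance(name, str) and needle.lower() in name.lower():
            hits.append(name)
        for value in node.values():
            collect(value, needle, hits)
    elif isinstance(node, list):
        for item in node:
            collect(item, needle, hits)

matches = []
collect(json.loads(result.stdout), "recommendedSensitivityLabels", matches)
print("\n".join(sorted(set(matches))))
```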
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/deployments/read | Reads deployments. | > | Microsoft.CognitiveServices/accounts/deployments/write | Writes deployments. | > | Microsoft.CognitiveServices/accounts/deployments/delete | Deletes deployments. |
+> | Microsoft.CognitiveServices/accounts/models/read | Reads available models. |
> | Microsoft.CognitiveServices/accounts/networkSecurityPerimeterAssociationProxies/read | Reads a network security perimeter association. | > | Microsoft.CognitiveServices/accounts/networkSecurityPerimeterAssociationProxies/write | Writes a network security perimeter association. | > | Microsoft.CognitiveServices/accounts/networkSecurityPerimeterAssociationProxies/delete | Deletes a network security perimeter association. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/deletedAccounts/read | List deleted accounts. | > | Microsoft.CognitiveServices/locations/checkSkuAvailability/action | Reads available SKUs for a subscription. | > | Microsoft.CognitiveServices/locations/deleteVirtualNetworkOrSubnets/action | Notification from Microsoft.Network of deleting VirtualNetworks or Subnets. |
-> | Microsoft.CognitiveServices/locations/commitmentTiers/read | Reads available SKUs commitment tiers |
+> | Microsoft.CognitiveServices/locations/notifyNetworkSecurityPerimeterUpdatesAvailable/action | Notification from Microsoft.Network of NetworkSecurityPerimeter updates. |
+> | Microsoft.CognitiveServices/locations/commitmentTiers/read | Reads available commitment tiers. |
> | Microsoft.CognitiveServices/locations/networkSecurityPerimeterProxies/read | Reads a network security perimeter. | > | Microsoft.CognitiveServices/locations/networkSecurityPerimeterProxies/write | Writes a network security perimeter. | > | Microsoft.CognitiveServices/locations/networkSecurityPerimeterProxies/delete | Deletes a network security perimeter. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/ComputerVision/batch/write | This internal operation creates a new batch with the specified name. | > | Microsoft.CognitiveServices/accounts/ComputerVision/batch/read | This internal operation returns the list of batches. | > | Microsoft.CognitiveServices/accounts/ComputerVision/batch/analyzestatus/read | This internal operation returns the status of the specified batch. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/datasets/read | Get information about a specific dataset. Get a list of datasets that have been registered. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/datasets/write | Register a new dataset. Update the properties of an existing dataset. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/datasets/delete | Unregister a dataset. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/deployments/write | Deploy an operation to be run on the target device. Update the properties of an existing deployment. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/deployments/delete | Delete a deployment, removing the operation from the target device. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/deployments/read | Get information about a specific deployment. Get a list of deployments that have been created. |
> | Microsoft.CognitiveServices/accounts/ComputerVision/models/read | This operation returns the list of domain-specific models that are supported by the Computer Vision API. Currently, the API supports the following domain-specific models: celebrity recognizer, landmark recognizer. | > | Microsoft.CognitiveServices/accounts/ComputerVision/models/analyze/action | This operation recognizes content within an image by applying a domain-specific model.<br> The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request.<br> Currently, the API provides the following domain-specific models: celebrities, landmarks. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/models/:cancel/action | Cancel model training. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/models/delete | Delete a custom model. A model can be deleted if it is in one of the 'Succeeded', 'Failed', or 'Canceled' states. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/models/write | Start training a custom model. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/operations/imageanalysis:analyze/action | Analyze the input image of incoming request without deployment. The request either contains image stream |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/operations/read | Get information about a specific operation. Get a list of the available operations. |
> | Microsoft.CognitiveServices/accounts/ComputerVision/read/analyze/action | Use this interface to perform a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents.<br>It can handle hand-written, printed or mixed documents.<br>When you use the Read interface, the response contains a header called 'Operation-Location'.<br>The 'Operation-Location' header contains the URL that you must use for your Get Read Result operation to access OCR results.** | > | Microsoft.CognitiveServices/accounts/ComputerVision/read/analyzeresults/read | Use this interface to retrieve the status and OCR result of a Read operation. The URL containing the 'operationId' is returned in the Read operation 'Operation-Location' response header.* | > | Microsoft.CognitiveServices/accounts/ComputerVision/read/core/asyncbatchanalyze/action | Use this interface to get the result of a Batch Read File operation, employing the state-of-the-art Optical Character |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/ImageSearch/search/action | Get relevant images for a given query. | > | Microsoft.CognitiveServices/accounts/ImageSearch/trending/action | Get currently trending images. | > | Microsoft.CognitiveServices/accounts/ImmersiveReader/getcontentmodelforreader/action | Creates an Immersive Reader session |
+> | Microsoft.CognitiveServices/accounts/Language/query-dataverse/action | Query Dataverse |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/action | Answer Knowledgebase. |
+> | Microsoft.CognitiveServices/accounts/Language/query-text/action | Answer Text. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/internal/projects/export/jobs/result/read | Get export job result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/internal/projects/models/read | Get a trained model info. Get trained models info.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/write | Creates a new project or updates an existing project. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/delete | Deletes a project. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/read | Gets a project info. Returns the list of projects.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/export/action | Triggers a job to export project data in JSON format. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/import/action | Triggers a job to import a project in JSON format. If a project with the same name already exists, the data of that project is replaced. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/train/action | Trigger training job. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/deletion/jobs/read | Get project deletion job status and result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/deployments/read | Get a deployment info. List all deployments.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/deployments/delete | Delete a deployment. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/deployments/write | Trigger a new deployment or replace an existing one. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/deployments/swap/action | Trigger job to swap two deployments. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/deployments/jobs/read | Get deployment job status and result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/deployments/swap/jobs/read | Gets a swap deployment job status and result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/export/jobs/read | Get export job status details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/export/jobs/result/read | Get export job result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/global/deletion-jobs/read | Get project deletion job status and result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/global/languages/read | Get List of Supported languages. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/global/prebuilts/read | Get list of Supported prebuilts for conversational projects. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/import/jobs/read | Get import or replace project job status and result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/languages/read | Get List of Supported languages. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/models/delete | Delete a trained model. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/models/read | Get a trained model info. List all trained models.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/models/evaluation/read | Get trained model evaluation report. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/models/verification/read | Get trained model verification report. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/prebuilts/read | Get list of Supported prebuilts for conversational projects. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/projects/train/jobs/read | Get training jobs. Get training job status and result details.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/internal/projects/export/jobs/result/read | Get export job result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/internal/projects/models/read | Get a trained model info. Get trained models info.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/write | Creates a new project or updates an existing project. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/delete | Deletes a project. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/read | Gets a project info. Returns the list of projects.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/export/action | Triggers a job to export project data in JSON format. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/import/action | Triggers a job to import a project in JSON format. If a project with the same name already exists, the data of that project is replaced. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/train/action | Trigger training job. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/deletion/jobs/read | Get project deletion job status and result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/deployments/read | Get a deployment info. List all deployments.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/deployments/delete | Delete a deployment. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/deployments/write | Trigger a new deployment or replace an existing one. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/deployments/swap/action | Trigger job to swap two deployments. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/deployments/jobs/read | Get deployment job status and result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/deployments/swap/jobs/read | Gets a swap deployment job status and result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/export/jobs/read | Get export job status details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/export/jobs/result/read | Get export job result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/global/deletion-jobs/read | Get project deletion job status and result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/global/languages/read | Get List of Supported languages. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/import/jobs/read | Get import or replace project job status and result details. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/languages/read | Get List of Supported languages. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/models/delete | Delete a trained model. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/models/read | Get a trained model info. List all trained models.* |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/models/evaluation/read | Get trained model evaluation report. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/models/verification/read | Get trained model verification report. |
+> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/train/jobs/read | Get training jobs. Get training job status and result details.* |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/read | List Projects. Get Project Details.* |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/write | Create Project. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/delete | Delete Project. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/export/action | Export Project. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/import/action | Import Project. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/feedback/action | Train Active Learning. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/deletion-jobs/read | Get Deletion Job Status. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/deployments/read | Get Project Deployment. List Deployments.* |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/deployments/write | Deploy Project. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/deployments/jobs/read | Get Deploy Job Status. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/export/jobs/read | Get Export Job Status. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/export/jobs/result/read | Get Export Job Result. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/import/jobs/read | Get Import Job Status. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/qnas/read | Get QnAs. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/qnas/write | Update QnAs. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/qnas/jobs/read | Get Update QnAs Job Status. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/sources/read | Get Sources. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/sources/write | Update Sources. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/sources/jobs/read | Get Update Sources Job Status. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/synonyms/read | Get Synonyms. |
+> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/synonyms/write | Update Synonyms. |
> | Microsoft.CognitiveServices/accounts/LanguageAuthoring/projects/action | Creates a new project. | > | Microsoft.CognitiveServices/accounts/LanguageAuthoring/projects/delete | Deletes a project. | > | Microsoft.CognitiveServices/accounts/LanguageAuthoring/projects/read | Returns a project. Returns the list of projects.* |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/metrics/series/query/action | List series (dimension combinations) from metric | > | Microsoft.CognitiveServices/accounts/MetricsAdvisor/metrics/status/enrichment/anomalydetection/query/action | Query anomaly detection status | > | Microsoft.CognitiveServices/accounts/MetricsAdvisor/stats/latest/read | Get latest usage stats |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/write | Create or update a time series group |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/delete | Delete a time series group |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/read | Get a time series group |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/write | Create or update an application instance to a time series group |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/delete | Delete an application instance from a time series group |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/read | Get a time series group's application instance |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/inference/action | Inference time series group application instance model |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/train/action | Train time series group application instance model |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/history/read | Get the running result history from a time series group application instance by its id |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/inferencescore/read | Get the inference score values from a time series group application instance |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/inferenceseverity/read | Get the inference severity values from a time series group application instance |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/latestresult/read | Get the latest running result from a time series group application instance by its id |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/modelstate/read | Get time series group application instance model state |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/ops/read | Get time series group application instance operation records |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/appinstances/ops/inferencestatus/read | Get time series group application instance inference status |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/seriessets/write | Add or update a time series set to a time series group |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/seriessets/delete | Delete a time series set from a time series group |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/timeseriesgroups/seriessets/read | Get a time series set |
> | Microsoft.CognitiveServices/accounts/NewsSearch/categorysearch/action | Returns news for a provided category. | > | Microsoft.CognitiveServices/accounts/NewsSearch/search/action | Get news articles relevant for a given query. | > | Microsoft.CognitiveServices/accounts/NewsSearch/trendingtopics/action | Get trending topics identified by Bing. These are the same topics shown in the banner at the bottom of the Bing home page. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/deployments/action | Creates a deployment owned by the Azure OpenAI resource. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/files/action | Creates a file owned by the Azure OpenAI resource by uploading data. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/fine-tunes/action | Creates a fine-tuned model. |
> | Microsoft.CognitiveServices/accounts/OpenAI/deployments/search/action | Search for the most relevant documents using the current engine. |
-> | Microsoft.CognitiveServices/accounts/OpenAI/deployments/completions/action | Create a completion from a chosen model |
+> | Microsoft.CognitiveServices/accounts/OpenAI/deployments/completions/action | Create a completion from a chosen model. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/deployments/delete | Deletes a deployment. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/deployments/read | Gets information about deployments. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/deployments/write | Updates deployments. |
> | Microsoft.CognitiveServices/accounts/OpenAI/engines/read | Read engine information. | > | Microsoft.CognitiveServices/accounts/OpenAI/engines/completions/action | Create a completion from a chosen model | > | Microsoft.CognitiveServices/accounts/OpenAI/engines/search/action | Search for the most relevant documents using the current engine. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/files/delete | Deletes files. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/files/read | Gets information about files. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/files/import/action | Creates a file by uploading data. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/files/content/read | Gets the content of the specified file. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/fine-tunes/cancel/action | Cancels the adaptation of a fine-tuned model. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/fine-tunes/delete | Deletes a fine-tuned model. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/fine-tunes/read | Gets information about fine-tuned models. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/fine-tunes/events/read | Gets event information for a fine-tuning model adaptation. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/models/read | Gets information about fine-tuned models. |
> | Microsoft.CognitiveServices/accounts/Personalizer/rank/action | A personalization rank request. | > | Microsoft.CognitiveServices/accounts/Personalizer/evaluations/action | Submit a new evaluation. | > | Microsoft.CognitiveServices/accounts/Personalizer/configurations/client/action | Get the client configuration. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/Personalizer/events/reward/action | Report reward between 0 and 1 that resulted from using the action specified in rewardActionId, for the specified event. | > | Microsoft.CognitiveServices/accounts/Personalizer/logs/delete | Deletes all the logs. | > | Microsoft.CognitiveServices/accounts/Personalizer/logs/delete | Delete all logs of Rank and Reward calls stored by Personalizer. |
+> | Microsoft.CognitiveServices/accounts/Personalizer/logs/interactions/action | The endpoint is intended to be used from within an SDK for logging interactions and accepts the specific format defined in https://github.com/VowpalWabbit/reinforcement_learning. This endpoint should not be used by the customer. |
+> | Microsoft.CognitiveServices/accounts/Personalizer/logs/observations/action | The endpoint is intended to be used from within an SDK for logging observations and accepts the specific format defined in https://github.com/VowpalWabbit/reinforcement_learning. This endpoint should not be used by the customer. |
> | Microsoft.CognitiveServices/accounts/Personalizer/logs/properties/read | Gets logs properties. | > | Microsoft.CognitiveServices/accounts/Personalizer/logs/properties/read | Get properties of the Personalizer logs. | > | Microsoft.CognitiveServices/accounts/Personalizer/model/read | Get current model. | > | Microsoft.CognitiveServices/accounts/Personalizer/model/delete | Resets the model. | > | Microsoft.CognitiveServices/accounts/Personalizer/model/read | Get the model file generated by Personalizer service. | > | Microsoft.CognitiveServices/accounts/Personalizer/model/delete | Resets the model file generated by Personalizer service. |
+> | Microsoft.CognitiveServices/accounts/Personalizer/model/write | Replace the existing model file for the Personalizer service. |
> | Microsoft.CognitiveServices/accounts/Personalizer/model/properties/read | Get model properties. | > | Microsoft.CognitiveServices/accounts/Personalizer/model/properties/read | Get properties of the model file generated by Personalizer service. | > | Microsoft.CognitiveServices/accounts/Personalizer/multislot/rank/action | Submit a Personalizer multi-slot rank request. Receives a context, a list of actions, and a list of slots. Returns which of the provided actions should be used in each slot, in each rewardActionId. |
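Most of the Cognitive Services entries above, including the Azure OpenAI operations such as `Microsoft.CognitiveServices/accounts/OpenAI/deployments/read`, are data-plane operations, which in a custom role typically belong under `DataActions` rather than `Actions`. A minimal sketch follows; the role name and scope are placeholders, and whether each string is accepted as a `DataAction` should be verified against the current Cognitive Services built-in roles.

```python
import json

# Minimal sketch of a custom role limited to a few Azure OpenAI data-plane
# operations. Name, description, and scope are placeholders; verify against
# the current built-in roles that these strings are valid DataActions.
openai_user_role = {
    "Name": "OpenAI Completions User (example)",
    "IsCustom": True,
    "Description": "Read deployments and create completions on Azure OpenAI accounts.",
    "Actions": [],
    "DataActions": [
        "Microsoft.CognitiveServices/accounts/OpenAI/deployments/read",
        "Microsoft.CognitiveServices/accounts/OpenAI/deployments/completions/action",
        "Microsoft.CognitiveServices/accounts/OpenAI/models/read",
    ],
    "NotDataActions": [],
    "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"],
}

print(json.dumps(openai_user_role, indent=2))
```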
Azure service: [IoT security](../iot-fundamentals/iot-security-architecture.md)
> | Microsoft.IoTSecurity/locations/deviceGroups/recommendations/read | Gets IoT Recommendations | > | Microsoft.IoTSecurity/locations/deviceGroups/recommendations/write | Updates IoT Recommendation properties | > | Microsoft.IoTSecurity/locations/deviceGroups/vulnerabilities/read | Gets device vulnerabilities |
+> | Microsoft.IoTSecurity/locations/remoteConfigurations/read | Gets remote configuration |
+> | Microsoft.IoTSecurity/locations/remoteConfigurations/write | Creates remote configuration |
+> | Microsoft.IoTSecurity/locations/remoteConfigurations/delete | Deletes remote configuration |
+> | Microsoft.IoTSecurity/locations/sensors/read | Gets IoT Sensors |
> | Microsoft.IoTSecurity/locations/sites/read | Gets IoT site | > | Microsoft.IoTSecurity/locations/sites/write | Creates IoT site | > | Microsoft.IoTSecurity/locations/sites/delete | Deletes IoT site |
Azure service: [Notification Hubs](../notification-hubs/index.yml)
> | Microsoft.NotificationHubs/CheckNamespaceAvailability/action | Checks whether or not a given Namespace resource name is available within the NotificationHub service. | > | Microsoft.NotificationHubs/Namespaces/write | Create a Namespace Resource and Update its properties. Tags and Capacity of the Namespace are the properties which can be updated. | > | Microsoft.NotificationHubs/Namespaces/read | Get the list of Namespace Resource Description |
-> | Microsoft.NotificationHubs/Namespaces/Delete | Delete Namespace Resource |
+> | Microsoft.NotificationHubs/Namespaces/delete | Delete Namespace Resource |
> | Microsoft.NotificationHubs/Namespaces/authorizationRules/action | Get the list of Namespaces Authorization Rules description. | > | Microsoft.NotificationHubs/Namespaces/CheckNotificationHubAvailability/action | Checks whether or not a given NotificationHub name is available inside a Namespace. |
+> | Microsoft.NotificationHubs/namespaces/privateEndpointConnectionsApproval/action | Approve Private Endpoint Connection |
> | Microsoft.NotificationHubs/Namespaces/authorizationRules/write | Create a Namespace level Authorization Rules and update its properties. The Authorization Rules Access Rights, the Primary and Secondary Keys can be updated. | > | Microsoft.NotificationHubs/Namespaces/authorizationRules/read | Get the list of Namespaces Authorization Rules description. |
-> | Microsoft.NotificationHubs/Namespaces/authorizationRules/delete | Delete Namespace Authorization Rule. The Default Namespace Authorization Rule cannot be deleted. |
+> | Microsoft.NotificationHubs/Namespaces/authorizationRules/delete | Delete Namespace Authorization Rule. The Default Namespace Authorization Rule cannot be deleted. |
> | Microsoft.NotificationHubs/Namespaces/authorizationRules/listkeys/action | Get the Connection String to the Namespace | > | Microsoft.NotificationHubs/Namespaces/authorizationRules/regenerateKeys/action | Namespace Authorization Rule Regenerate Primary/SecondaryKey, Specify the Key that needs to be regenerated |
-> | Microsoft.NotificationHubs/namespaces/diagnosticSettings/read | Get list of Namespace diagnostic settings Resource Descriptions |
-> | Microsoft.NotificationHubs/namespaces/diagnosticSettings/write | Get list of Namespace diagnostic settings Resource Descriptions |
-> | Microsoft.NotificationHubs/namespaces/logDefinitions/read | Get list of Namespace logs Resource Descriptions |
> | Microsoft.NotificationHubs/Namespaces/NotificationHubs/write | Create a Notification Hub and Update its properties. Its properties mainly include PNS Credentials. Authorization Rules and TTL | > | Microsoft.NotificationHubs/Namespaces/NotificationHubs/read | Get list of Notification Hub Resource Descriptions |
-> | Microsoft.NotificationHubs/Namespaces/NotificationHubs/Delete | Delete Notification Hub Resource |
+> | Microsoft.NotificationHubs/Namespaces/NotificationHubs/delete | Delete Notification Hub Resource |
> | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/action | Get the list of Notification Hub Authorization Rules | > | Microsoft.NotificationHubs/Namespaces/NotificationHubs/pnsCredentials/action | Get All Notification Hub PNS Credentials. This includes, WNS, MPNS, APNS, GCM and Baidu credentials |
-> | Microsoft.NotificationHubs/Namespaces/NotificationHubs/debugSend/action | Send a test push notification. |
+> | Microsoft.NotificationHubs/Namespaces/NotificationHubs/debugSend/action | Send a test push notification to 10 matched devices. |
> | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/write | Create Notification Hub Authorization Rules and Update its properties. The Authorization Rules Access Rights, the Primary and Secondary Keys can be updated. | > | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/read | Get the list of Notification Hub Authorization Rules | > | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/delete | Delete Notification Hub Authorization Rules | > | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/listkeys/action | Get the Connection String to the Notification Hub | > | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/regenerateKeys/action | Notification Hub Authorization Rule Regenerate Primary/SecondaryKey, Specify the Key that needs to be regenerated | > | Microsoft.NotificationHubs/Namespaces/NotificationHubs/metricDefinitions/read | Get list of Namespace metrics Resource Descriptions |
+> | Microsoft.NotificationHubs/namespaces/privateEndpointConnectionProxies/validate/action | Validate Private Endpoint Connection Proxy |
+> | Microsoft.NotificationHubs/namespaces/privateEndpointConnectionProxies/read | Get Private Endpoint Connection Proxy |
+> | Microsoft.NotificationHubs/namespaces/privateEndpointConnectionProxies/write | Create Private Endpoint Connection Proxy |
+> | Microsoft.NotificationHubs/namespaces/privateEndpointConnectionProxies/delete | Delete Private Endpoint Connection Proxy |
+> | Microsoft.NotificationHubs/namespaces/privateEndpointConnectionProxies/operationstatus/read | Get the status of an asynchronous private endpoint operation |
+> | Microsoft.NotificationHubs/namespaces/privateEndpointConnections/read | Get Private Endpoint Connection |
+> | Microsoft.NotificationHubs/namespaces/privateEndpointConnections/write | Create or Update Private Endpoint Connection |
+> | Microsoft.NotificationHubs/namespaces/privateEndpointConnections/delete | Removes Private Endpoint Connection |
+> | Microsoft.NotificationHubs/namespaces/privateEndpointConnections/operationstatus/read | Get the status of an asynchronous private endpoint connection operation |
+> | Microsoft.NotificationHubs/namespaces/providers/Microsoft.Insights/diagnosticSettings/read | Get Namespace diagnostic settings |
+> | Microsoft.NotificationHubs/namespaces/providers/Microsoft.Insights/diagnosticSettings/write | Create or Update Namespace diagnostic settings |
+> | Microsoft.NotificationHubs/namespaces/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for Namespace |
> | Microsoft.NotificationHubs/operationResults/read | Returns operation results for Notification Hubs provider | > | Microsoft.NotificationHubs/operations/read | Returns a list of supported operations for Notification Hubs provider |
Azure service: [Microsoft Sentinel](../sentinel/index.yml)
> | Microsoft.SecurityInsights/onboardingStates/write | Updates an onboarding state | > | Microsoft.SecurityInsights/onboardingStates/delete | Deletes an onboarding state | > | Microsoft.SecurityInsights/operations/read | Gets operations |
+> | Microsoft.SecurityInsights/securityMLAnalyticsSettings/read | Gets the analytics settings |
+> | Microsoft.SecurityInsights/securityMLAnalyticsSettings/write | Update the analytics settings |
+> | Microsoft.SecurityInsights/securityMLAnalyticsSettings/delete | Delete an analytics setting |
> | Microsoft.SecurityInsights/settings/read | Gets settings | > | Microsoft.SecurityInsights/settings/write | Updates settings | > | Microsoft.SecurityInsights/settings/delete | Deletes setting |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/listKeys/read | Retrieves the list keys for the workspace. These keys are used to connect Microsoft Operational Insights agents to the workspace. | > | Microsoft.OperationalInsights/workspaces/managementGroups/read | Gets the names and metadata for System Center Operations Manager management groups connected to this workspace. | > | Microsoft.OperationalInsights/workspaces/metricDefinitions/read | Get Metric Definitions under workspace |
+> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterAssociationProxies/read | Read Network Security Perimeter Association Proxies |
+> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterAssociationProxies/write | Write Network Security Perimeter Association Proxies |
+> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterAssociationProxies/delete | Delete Network Security Perimeter Association Proxies |
+> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterConfigurations/read | Read Network Security Perimeter Configurations |
+> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterConfigurations/write | Write Network Security Perimeter Configurations |
+> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterConfigurations/delete | Delete Network Security Perimeter Configurations |
> | Microsoft.OperationalInsights/workspaces/notificationSettings/read | Get the user's notification settings for the workspace. | > | Microsoft.OperationalInsights/workspaces/notificationSettings/write | Set the user's notification settings for the workspace. | > | Microsoft.OperationalInsights/workspaces/notificationSettings/delete | Delete the user's notification settings for the workspace. |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesAccountLogon/read | Read data from the AADDomainServicesAccountLogon table | > | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesAccountManagement/read | Read data from the AADDomainServicesAccountManagement table | > | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesDirectoryServiceAccess/read | Read data from the AADDomainServicesDirectoryServiceAccess table |
-> | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesLogonLogoff/read | Read data from the AADDomainServicesLogonLogoff table |
> | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesPolicyChange/read | Read data from the AADDomainServicesPolicyChange table | > | Microsoft.OperationalInsights/workspaces/query/AADDomainServicesPrivilegeUse/read | Read data from the AADDomainServicesPrivilegeUse table | > | Microsoft.OperationalInsights/workspaces/query/AADManagedIdentitySignInLogs/read | Read data from the AADManagedIdentitySignInLogs table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/ADXQuery/read | Read data from the ADXQuery table | > | Microsoft.OperationalInsights/workspaces/query/ADXTableDetails/read | Read data from the ADXTableDetails table | > | Microsoft.OperationalInsights/workspaces/query/ADXTableUsageStatistics/read | Read data from the ADXTableUsageStatistics table |
+> | Microsoft.OperationalInsights/workspaces/query/AegDataPlaneRequests/read | Read data from the AegDataPlaneRequests table |
> | Microsoft.OperationalInsights/workspaces/query/AegDeliveryFailureLogs/read | Read data from the AegDeliveryFailureLogs table | > | Microsoft.OperationalInsights/workspaces/query/AegPublishFailureLogs/read | Read data from the AegPublishFailureLogs table | > | Microsoft.OperationalInsights/workspaces/query/AEWAuditLogs/read | Read data from the AEWAuditLogs table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AgriFoodProviderAuthLogs/read | Read data from the AgriFoodProviderAuthLogs table | > | Microsoft.OperationalInsights/workspaces/query/AgriFoodSatelliteLogs/read | Read data from the AgriFoodSatelliteLogs table | > | Microsoft.OperationalInsights/workspaces/query/AgriFoodWeatherLogs/read | Read data from the AgriFoodWeatherLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/AGSGrafanaLoginEvents/read | Read data from the AGSGrafanaLoginEvents table |
> | Microsoft.OperationalInsights/workspaces/query/AirflowDagProcessingLogs/read | Read data from the AirflowDagProcessingLogs table | > | Microsoft.OperationalInsights/workspaces/query/Alert/read | Read data from the Alert table | > | Microsoft.OperationalInsights/workspaces/query/AlertEvidence/read | Read data from the AlertEvidence table |
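The `Microsoft.OperationalInsights/workspaces/query/<TableName>/read` operations above are the building blocks for table-level access control in Log Analytics: a custom role can grant query access to selected tables only. A minimal sketch with placeholder values follows; the companion workspace actions required alongside the per-table reads should be verified against the Log Analytics access-control documentation.

```python
import json

# Minimal sketch of a table-scoped reader role for a Log Analytics workspace.
# The table names come from the list above and are examples only; the set of
# companion workspace actions should be checked against the access-control docs.
table_reader_role = {
    "Name": "Alert Tables Reader (example)",
    "IsCustom": True,
    "Description": "Query only the Alert and AlertEvidence tables in a workspace.",
    "Actions": [
        "Microsoft.OperationalInsights/workspaces/read",
        "Microsoft.OperationalInsights/workspaces/query/read",
        "Microsoft.OperationalInsights/workspaces/query/Alert/read",
        "Microsoft.OperationalInsights/workspaces/query/AlertEvidence/read",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"],
}

print(json.dumps(table_reader_role, indent=2))
```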
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AzureDevOpsAuditing/read | Read data from the AzureDevOpsAuditing table | > | Microsoft.OperationalInsights/workspaces/query/AzureDiagnostics/read | Read data from the AzureDiagnostics table | > | Microsoft.OperationalInsights/workspaces/query/AzureMetrics/read | Read data from the AzureMetrics table |
-> | Microsoft.OperationalInsights/workspaces/query/BaiClusterEvent/read | Read data from the BaiClusterEvent table |
> | Microsoft.OperationalInsights/workspaces/query/BaiClusterNodeEvent/read | Read data from the BaiClusterNodeEvent table | > | Microsoft.OperationalInsights/workspaces/query/BaiJobEvent/read | Read data from the BaiJobEvent table | > | Microsoft.OperationalInsights/workspaces/query/BehaviorAnalytics/read | Read data from the BehaviorAnalytics table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/CoreAzureBackup/read | Read data from the CoreAzureBackup table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksAccounts/read | Read data from the DatabricksAccounts table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksClusters/read | Read data from the DatabricksClusters table |
+> | Microsoft.OperationalInsights/workspaces/query/DatabricksDatabricksSQL/read | Read data from the DatabricksDatabricksSQL table |
> | Microsoft.OperationalInsights/workspaces/query/DatabricksDBFS/read | Read data from the DatabricksDBFS table |
+> | Microsoft.OperationalInsights/workspaces/query/DatabricksDeltaPipelines/read | Read data from the DatabricksDeltaPipelines table |
> | Microsoft.OperationalInsights/workspaces/query/DatabricksFeatureStore/read | Read data from the DatabricksFeatureStore table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksGenie/read | Read data from the DatabricksGenie table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksGlobalInitScripts/read | Read data from the DatabricksGlobalInitScripts table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/DatabricksJobs/read | Read data from the DatabricksJobs table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksMLflowAcledArtifact/read | Read data from the DatabricksMLflowAcledArtifact table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksMLflowExperiment/read | Read data from the DatabricksMLflowExperiment table |
+> | Microsoft.OperationalInsights/workspaces/query/DatabricksModelRegistry/read | Read data from the DatabricksModelRegistry table |
> | Microsoft.OperationalInsights/workspaces/query/DatabricksNotebook/read | Read data from the DatabricksNotebook table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksRemoteHistoryService/read | Read data from the DatabricksRemoteHistoryService table |
+> | Microsoft.OperationalInsights/workspaces/query/DatabricksRepos/read | Read data from the DatabricksRepos table |
> | Microsoft.OperationalInsights/workspaces/query/DatabricksSecrets/read | Read data from the DatabricksSecrets table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksSQL/read | Read data from the DatabricksSQL table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksSQLPermissions/read | Read data from the DatabricksSQLPermissions table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksSSH/read | Read data from the DatabricksSSH table |
+> | Microsoft.OperationalInsights/workspaces/query/DatabricksUnityCatalog/read | Read data from the DatabricksUnityCatalog table |
> | Microsoft.OperationalInsights/workspaces/query/DatabricksWorkspace/read | Read data from the DatabricksWorkspace table | > | Microsoft.OperationalInsights/workspaces/query/dependencies/read | Read data from the dependencies table | > | Microsoft.OperationalInsights/workspaces/query/DeviceAppCrash/read | Read data from the DeviceAppCrash table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/NWConnectionMonitorPathResult/read | Read data from the NWConnectionMonitorPathResult table | > | Microsoft.OperationalInsights/workspaces/query/NWConnectionMonitorTestResult/read | Read data from the NWConnectionMonitorTestResult table | > | Microsoft.OperationalInsights/workspaces/query/OfficeActivity/read | Read data from the OfficeActivity table |
+> | Microsoft.OperationalInsights/workspaces/query/OLPSupplyChainEntityOperations/read | Read data from the OLPSupplyChainEntityOperations table |
> | Microsoft.OperationalInsights/workspaces/query/Operation/read | Read data from the Operation table | > | Microsoft.OperationalInsights/workspaces/query/Perf/read | Read data from the Perf table | > | Microsoft.OperationalInsights/workspaces/query/PowerBIActivity/read | Read data from the PowerBIActivity table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/SqlVulnerabilityAssessmentResult/read | Read data from the SqlVulnerabilityAssessmentResult table | > | Microsoft.OperationalInsights/workspaces/query/SqlVulnerabilityAssessmentScanStatus/read | Read data from the SqlVulnerabilityAssessmentScanStatus table | > | Microsoft.OperationalInsights/workspaces/query/StorageBlobLogs/read | Read data from the StorageBlobLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/StorageCacheOperationEvents/read | Read data from the StorageCacheOperationEvents table |
+> | Microsoft.OperationalInsights/workspaces/query/StorageCacheUpgradeEvents/read | Read data from the StorageCacheUpgradeEvents table |
+> | Microsoft.OperationalInsights/workspaces/query/StorageCacheWarningEvents/read | Read data from the StorageCacheWarningEvents table |
> | Microsoft.OperationalInsights/workspaces/query/StorageFileLogs/read | Read data from the StorageFileLogs table | > | Microsoft.OperationalInsights/workspaces/query/StorageQueueLogs/read | Read data from the StorageQueueLogs table | > | Microsoft.OperationalInsights/workspaces/query/StorageTableLogs/read | Read data from the StorageTableLogs table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/WUDOStatus/read | Read data from the WUDOStatus table | > | Microsoft.OperationalInsights/workspaces/query/WVDAgentHealthStatus/read | Read data from the WVDAgentHealthStatus table | > | Microsoft.OperationalInsights/workspaces/query/WVDCheckpoints/read | Read data from the WVDCheckpoints table |
+> | Microsoft.OperationalInsights/workspaces/query/WVDConnectionNetworkData/read | Read data from the WVDConnectionNetworkData table |
> | Microsoft.OperationalInsights/workspaces/query/WVDConnections/read | Read data from the WVDConnections table | > | Microsoft.OperationalInsights/workspaces/query/WVDErrors/read | Read data from the WVDErrors table | > | Microsoft.OperationalInsights/workspaces/query/WVDFeeds/read | Read data from the WVDFeeds table | > | Microsoft.OperationalInsights/workspaces/query/WVDHostRegistrations/read | Read data from the WVDHostRegistrations table | > | Microsoft.OperationalInsights/workspaces/query/WVDManagement/read | Read data from the WVDManagement table |
+> | Microsoft.OperationalInsights/workspaces/query/WVDSessionHostManagement/read | Read data from the WVDSessionHostManagement table |
> | microsoft.operationalinsights/workspaces/rules/read | Get all alert rules. | > | Microsoft.OperationalInsights/workspaces/savedSearches/read | Gets a saved search query | > | Microsoft.OperationalInsights/workspaces/savedSearches/write | Creates a saved search query |
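Each table-scoped `query/<TableName>/read` operation above gates read access to exactly one Log Analytics table. As a minimal, hedged illustration (the query, its time window, and the summarization are the editor's assumptions, not part of the upstream change), a principal granted `Microsoft.OperationalInsights/workspaces/query/StorageBlobLogs/read` can run a query like the following, while the same query fails authorization without that operation:

```kql
// Requires .../workspaces/query/StorageBlobLogs/read on the target workspace.
// Summarize recent blob operations by operation name and status code.
StorageBlobLogs
| where TimeGenerated > ago(1h)
| summarize Requests = count() by OperationName, StatusCode
| order by Requests desc
```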
Azure service: core
> | Microsoft.Capacity/reservationorders/return/action | Return any Reservation | > | Microsoft.Capacity/reservationorders/swap/action | Swap any Reservation | > | Microsoft.Capacity/reservationorders/split/action | Split any Reservation |
+> | Microsoft.Capacity/reservationorders/changeBilling/action | Reservation billing change |
> | Microsoft.Capacity/reservationorders/merge/action | Merge any Reservation | > | Microsoft.Capacity/reservationorders/calculaterefund/action | Computes the refund amount and price of new purchase and returns policy Errors. |
+> | Microsoft.Capacity/reservationorders/changebillingoperationresults/read | Poll any Reservation billing change operation |
> | Microsoft.Capacity/reservationorders/mergeoperationresults/read | Poll any merge operation | > | Microsoft.Capacity/reservationorders/reservations/availablescopes/action | Find any Available Scope | > | Microsoft.Capacity/reservationorders/reservations/read | Read All Reservations |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Action | Description | > | | | > | Microsoft.RecoveryServices/register/action | Registers subscription for given Resource Provider |
-> | microsoft.recoveryservices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
-> | microsoft.recoveryservices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupPreValidateProtection/action | |
-> | microsoft.recoveryservices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
-> | microsoft.recoveryservices/Locations/backupValidateFeatures/action | Validate Features |
+> | Microsoft.RecoveryServices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
+> | Microsoft.RecoveryServices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupPreValidateProtection/action | |
+> | Microsoft.RecoveryServices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
+> | Microsoft.RecoveryServices/Locations/backupValidateFeatures/action | Validate Features |
> | Microsoft.RecoveryServices/locations/allocateStamp/action | AllocateStamp is internal operation used by service | > | Microsoft.RecoveryServices/locations/checkNameAvailability/action | Check Resource Name Availability is an API to check if resource name is available | > | Microsoft.RecoveryServices/locations/allocatedStamp/read | GetAllocatedStamp is internal operation used by service |
-> | microsoft.recoveryservices/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
-> | microsoft.recoveryservices/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupProtectedItem/write | Create a backup Protected Item |
-> | microsoft.recoveryservices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | Microsoft.RecoveryServices/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
+> | Microsoft.RecoveryServices/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupProtectedItem/write | Create a backup Protected Item |
+> | Microsoft.RecoveryServices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
> | Microsoft.RecoveryServices/locations/operationStatus/read | Gets Operation Status for a given Operation | > | Microsoft.RecoveryServices/operations/read | Operation returns the list of Operations for a Resource Provider |
-> | microsoft.recoveryservices/Vaults/backupJobsExport/action | Export Jobs |
-> | microsoft.recoveryservices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
-> | microsoft.recoveryservices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupJobsExport/action | Export Jobs |
+> | Microsoft.RecoveryServices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/write | Create Vault operation creates an Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/read | The Get Vault operation gets an object representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/delete | The Delete Vault operation deletes the specified Azure resource of type 'vault' |
-> | microsoft.recoveryservices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
-> | microsoft.recoveryservices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
-> | microsoft.recoveryservices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
-> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
-> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
-> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
-> | microsoft.recoveryservices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
-> | microsoft.recoveryservices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
-> | microsoft.recoveryservices/Vaults/backupJobs/cancel/action | Cancel the Job |
-> | microsoft.recoveryservices/Vaults/backupJobs/read | Returns all Job Objects |
-> | microsoft.recoveryservices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
-> | microsoft.recoveryservices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
-> | microsoft.recoveryservices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupPolicies/delete | Delete a Protection Policy |
-> | microsoft.recoveryservices/Vaults/backupPolicies/read | Returns all Protection Policies |
-> | microsoft.recoveryservices/Vaults/backupPolicies/write | Creates Protection Policy |
-> | microsoft.recoveryservices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
-> | microsoft.recoveryservices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
-> | microsoft.recoveryservices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
-> | microsoft.recoveryservices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
-> | microsoft.recoveryservices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
-> | microsoft.recoveryservices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services . |
-> | microsoft.recoveryservices/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
-> | microsoft.recoveryservices/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
+> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
+> | Microsoft.RecoveryServices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/cancel/action | Cancel the Job |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/read | Returns all Job Objects |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/delete | Delete a Protection Policy |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/read | Returns all Protection Policies |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/write | Creates Protection Policy |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
+> | Microsoft.RecoveryServices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
+> | Microsoft.RecoveryServices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
+> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/certificates/write | The Update Resource Certificate operation updates the resource/vault credential certificate. | > | Microsoft.RecoveryServices/Vaults/extendedInformation/read | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/extendedInformation/write | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/Vaults/monitoringAlerts/write | Resolves the alert. | > | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/read | Gets the Recovery services vault notification configuration. | > | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/write | Configures e-mail notifications to Recovery services vault. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
> | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/read | Azure Backup Diagnostics | > | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/write | Azure Backup Diagnostics | > | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/logDefinitions/read | Azure Backup Logs |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/replicationVaultSettings/read | Read any | > | Microsoft.RecoveryServices/vaults/replicationVaultSettings/write | Create or Update any | > | Microsoft.RecoveryServices/vaults/replicationvCenters/read | Read any vCenters |
-> | microsoft.recoveryservices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
> | Microsoft.RecoveryServices/vaults/usages/read | Read any Vault Usages | > | Microsoft.RecoveryServices/Vaults/vaultTokens/read | The Vault Token operation can be used to get Vault Token for vault level backend operations. |
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-log-reference.md
The tables listed below are required to enable functions that identify privilege
## Functions available from the SAP solution
-This section describes the [functions](/azure-monitor/logs/functions.md) that are available in your workspace after you've deployed the Continuous Threat Monitoring for SAP solution. Find these functions in the Microsoft Sentinel **Logs** page to use in your KQL queries, listed under **Workspace functions**.
+This section describes the [functions](/azure/azure-monitor/logs/functions) that are available in your workspace after you've deployed the Continuous Threat Monitoring for SAP solution. Find these functions in the Microsoft Sentinel **Logs** page to use in your KQL queries, listed under **Workspace functions**.
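As a brief, hedged sketch of how these workspace functions are used (the row limit is an illustrative assumption, and the exact output columns depend on the version of the solution you deployed), you can invoke a function such as `SAPUsersAssignments` (described below) exactly like a built-in table:

```kql
// SAPUsersAssignments is deployed as a workspace function by the
// Continuous Threat Monitoring for SAP solution; preview a few rows.
SAPUsersAssignments
| take 20
```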
### SAPUsersAssignments
sentinel Web Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/web-normalization-schema.md
For example, to filter only Web sessions for a specified list of domain names, u
```kql
let torProxies=dynamic(["tor2web.org", "tor2web.com", "torlink.co",...]);
-imWebSession (hurl_has_any = torProxies)
+imWebSession (url_has_any = torProxies)
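// Optional continuation (a hedged editor's sketch, not part of the original example):
// keep recent events and project a few normalized ASIM Web Session fields.
// TimeGenerated, SrcIpAddr, Url, and EventResult are standard schema columns;
// adjust the list if your deployed parsers expose different fields.
| where TimeGenerated > ago(1d)
| project TimeGenerated, SrcIpAddr, Url, EventResult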
```

## Schema details
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Title: Built-in policy definitions for Azure Service Fabric description: Lists Azure Policy built-in policy definitions for Azure Service Fabric. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
service-fabric Service Fabric Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-security.md
A Service Fabric cluster is single tenant by design and hosted applications are
If you are considering hosting **untrusted applications**, you must take additional steps to define and own the hostile multi-tenant experience for your Service Fabric cluster. This will require you to consider multiple aspects, in the context of your scenario, including, but not limited to, the following: * A thorough security review of the untrusted applications' interactions with other applications, the cluster itself, and the underlying compute infrastructure.
-* Use of the strongest sandboxing technology applicable (e.g., appropriate [isolation modes](/virtualization/windowscontainers/manage-containers/hyperv-container.md) for container workloads).
+* Use of the strongest sandboxing technology applicable (e.g., appropriate [isolation modes](/virtualization/windowscontainers/manage-containers/hyperv-container) for container workloads).
* Risk assessment of the untrusted applications escaping the sandboxing technology, as the next trust and security boundary is the cluster itself. * Removal of the untrusted applications' [access to Service Fabric runtime](service-fabric-service-model-schema-complex-types.md#servicefabricruntimeaccesspolicytype-complextype).
service-health Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Service Health description: Sample Azure Resource Graph queries for Azure Service Health showing use of resource types and tables to access Azure Service Health related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
site-recovery Azure To Azure How To Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-policy.md
Title: Enable Azure Site Recovery for your VMs using Azure Policy
-description: Learn how to enable Policy Support to protect your VMs using Azure Site Recovery.
+ Title: Enable Azure Site Recovery for your VMs by using Azure Policy
+description: Learn how to enable policy support to help protect your VMs by using Azure Site Recovery.
-# Using Policy with Azure Site Recovery
+# Use Azure Policy to set up Azure Site Recovery
-This article describes how to set up [Azure Site Recovery](./site-recovery-overview.md) for your resources, using Azure Policy. [Azure Policy](../governance/policy/overview.md) helps to enforce certain business rules on your Azure resources and assess compliance of said resources.
+This article describes how to set up [Azure Site Recovery](./site-recovery-overview.md) for your resources by using Azure Policy. [Azure Policy](../governance/policy/overview.md) helps enforce certain business rules on your Azure resources and assess compliance of those resources.
+
+## Disaster recovery with Azure Policy
-## Disaster Recovery with Azure Policy
Site Recovery helps you keep your applications up and running in the event of planned or unplanned zonal/regional outages. Enabling Site Recovery on your machines at scale through the Azure portal can be challenging. Azure Policy can help you enable replication at scale without resorting to any scripting.
-With the built-in Azure Policy, you have a way to enable Site Recovery en masse on specific subscriptions or resource groups through the portal. Once you have a disaster recovery policy created for a subscription or resource group(s), then all the new virtual machines that are added to that/those subscription or resource group(s) will get Site Recovery enabled for them automatically. Moreover, for all the virtual machines already present in the resource group, Site Recovery can be enabled through a process called _remediation_(details below).
+With built-in Azure Policy capabilities, you have a way to enable Site Recovery en masse on specific subscriptions or resource groups through the portal. After you create a disaster recovery (DR) policy for subscriptions or resource groups, all the new virtual machines (VMs) that are added to those subscriptions or resource groups will get Site Recovery enabled for them automatically. For all the virtual machines already present in the resource group, you can enable Site Recovery through a process called _remediation_ (details later in this article).
>[!NOTE]
->The _Scope_ of this policy can be at a subscription level or resource group level.
+>A *scope* determines the resources or the grouping of resources where the policy assignment is enforced. The scope of this policy can be at a subscription level or a resource group level.
## Prerequisites -- Understand how to assign a Policy [here](../governance/policy/assign-policy-portal.md).-- Learn more about the Architecture of Azure to Azure Disaster Recovery [here](./azure-to-azure-architecture.md).-- Review the support matrix for Azure Site Recovery Policy Support:-
-**Scenario** | **Support Statement**
- |
-Managed Disks | Supported <br/>OS disk should be at least 1GB and at most 4TB in size.<br/>Data disk(s) should be at least 1GB and at most 32TB in size.<br/>
-Unmanaged Disks | Not supported
-Multiple Disks | Supported for up to 100 disks per VM.
-Ephemeral Disks | Not supported
-Ultra Disks | Not supported
-Availability Sets | Supported
-Availability Zones | Supported
-Azure Disk Encryption (ADE) enabled VMs | Not supported
-Proximity Placement Groups (PPG) | Supported. If the source VM is inside a PPG, then the Policy will create a PPG by appending '-asr' on the source PPG and use it for the DR/secondary region failover.
-VMs in both PPG and availability set | Not supported
-Customer-managed keys (CMK) enabled disks | Not supported
-Storage spaces direct (S2D) clusters | Not supported
-Virtual machine scale set VMs | Not supported
-VM with image as Azure Site Recovery Configuration Server | Not supported
-Powered off VMs | Not supported. VM must be powered on for the Policy to work on it.
-Azure Resource Manager Deployment Model | Supported
-Classic Deployment Model | Not supported
-Zone to Zone DR | Supported
-Interoperability with other policies applied as default by Azure (if any) | Supported
+- [Understand how to assign a policy](../governance/policy/assign-policy-portal.md).
+- [Learn more about the architecture of Azure-to-Azure disaster recovery](./azure-to-azure-architecture.md).
+- Review the following support matrix for Azure Site Recovery policy support:
->[!NOTE]
->In the following cases, Site Recovery will not be enabled:
->1. If a not-supported VM is created within the scope of policy.
->1. If a VM is a part of both an Availability Set as well as PPG.
+ **Scenario** | **Support statement**
+ |
+ Managed disks | Supported. The OS disk should be 1 GB to 4 TB in size. Data disks should be 1 GB to 32 TB in size.
+ Unmanaged disks | Not supported
+ Multiple disks | Supported for up to 100 disks per VM
+ Ephemeral disks | Not supported
+ Ultra disks | Not supported
+ Availability sets | Supported
+ Availability zones | Supported
+ Azure Disk Encryption enabled VMs | Not supported
+ Proximity placement groups (PPGs) | Supported. If the source VM is inside a PPG, the policy will create a PPG by appending *-asr* on the source PPG and use it for failover to the DR/secondary region.
+ VMs in both PPGs and availability sets | Not supported
+ Customer-managed key (CMK) enabled disks | Not supported
+ Storage Spaces Direct (S2D) clusters | Not supported
+ Virtual machine scale sets | Not supported
+ VM with image as Azure Site Recovery configuration server | Not supported
+ Powered-off VMs | Not supported. The VM must be powered on for the policy to work on it.
+ Azure Resource Manager deployment model | Supported
+ Classic deployment model | Not supported
+ Zone-to-zone DR | Supported
+ Interoperability with other policies applied as default by Azure (if any) | Supported
+
+> [!NOTE]
+> Site Recovery won't be enabled if:
+> - An unsupported VM is created within the scope of the policy.
+> - A VM is a part of both an availability set and a PPG.
+
+## Create a policy assignment
+
+To create a policy assignment for the built-in Azure Site Recovery policy that enables replication for all newly created VMs in a subscription or resource group:
+
+1. In the Azure portal, go to **Azure Policy**.
+1. Select **Assignments** on the left side of the Azure Policy page. An assignment is a policy that has been assigned to run on a specific scope.
+
+ :::image type="content" source="./media/azure-to-azure-how-to-enable-policy/select-assignments.png" alt-text="Screenshot of selecting Assignments from the Azure Policy overview page." border="false":::
+
+1. Select **Assign policy** from the top of the **Policy - Assignments** page.
+
+ :::image type="content" source="./media/azure-to-azure-how-to-enable-policy/select-assign-policy.png" alt-text="Screenshot of selecting Assign Policy from the Assignments page." border="false":::
+
+1. On the **Assign Policy** page, set the **Scope** information by selecting the ellipsis, selecting a subscription, and then optionally selecting a resource group. Then use the **Select** button at the bottom of the **Scope** page.
-## Create a Policy Assignment
-To create a policy assignment of the built-in Azure Site Recovery Policy that enables replication for all newly created VMs in a subscription or resource group(s), perform the following:
-1. Go to the **Azure portal** and navigate to **Azure Policy**
-1. Select **Assignments** on the left side of the Azure Policy page. An assignment is a policy that
- has been assigned to execute on a specific scope.
+ > [!NOTE]
+ > You can also choose to exclude a few resource groups from assignment of the policy by selecting them under **Exclusions**. This ability is useful when you want to assign the policy to all but a few resource groups in a subscription.
-1. Select **Assign Policy** from the top of the **Policy - Assignments** page.
+1. Open the policy definition picker by selecting the ellipses next to **Policy definition**. Search for **disaster recovery** or **site recovery**. You'll find a built-in policy titled **Configure disaster recovery on virtual machines by enabling replication via Azure Site Recovery**. Select it and click **Select**.
-1. On the **Assign Policy** page, set the **Scope** by selecting the ellipsis and then selecting a subscription and then optionally a resource group. A scope determines what resources or grouping of resources the policy assignment gets enforced on. Then use the **Select** button at the bottom of the **Scope** page. Please note that you can also choose to exclude a few resource groups from assignment of the Policy by selecting them under 'Exclusions'. This is particularly useful when you want to assign the Policy to all but a few resource groups in a given subscription.
+ :::image type="content" source="./media/azure-to-azure-how-to-enable-policy/select-policy-definition.png" alt-text="Screenshot of selecting a policy definition from the Basics page." border="true":::
-1. Launch the _Policy Definition Picker_ by selecting the ellipses next to **Policy Definition**. Search for _'disaster recovery'_ or _'site recovery'_. You will find a built-in Policy titled _"Configure disaster recovery on virtual machines by enabling replication via Azure Site Recovery"_. Select it and click _'Select'_.
+1. **Assignment name** is automatically populated with the policy name that you selected, but you can change it. It might be helpful if you plan to assign multiple Azure Site Recovery policies to the same scope.
-1. The **Assignment name** is automatically populated with the policy name you selected, but you can change it. It may be helpful if you plan to assign multiple Azure Site Recovery Policies to the same scope.
+1. Select **Next** to configure Azure Site Recovery properties for the policy.
-1. Select **Next** to configure Azure Site Recovery Properties for the Policy.
+## Configure target settings and properties
+
+You're on your way to creating a policy that enables Azure Site Recovery. Now, configure the target settings and properties:
+
+1. Go to the **Parameters** tab in the **Assign policy** workflow. Clear **Only show parameters that need input or review**. The parameters look as follows:
+
+ :::image type="content" source="./media/azure-to-azure-how-to-enable-policy/specify-parameters.png" alt-text="Screenshot of setting parameters from the Parameters page." border="true":::
-## Configure Target Settings and Properties
-You are on the way to create a Policy to enable Azure Site Recovery. Let us now configure the Target Settings and Properties:
-1. Go to the **Parameters** section of the **Assign Policy** workflow. Unselect _Only show parameters that need input or review_. The parameters look as follows:
1. Select appropriate values for these parameters:
- - **Source Region**: The Source Region of the Virtual Machines for which the Policy will be applicable.
- >[!NOTE]
- >The policy will apply to all the Virtual Machines belonging to the Source Region in the scope of the Policy. Virtual Machines not present in the Source Region will not be included.
- - **Target Region**: The location where your source virtual machine data will be replicated. Site Recovery provides the list of target regions that the customer can replicate to. If you want to enable zone to zone replication within a given region, select the same region as Source Region.
- - **Target Resource Group**: The resource group to which all your replicated virtual machines belong. By default, Site Recovery creates a new resource group in the target region.
- - **Vault Resource Group**: The resource group in which Recovery Services Vault exists.
- - **Recovery Services Vault**: The Vault in which all the VMs of the Scope will get protected. Policy can create a new vault on your behalf, if required.
- - **Recovery Virtual Network** **(optional parameter)**: Pick an existing virtual network in the target region to be used for recovery virtual machine. Policy can create a new virtual network for you as well, if required.
- - **Target Availability Zone** **(optional)**: Enter the Availability Zone of the Target Region where the Virtual Machine will failover. If some of the virtual machines in your resource group are already in the target availability zone, then the policy will not be applied to them in case you are setting up Zone to Zone DR.
- - **Cache Storage Account** **(optional)**: Azure Site Recovery makes use of a storage account for caching replicated data in the source region. Please select an account of your choice. You can choose to select the default cache storage account if you do not need to take care of any special considerations.
- > [!NOTE]
- > Please check cache storage account limits in the [Support Matrix](../site-recovery/azure-to-azure-support-matrix.md#cache-storage) before choosing a cache storage account.
- - **Tag name** **(optional)**: You can apply tags to your replicated VMs to logically organize them into a taxonomy. Each tag consists of a name and a value pair. You can use this field to enter the tag name. For example, *Environment*.
- - **Tag values** **(optional)**: You can use this field to enter the tag value. For example, *Production*.
- - **Tag type** **(optional)**: You can use tags to include VMs as part of the Policy assignment by selecting 'Tag type = Inclusion'. This ensures that only the VMs that have the tag (provided via 'Tag name' and 'Tag values' fields) are included in the Policy assignment. Alternatively, you can choose 'Tag type = Exclusion'. This ensures that the VMs that have the tag (provided via 'Tag name' and 'Tag values' fields) are excluded from Policy assignment. If no tags are selected, the entire resource group and/or subscription (as the case may be) gets selected for the Policy assignment.
- - **Effect**: Enable or disable the execution of the policy. Select _DeployIfNotExists_ to enable the policy as soon as it gets created.
+
+ - **Source Region**: Enter the source region of the virtual machines for which the policy will apply.
+
+ >[!NOTE]
+ >The policy will apply to all the virtual machines that belong to the source region in the scope of the policy. Virtual machines not present in the source region won't be included.
+
+ - **Target Region**: Enter the location where your source virtual machine data will be replicated. Site Recovery provides the list of target regions that the customer can replicate to. If you want to enable zone-to-zone replication within a region, select the same region as the **Source Region** value.
+ - **Target Resource Group**: Enter the resource group to which all your replicated virtual machines belong. By default, Site Recovery creates a new resource group in the target region.
+ - **Vault Resource Group**: Enter the resource group in which the Recovery Services vault exists.
+ - **Recovery Services Vault**: This is the vault in which all the VMs of the scope will be protected. The policy can create a new vault on your behalf, if required.
+ - **Recovery Virtual Network** **(optional)**: Choose an existing virtual network in the target region to be used for the recovery virtual machine. The policy can create a new virtual network for you, if required.
+ - **Target Availability Zone** **(optional)**: Enter the availability zone of the target region where the virtual machine will fail over. If some of the virtual machines in your resource group are already in the target availability zone, the policy won't be applied to them in case you're setting up zone-to-zone DR.
+ - **Cache storage account** **(optional)**: Azure Site Recovery makes use of a storage account for caching replicated data in the source region. Select an account of your choice. You can choose the default cache storage account if you don't have any special considerations.
+
+ > [!NOTE]
+ > Before you choose a cache storage account, check the cache storage account limits in the [support matrix](../site-recovery/azure-to-azure-support-matrix.md#cache-storage).
+
+ - **Tag name** **(optional)**: You can apply tags to your replicated VMs to logically organize them into a taxonomy. Each tag consists of a name/value pair. For example, enter **Environment**.
+ - **Tag values** **(optional)**: You can use this field to enter a tag value. For example, enter **Production**.
+ - **Tag type** **(optional)**: You can use tags to include VMs as part of the policy assignment by selecting **Tag type = Inclusion**. This type ensures that only the VMs that have the tag (provided via **Tag name** and **Tag values** fields) are included in the policy assignment.
+
+ Alternatively, you can choose **Tag type = Exclusion**. This type ensures that the VMs that have the tag (provided via **Tag name** and **Tag values** fields) are excluded from the policy assignment.
+
+ If no tags are selected, the entire resource group and/or subscription is selected for the policy assignment.
+ - **Effect**: Enable or disable the execution of the policy. Select **DeployIfNotExists** to enable the policy as soon as it's created.
+
+1. Select **Next** to decide on remediation tasks.
+
+## Configure remediation and other properties
+
+You've configured the target properties for Azure Site Recovery. However, this policy will take effect only for newly created virtual machines in the scope of the policy. Replication on pre-existing VMs isn't enabled automatically in the scope of the policy. You can solve this by creating a remediation task after the policy is assigned.
+
+To create a remediation task and set other properties:
-1. Select **Next** to decide on Remediation Task.
+1. On the **Remediation** tab in the **Assign policy** workflow, select the **Create a Remediation Task** checkbox.
-## Remediation and other properties
-1. The Target Properties for Azure Site Recovery have been configured. However, this policy will take effect only for newly created virtual machines in the scope of the Policy. Pre-existing VMs in the scope of the Policy do not see replication being enabled on them automatically. This can be solved via a Remediation Task after the policy is assigned. You can create a Remediation Task here by checking _Create a Remediation Task_ checkbox.
+ Azure Policy will create a [managed identity](../governance/policy/how-to/remediate-resources.md), which will have owner permissions to enable Azure Site Recovery for the resources in the scope.
-1. Azure Policy will create a [Managed Identity](../governance/policy/how-to/remediate-resources.md), which will have owner permissions to enable Azure Site Recovery for the resources in the scope.
+1. You can configure a custom non-compliance message for the policy on the **Non-compliance messages** tab.
-1. You can configure a custom Non-Compliance message for the policy on the _Non-compliance messages_ tab.
+1. Select **Next** at the bottom of the page or the **Review + Create** tab at the top of the page to move to the next segment of the assignment wizard.
-1. Select Next at the bottom of the page or the _Review + Create_ tab at the top of the page to move to the next segment of the assignment wizard.
+1. Review the selected options, and then select **Create** at the bottom of the page.
-1. Review the selected options, then select _Create_ at the bottom of the page.
+## Check the protection status of VMs after policy assignment
-## Checking protection status of VMs after assignment of Policy
-After the Policy is assigned, please wait for up to 1 hour for replication to be enabled. Subsequently, please go to the Recovery Services Vault chosen during Policy assignment and look for replication jobs. You should be able to locate all VMs for which Site Recovery was enabled via Policy in this vault.
+After you assign the policy, wait for up to 1 hour for replication to be enabled. After that, go to the Recovery Services vault that you chose during policy assignment and look for replication jobs. You should be able to find all VMs for which Site Recovery was enabled via policy in this vault.
-If the VMs do not show up in the vault as protected, you can go back to the Policy assignment and attempt to remediate.
+If the VMs don't show up in the vault as protected, you can go back to the policy assignment and try to remediate.
-If the VMs show up as non-compliant, it may be because Policy evaluation may have taken place before the VM was up and running completely. You can choose to either remediate or wait for up to 24 hours for the Policy to evaluate the subscription/resource group and remediate automatically.
+If the VMs show up as noncompliant, it might be because policy evaluation happened before the VM was completely up and running. You can choose to either remediate or wait for up to 24 hours for the policy to evaluate the subscription/resource group and remediate automatically.
-## Next Steps
+## Next steps
[Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server ova** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
+[Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 9.48.6263.1 | 5.1.7207.0 | 9.48.6263.1 | 5.1.7207.0 | 2.0.9245.0
[Rollup 60](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) | 9.47.6219.1 | 5.1.7127.0 | 9.47.6219.1 | 5.1.7127.0 | 2.0.9241.0 [Rollup 59](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 9.46.6149.1 | 5.1.7029.0 | 9.46.6149.1 | 5.1.7030.0 | 2.0.9239.0 [Rollup 58](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 9.45.6096.1 | 5.1.6952.0 | 9.45.6096.1 | 5.1.6952.0 | 2.0.9237.0 [Rollup 57](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 9.44.6068.1 | 5.1.6899.0 | 9.44.6068.1 | 5.1.6899.0 | 2.0.9236.0
-[Rollup 56](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 9.43.6040.1 | 5.1.6853.0 | 9.43.6040.1| 5.1.6853.0 | 2.0.9226.0
[Learn more](service-updates-how-to.md) about update installation and support.
+## Updates (March 2022)
+
+### Update Rollup 61
+
+[Update rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) provides the following updates:
+
+**Update** | **Details**
+ --- | ---
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup KB article.
+**Azure VM disaster recovery** | Added support for additional kernels for Debian 10 and Ubuntu 20.04 Linux distros. <br/><br/> Added public preview support for on-demand Capacity Reservation integration.
+**VMware VM/physical disaster recovery to Azure** | Added support for thin provisioned LVM volumes.<br/><br/>
+ ## Updates (January 2022) ### Update Rollup 60
spring-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/policy-reference.md
Title: Built-in policy definitions for Azure Spring Cloud description: Lists Azure Policy built-in policy definitions for Azure Spring Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
static-web-apps Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis.md
Logs are only available if you add [Application Insights](monitor.md).
| Managed functions | Bring your own functions | | | |
-| <ul><li>Triggers are limited to [HTTP](../azure-functions/functions-bindings-http-webhook.md).</li><li>The Azure Functions app must either be in Node.js 12, Node.js 14, Node.js 16 (preview), .NET Core 3.1, .NET 6.0, Python 3.8, or Python 3.9.</li><li>Some application settings are managed by the service, therefore the following prefixes are reserved by the runtime:<ul><li>*APPSETTING\_, AZUREBLOBSTORAGE\_, AZUREFILESSTORAGE\_, AZURE_FUNCTION\_, CONTAINER\_, DIAGNOSTICS\_, DOCKER\_, FUNCTIONS\_, IDENTITY\_, MACHINEKEY\_, MAINSITE\_, MSDEPLOY\_, SCMSITE\_, SCM\_, WEBSITES\_, WEBSITE\_, WEBSOCKET\_, AzureWeb*</li></ul></li></ul> | <ul><li>You are responsible to manage the Functions app deployment.</li></ul> |
+| <ul><li>Triggers are limited to [HTTP](../azure-functions/functions-bindings-http-webhook.md).</li><li>The Azure Functions app must be in Node.js 12, Node.js 14, Node.js 16 (preview), .NET Core 3.1, .NET 6.0, Python 3.8, or Python 3.9.</li><li>Some application settings are managed by the service; therefore, the following prefixes are reserved by the runtime:<ul><li>*APPSETTING\_, AZUREBLOBSTORAGE\_, AZUREFILESSTORAGE\_, AZURE_FUNCTION\_, CONTAINER\_, DIAGNOSTICS\_, DOCKER\_, FUNCTIONS\_, IDENTITY\_, MACHINEKEY\_, MAINSITE\_, MSDEPLOY\_, SCMSITE\_, SCM\_, WEBSITES\_, WEBSITE\_, WEBSOCKET\_, AzureWeb*</li></ul></li><li>Some application tags are used internally by the service. Therefore, the following tags are reserved:<ul><li> *AccountId, EnvironmentId, FunctionAppId*.</li></ul></li></ul> | <ul><li>You are responsible for managing the Functions app deployment.</li></ul> |
## Next steps
static-web-apps Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/authentication-authorization.md
For example:
<a href="/.auth/login/github?post_login_redirect_uri=https://zealous-water.azurestaticapps.net/success">Login</a> ```
+Additionally, you can redirect unauthenticated users back to the referring page after they log in. To configure this behavior, create a [response override](configuration.md#response-overrides) rule that sets `post_login_redirect_uri` to `.referrer`.
+
+For example:
+
+```json
+{
+ "responseOverrides": {
+ "401": {
+ "redirect": "/.auth/login/github?post_login_redirect_uri=.referrer",
+ "statusCode": 302
+ }
+ }
+}
+```
+ ## Logout The `/.auth/logout` route logs users out from the website. You can add a link to your site navigation to allow the user to log out as shown in the following example.
storage Data Lake Storage Acl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-powershell.md
This article shows you how to use PowerShell to get, set, and update the access
ACL inheritance is already available for new child items that are created under a parent directory. But you can also add, update, and remove ACLs recursively on the existing child items of a parent directory without having to make these changes individually for each child item.
-[Reference](/powershell/module/Az.Storage/) | [Recursive ACL Samples](https://recursiveaclpr.blob.core.windows.net/privatedrop/samplePS.ps1?sv=2019-02-02&st=2020-08-24T17%3A04%3A44Z&se=2021-08-25T17%3A04%3A00Z&sr=b&sp=r&sig=dNNKS%2BZcp%2F1gl6yOx6QLZ6OpmXkN88ZjBeBtym1Mejo%3D) | [Give feedback](https://github.com/Azure/azure-powershell/issues)
+[Reference](/powershell/module/Az.Storage/) | [Give feedback](https://github.com/Azure/azure-powershell/issues)
## Prerequisites
storage Network File System Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-known-issues.md
When you enable NFS 3.0 protocol support, some Blob Storage features will be ful
To see how each Blob Storage feature is supported in accounts that have NFS 3.0 support enabled, see [Blob Storage feature support for Azure Storage accounts](storage-feature-support-in-storage-accounts.md).
-> [!NOTE]
-> Static websites is an example of a partially supported feature because the configuration page for static websites does not yet appear in the Azure Portal for accounts that have NFS 3.0 support enabled. You can enable static websites only by using PowerShell or Azure CLI.
+> [!NOTE]
+> Static websites is an example of a partially supported feature because the configuration page for static websites does not yet appear in the Azure portal for accounts that have NFS 3.0 support enabled. You can enable static websites only by using PowerShell or Azure CLI.
## See also
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
When you connect to Blob Storage by using an SFTP client, you might be prompted
- [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md) - [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md)-- [Known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
+- [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
Title: Known issues with SFTP in Azure Blob Storage (preview) | Microsoft Docs
+ Title: Limitations & known issues with SFTP in Azure Blob Storage (preview) | Microsoft Docs
description: Learn about limitations and known issues of SSH File Transfer Protocol (SFTP) support for Azure Blob Storage.
-# Known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
+# Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
This article describes limitations and known issues of SFTP support for Azure Blob Storage.
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
See the documentation of your SFTP client for guidance about how to connect and
## See also - [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md)-- [Known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
+- [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
- [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Transaction and storage costs are based on factors such as storage account type
## See also - [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md)-- [Known issues with SSH File Transfer Protocol (SFTP) in Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
+- [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
- [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)
storage Storage Blobs Static Site Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-static-site-github-actions.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
az ad sp create --id $appId ```
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
+ az role assignment create --role contributor --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
``` 1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
storage Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Storage description: Sample Azure Resource Graph queries for Azure Storage showing use of resource types and tables to access Azure Storage related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Previously updated : 02/16/2022 Last updated : 03/12/2022
You can grant access to trusted Azure services by creating a network rule except
When you grant access to trusted Azure services, you grant the following types of access: - Trusted access for select operations to resources that are registered in your subscription.-- Trusted access to resources based on system-assigned managed identity.
+- Trusted access to resources based on a managed identity.
<a id="trusted-access-resources-in-subscription"></a>
Resources of some services, **when registered in your subscription**, can access
| Azure Site Recovery | Microsoft.SiteRecovery | Enable replication for disaster-recovery of Azure IaaS virtual machines when using firewall-enabled cache, source, or target storage accounts. [Learn more](../../site-recovery/azure-to-azure-tutorial-enable-replication.md). | <a id="trusted-access-system-assigned-managed-identity"></a>
+<a id="trusted-access-based-on-system-assigned-managed-identity"></a>
-### Trusted access based on system-assigned managed identity
+### Trusted access based on a managed identity
The following table lists services that can have access to your storage account data if the resource instances of those services are given the appropriate permission.
-If your account does not have the hierarchical namespace feature enabled on it, you can grant permission, by explicitly assigning an Azure role to the [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for each resource instance. In this case, the scope of access for the instance corresponds to the Azure role assigned to the managed identity.
+If your account does not have the hierarchical namespace feature enabled on it, you can grant permission by explicitly assigning an Azure role to the [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for each resource instance. In this case, the scope of access for the instance corresponds to the Azure role assigned to the managed identity.
-You can use the same technique for an account that has the hierarchical namespace feature enable on it. However, you don't have to assign an Azure role if you add the system-assigned managed identity to the access control list (ACL) of any directory or blob contained in the storage account. In that case, the scope of access for the instance corresponds to the directory or file to which the system-assigned managed identity has been granted access. You can also combine Azure roles and ACLs together. To learn more about how to combine them together to grant access, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md).
+You can use the same technique for an account that has the hierarchical namespace feature enabled on it. However, you don't have to assign an Azure role if you add the managed identity to the access control list (ACL) of any directory or blob contained in the storage account. In that case, the scope of access for the instance corresponds to the directory or file to which the managed identity has been granted access. You can also combine Azure roles and ACLs. To learn more about how to combine them to grant access, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md).
> [!TIP] > The recommended way to grant access to specific resources is to use resource instance rules. To grant access to specific resource instances, see the [Grant access from Azure resource instances (preview)](#grant-access-specific-instances) section of this article.
storage Storage Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-javascript.md
The following tables provide an overview of our samples repository and the scena
[Receive messages](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-queue/samples/v12/javascript/queueClient.js#L73) :::column-end::: :::column span="":::
- [Delete messages](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-queue/samp#L5)
+ [Delete messages](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-queue/samples/v12/javascript/queueClient.js#L5)
:::column-end::: :::row-end:::
storage File Sync Choose Cloud Tiering Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-choose-cloud-tiering-policies.md
Title: Choose Azure File Sync cloud tiering policies | Microsoft Docs description: Details on what to keep in mind when choosing Azure File Sync cloud tiering policies.-+ Last updated 04/13/2021-+
storage File Sync Cloud Tiering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-overview.md
Title: Understand Azure File Sync cloud tiering | Microsoft Docs description: Understand cloud tiering, an optional Azure File Sync feature. Frequently accessed files are cached locally on the server; others are tiered to Azure Files.-+ Last updated 04/13/2021-+
storage File Sync Cloud Tiering Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-policy.md
Title: Azure File Sync cloud tiering policies | Microsoft Docs description: Details on how the date and volume free space policies work together for different scenarios.-+ Last updated 04/13/2021-+
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
Title: Deploy Azure File Sync | Microsoft Docs description: Learn how to deploy Azure File Sync, from start to finish, using the Azure portal, PowerShell, or the Azure CLI.-+ Last updated 04/15/2021-+ ms.devlang: azurecli
storage File Sync Disaster Recovery Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-disaster-recovery-best-practices.md
Title: Best practices for disaster recovery with Azure File Sync description: Learn about best practices for disaster recovery with Azure File Sync. Specifically, high availability, data protection, and data redundancy.-+ Last updated 08/18/2021-+
storage File Sync Extend Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-extend-servers.md
Title: Tutorial - Extend Windows file servers with Azure File Sync | Microsoft Docs description: Learn how to extend Windows file servers with Azure File Sync, from start to finish.-+ Last updated 04/13/2021-+ #Customer intent: As an IT Administrator, I want see how to extend Windows file servers with Azure File Sync, so I can evaluate the process for extending storage capacity of my Windows servers.
storage File Sync Firewall And Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-firewall-and-proxy.md
Title: Azure File Sync on-premises firewall and proxy settings | Microsoft Docs description: Understand Azure File Sync on-premises proxy and firewall settings. Review configuration details for ports, networks, and special connections to Azure.-+ Last updated 04/13/2021-+
storage File Sync How To Manage Tiered Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-how-to-manage-tiered-files.md
Title: How to manage Azure File Sync tiered files | Microsoft Docs description: Tips and PowerShell commandlets to help you manage tiered files-+ Last updated 04/13/2021-+
storage File Sync Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-introduction.md
Title: Introduction to Azure File Sync | Microsoft Docs description: An overview of Azure File Sync, a service that enables you to create and use network file shares in the cloud using the industry standard SMB protocol.-+ Last updated 04/19/2021-+
storage File Sync Modify Sync Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-modify-sync-topology.md
Title: Modify your Azure File Sync topology | Microsoft Docs description: Guidance on how to modify your Azure File Sync sync topology-+ Last updated 4/23/2021-+
storage File Sync Monitor Cloud Tiering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-monitor-cloud-tiering.md
Title: Monitor Azure File Sync cloud tiering | Microsoft Docs description: Details on metrics to use to monitor your cloud tiering policies.-+ Last updated 04/13/2021-+
storage File Sync Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-monitoring.md
Title: Monitor Azure File Sync | Microsoft Docs description: Review how to monitor your Azure File Sync deployment by using Azure Monitor, Storage Sync Service, and Windows Server.-+ Last updated 01/3/2022-+
storage File Sync Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-endpoints.md
Title: Configuring Azure File Sync network endpoints | Microsoft Docs description: Learn how to configure Azure File Sync network endpoints.-+ Last updated 04/13/2021-+
storage File Sync Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-overview.md
Title: Azure File Sync networking considerations | Microsoft Docs description: Learn how to configure networking to use Azure File Sync to cache files on-premises.-+ Last updated 04/13/2021-+
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
Title: Planning for an Azure File Sync deployment | Microsoft Docs description: Plan for a deployment with Azure File Sync, a service that allows you to cache several Azure file shares on an on-premises Windows Server or cloud VM.-+ Last updated 04/13/2021-+
storage File Sync Server Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-registration.md
Title: Manage registered servers with Azure File Sync | Microsoft Docs description: Learn how to register and unregister a Windows Server with an Azure File Sync Storage Sync Service.-+ Last updated 01/3/2022-+
storage Files Manage Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-manage-namespaces.md
Title: How to use DFS-N with Azure Files description: Common DFS-N use cases with Azure Files-+ Last updated 3/02/2021-+
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
Title: NFS file shares in Azure Files description: Learn about file shares hosted in Azure Files using the Network File System (NFS) protocol.-+ Last updated 11/16/2021-+
storage Files Remove Smb1 Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-remove-smb1-linux.md
Title: Secure your Azure and on-premises environments by removing SMB 1 on Linux | Microsoft Docs description: Azure Files supports SMB 3.x and SMB 2.1, not insecure legacy versions of SMB such as SMB 1. Before connecting to an Azure file share, you may wish to disable older versions of SMB such as SMB 1.-+ Last updated 05/19/2021-+
storage Files Reserve Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-reserve-capacity.md
Title: Optimize costs for Azure Files with reserved capacity
description: Learn how to save costs on Azure file share deployments by using Azure Files reserved capacity. -+ Last updated 03/23/2021-+
storage Files Smb Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md
Title: SMB file shares in Azure Files description: Learn about file shares hosted in Azure Files using the Server Message Block (SMB) protocol.-+ Last updated 09/10/2021-+
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
Title: What's new in Azure Files description: Learn more about new features and enhancements in Azure Files.-+ Last updated 12/08/2021-+
storage Storage Dotnet How To Use Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-dotnet-how-to-use-files.md
Title: Develop for Azure Files with .NET | Microsoft Docs description: Learn how to develop .NET applications and services that use Azure Files to store data.-+ ms.devlang: csharp Last updated 10/02/2020-+
storage Storage Files Active Directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md
Title: Overview - Azure Files identity-based authorization description: Azure Files supports identity-based authentication over SMB (Server Message Block) through Azure Active Directory Domain Services (AD DS) and Active Directory. Your domain-joined Windows virtual machines (VMs) can then access Azure file shares using Azure AD credentials. -+ Last updated 12/01/2021-+ # Overview of Azure Files identity-based authentication options for SMB access
storage Storage Files Configure P2s Vpn Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-linux.md
Title: Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files | Microsoft Docs description: How to configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files-+ Last updated 10/19/2019-+
storage Storage Files Configure P2s Vpn Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-windows.md
Title: Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files | Microsoft Docs description: How to configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files-+ Last updated 10/19/2019-+
storage Storage Files Configure S2s Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-s2s-vpn.md
Title: Configure a Site-to-Site (S2S) VPN for use with Azure Files | Microsoft Docs description: How to configure a Site-to-Site (S2S) VPN for use with Azure Files-+ Last updated 10/19/2019-+
storage Storage Files Enable Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-enable-soft-delete.md
Title: Enable soft delete - Azure file shares description: Learn how to enable soft delete on Azure file shares for data recovery and preventing accidental deletion.-+ Last updated 04/05/2021-+
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
Title: Frequently asked questions (FAQ) for Azure Files | Microsoft Docs description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments.-+ Last updated 02/09/2022-+
storage Storage Files How To Create Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-create-nfs-shares.md
Title: Create an NFS share - Azure Files description: Learn how to create an Azure file share that can be mounted using the Network File System protocol.-+ Last updated 11/16/2021-+
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
Title: Mount an Azure NFS file share - Azure Files description: Learn how to mount a Network File System share.-+ Last updated 11/16/2021-+
storage Storage Files Identity Ad Ds Assign Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md
Title: Control access to Azure file shares - on-premises AD DS authentication description: Learn how to assign permissions to an Active Directory Domain Services identity that represents your storage account. This allows you control access with identity-based authentication.-+ Last updated 12/16/2021-+ ms.devlang: azurecli
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
Title: Control what a user can do at the file level - Azure file shares description: Learn how to configure Windows ACLs permissions for on-premises AD DS authentication to Azure file shares. Allowing you to take advantage of granular access control.-+ Last updated 09/16/2020-+ # Part three: configure directory and file level permissions over SMB
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Title: Enable AD DS authentication to Azure file shares description: Learn how to enable Active Directory Domain Services authentication over SMB for Azure file shares. Your domain-joined Windows virtual machines can then access Azure file shares by using AD DS credentials. -+ Last updated 01/14/2022-+
storage Storage Files Identity Ad Ds Mount File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md
Title: Mount Azure file share to an AD DS-joined VM description: Learn how to mount a file share to your on-premises Active Directory Domain Services-joined machines.-+ Last updated 06/22/2020-+ # Part four: mount a file share from a domain-joined VM
storage Storage Files Identity Ad Ds Update Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-update-password.md
Title: Update AD DS storage account password description: Learn how to update the password of the Active Directory Domain Services account that represents your storage account. This prevents the storage account from being cleaned up when the password expires, preventing authentication failures.-+ Last updated 06/22/2020-+ # Update the password of your storage account identity in AD DS
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
Title: Use Azure AD Domain Services to authorize access to file data over SMB description: Learn how to enable identity-based authentication over Server Message Block (SMB) for Azure Files through Azure Active Directory Domain Services. Your domain-joined Windows virtual machines (VMs) can then access Azure file shares by using Azure AD credentials.-+ Last updated 01/14/2022-+
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
Title: Overview - On-premises AD DS authentication to Azure file shares description: Learn about Active Directory Domain Services (AD DS) authentication to Azure file shares. This article goes over support scenarios, availability, and explains how the permissions work between your AD DS and Azure active directory. -+ Last updated 03/15/2021-+ # Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares
storage Storage Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-introduction.md
Title: Introduction to Azure Files | Microsoft Docs description: An overview of Azure Files, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols.-+ Last updated 07/23/2021-+
storage Storage Files Networking Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-dns.md
Title: Configuring DNS forwarding for Azure Files | Microsoft Docs description: Learn how to configure DNS forwarding for Azure Files.-+ Last updated 07/02/2021-+
storage Storage Files Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-endpoints.md
Title: Configuring Azure Files network endpoints | Microsoft Docs description: Learn how to configure Azure File network endpoints.-+ Last updated 07/02/2021-+
storage Storage Files Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-overview.md
Title: Azure Files networking considerations | Microsoft Docs description: An overview of networking options for Azure Files.-+ Last updated 07/02/2021-+
storage Storage Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md
Title: Planning for an Azure Files deployment | Microsoft Docs description: Understand planning for an Azure Files deployment. You can either direct mount an Azure file share, or cache Azure file share on-premises with Azure File Sync.-+ Last updated 07/02/2021-+
storage Storage Files Prevent File Share Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-prevent-file-share-deletion.md
Title: Prevent accidental deletion - Azure file shares description: Learn about soft delete for Azure file shares and how you can use it to for data recovery and preventing accidental deletion.-+ Last updated 03/29/2021-+
storage Storage Files Quick Create Use Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-windows.md
Title: Tutorial - Create and use an Azure file shares on Windows VMs description: This tutorial covers how to create and use an Azure files shares in the Azure portal. Connect it to a Windows VM, connect to the file share, and upload a file to the file share.-+ Last updated 02/14/2022-+ #Customer intent: As an IT admin new to Azure Files, I want to try out Azure file share so I can determine whether I want to subscribe to the service.
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
Title: Azure Files scalability and performance targets description: Learn about the capacity, IOPS, and throughput rates for Azure file shares.-+ Last updated 01/31/2022-+
storage Storage Files Smb Multichannel Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-smb-multichannel-performance.md
Title: SMB Multichannel performance - Azure Files description: Learn about SMB Multichannel performance.-+ Last updated 08/25/2021-+
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
Title: Create an Azure file share description: How to create an Azure file share by using the Azure portal, PowerShell, or the Azure CLI.-+ Last updated 07/27/2021-+
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
Title: Mount SMB Azure file share on Linux | Microsoft Docs description: Learn how to mount an Azure file share over SMB on Linux. See the list of prerequisites. Review SMB security considerations on Linux clients.-+ Last updated 05/05/2021-+
storage Storage How To Use Files Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-mac.md
Title: Mount SMB Azure file share on macOS | Microsoft Docs description: Learn how to mount an Azure file share over SMB with macOS using Finder or Terminal. Azure Files is Microsoft's easy-to-use cloud file system.-+ Last updated 09/23/2020-+
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
Title: Quickstart for managing Azure file shares description: See how to create and manage Azure file shares with the Azure portal, Azure CLI, or Azure PowerShell module. Create a storage account, create an Azure file share, and use your Azure file share.-+ Last updated 09/17/2021-+ ms.devlang: azurecli
storage Storage How To Use Files Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-windows.md
Title: Mount SMB Azure file share on Windows | Microsoft Docs description: Learn to use Azure file shares with Windows and Windows Server. Use Azure file shares with SMB 3.x on Windows installations running on-premises or on Azure VMs.-+ Last updated 09/10/2021-+
storage Storage Java How To Use File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-java-how-to-use-file-storage.md
Title: Develop for Azure Files with Java | Microsoft Docs description: Learn how to develop Java applications and services that use Azure Files to store file data.-+ Last updated 05/26/2021 -+
storage Storage Python How To Use File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-python-how-to-use-file-storage.md
Title: Develop for Azure Files with Python | Microsoft Docs description: Learn how to develop Python applications and services that use Azure Files to store file data.-+ Last updated 10/08/2020-+
storage Storage Snapshots Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-snapshots-files.md
Title: Overview of share snapshots for Azure Files | Microsoft Docs description: A share snapshot is a read-only version of an Azure Files share that's taken at a point in time, as a way to back up the share.-+ Last updated 01/17/2018-+
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
Title: Understand Azure Files billing | Microsoft Docs description: Learn how to interpret the provisioned and pay-as-you-go billing models for Azure file shares.-+ Last updated 12/08/2021-+
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
synapse-analytics Apache Spark Cdm Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-cdm-connector.md
+
+ Title: Azure Synapse Spark Common Data Model (CDM) connector
+description: Learn how to use the Azure Synapse Spark CDM connector to read and write CDM entities in a CDM folder on ADLS.
+++++ Last updated : 03/10/2022+++
+# Common Data Model (CDM) Connector for Azure Synapse Spark
+
+The Synapse Spark Common Data Model (CDM) format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes.
+
+For information on defining CDM documents using CDM 1.0, see [What is CDM and how to use it](/common-data-model/).
+
+## High level functionality
+
+The following capabilities are supported:
+
+* Reading data from an entity in a CDM folder into a Spark dataframe
+* Writing from a Spark dataframe to an entity in a CDM folder based on a CDM entity definition
+* Writing from a Spark dataframe to an entity in a CDM folder based on the dataframe schema
+
+## Capabilities
+
+* Supports reading and writing to CDM folders in ADLS gen2 with HNS enabled.
+* Supports reading from CDM folders described by either manifest or model.json files.
+* Supports writing to CDM folders described by a manifest file.
+* Supports data in CSV format with/without column headers and with user selectable delimiter character.
+* Supports data in Apache Parquet format, including nested Parquet.
+* Supports submanifests on read, optional use of entity-scoped submanifests on write.
+* Supports writing data using user modifiable partition patterns.
+* Supports use of the Synapse managed identity and credentials.
+* Supports resolving CDM alias locations used in imports by using CDM adapter definitions described in a config.json file.
+
+## Limitations
+
+The following scenarios aren't supported:
+
+* Programmatic access to entity metadata after reading an entity.
+* Programmatic access to set or override metadata when writing an entity.
+* Schema drift - where data in a dataframe being written includes extra attributes not included in the entity definition.
+* Schema evolution - where entity partitions reference different versions of the entity definition.
+* Writing to model.json.
+
+Spark 2.4 and Spark 3.1 are supported. You can verify the connector version in use by executing ```com.microsoft.cdm.BuildInfo.version```.
+
+## Reading data
+
+When reading data, the connector uses metadata in the CDM folder to create the dataframe based on the resolved entity definition for the specified entity, as referenced in the manifest. Entity attribute names are used as dataframe column names. Attribute datatypes are mapped to the column datatype. When the dataframe is loaded, it's populated from the entity partitions identified in the manifest.
+
+The connector looks in the specified manifest and any first-level submanifests for the specified entity. If the required entity is in a second-level or lower submanifest, or if there are multiple entities of the same name in different submanifests, then the user should specify the submanifest that contains the required entity rather than the root manifest.
+Entity partitions can be in a mix of formats (CSV, Parquet, etc.). All the entity data files identified in the manifest are combined into one dataset regardless of format and loaded to the dataframe.
+
+When reading CSV data, the connector uses the Spark FAILFAST option by default and returns an error if the number of columns isn't equal to the number of attributes in the entity. Alternatively, as of version 0.19, permissive mode is supported for CSV files only. With permissive mode, when a CSV row has fewer columns than the entity schema, null values are assigned for the missing columns. When a CSV row has more columns than the entity schema, the extra columns are truncated to the schema column count. Usage is as follows:
+
+```scala
+ .option("entity", "permissive") or .option("mode", "failfast")
+```
+
+For an example, see [this Python sample](https://github.com/Azure/spark-cdm-connector/blob/master/samples/SparkCDMsamplePython.ipynb).
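+
+To show the option in context, here's a minimal sketch of a read call that opts into permissive mode; the storage account, manifest path, and entity name are placeholders, not values from a real workspace:
+
+```scala
+// Read the Customer entity, tolerating CSV rows whose column count doesn't
+// match the entity schema (missing columns become null, extra columns are dropped).
+val permissiveDf = spark.read.format("com.microsoft.cdm")
+  .option("storage", "mystorageaccount.dfs.core.windows.net")        // placeholder account
+  .option("manifestPath", "customerleads/default.manifest.cdm.json") // placeholder manifest
+  .option("entity", "Customer")                                      // placeholder entity name
+  .option("mode", "permissive")                                      // default is failfast
+  .load()
+```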
+
+## Writing data
+
+When writing to a CDM folder, if the entity doesn't already exist in the CDM folder, a new entity and definition are created, added to the CDM folder, and referenced in the manifest. Two writing modes are supported:
+
+**Explicit write**: the physical entity definition is based on a logical CDM entity definition that the user specifies.
+
+* The specified logical entity definition is read and resolved to create the physical entity definition used in the CDM folder. If import statements in any directly or indirectly referenced CDM definition file include aliases, then a config.json file that maps these aliases to CDM adapters and storage locations must be provided. For more on the use of aliases, see _CDM alias integration_ below.
+* If the dataframe schema doesn't match the referenced entity definition, an error is returned. Ensure that the column datatypes in the dataframe match the attribute datatypes in the entity, including for decimal data, precision and scale set via traits in CDM.
+* If the dataframe is inconsistent with the entity definition an error is returned.
+* If the dataframe is consistent:
+ * If the entity already exists in the manifest, the provided entity definition is resolved and validated against the definition in the CDM folder. If the definitions don't match, an error is returned; otherwise, data is written and the partition information in the manifest is updated.
+ * If the entity doesn't exist in the CDM folder, a resolved copy of the entity definition is written to the manifest in the CDM folder and data is written and the partition information in the manifest is updated.
+
+**Implicit write**: the entity definition is derived from the dataframe structure.
+
+* If the entity doesn't exist in the CDM folder, the implicit definition is used to create the resolved entity definition in the target CDM folder.
+* If the entity exists in the CDM folder, the implicit definition is validated against the existing entity definition. If the definitions don't match, an error is returned; otherwise, data is written and a derived logical entity definition is written into a subfolder of the entity folder.
+Data is written to data folder(s) within an entity subfolder. A save mode determines whether the new data overwrites or is appended to existing data, or an error is returned if data exists. The default is to return an error if data already exists.
+
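+As a point of reference, here's a minimal sketch of an implicit write; it assumes an existing Spark session and a dataframe named `df`, and the storage account and manifest path are placeholders. An explicit write adds the entity definition options described later in this article.
+
+```scala
+import org.apache.spark.sql.SaveMode
+
+// Implicit write: the entity definition is derived from the dataframe schema.
+df.write.format("com.microsoft.cdm")
+  .option("storage", "mystorageaccount.dfs.core.windows.net")  // placeholder account
+  .option("manifestPath", "cdmdata/default.manifest.cdm.json") // placeholder root manifest
+  .option("entity", "Customer")                                // entity is created if it doesn't exist
+  .mode(SaveMode.Append)                                       // append new partitions to existing data
+  .save()
+```
+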
+## CDM alias integration
+
+CDM definition files use aliases in import statements to simplify the import statement and allow the location of the imported content to be late bound at execution time. Using aliases:
+
+* Facilitates easy organization of CDM files so that related CDM definitions can be grouped together at different locations.
+* Allows CDM content to be accessed from different deployed locations at runtime.
+
+The snippet below shows the use of aliases in import statements in a CDM definition file.
+
+```json
+"imports": [
+{
+ "corpusPath": "cdm:/foundations.cdm.json"
+},
+{
+ "corpusPath": "core:/TrackedEntity.cdm.json"
+},
+{
+ "corpusPath": "Customer.cdm.json"
+}
+]
+```
+
+In the example above, 'cdm' is used as an alias for the location of the CDM foundations file, and 'core' is used as an alias for the location of the TrackedEntity definition file.
+
+Aliases are text labels that are matched to a namespace value in an adapter entry in a CDM config.json file. An adapter entry specifies the adapter type (for example "adls", "CDN", "GitHub", "local", etc.) and a URL that defines a location. Some adapters support other configuration options, such as a connection timeout. While aliases are arbitrary text labels, the 'cdm' alias is treated in a special manner as described below.
+
+The Spark CDM Connector will look in the entity definition model root location for the config.json file to load. If the config.json file is at some other location or the user seeks to override the config.json file in the model root, then the user can provide the location of a config.json file using the _configPath_ option. The config.json file must contain adapter entries for all the aliases used in the CDM code being resolved or an error will be reported.
+
+By being able to override the config.json, the user can provide runtime-accessible locations for CDM definitions. Ensure that the content referenced at runtime is consistent with the definitions used when the CDM was originally authored.
+
+By convention, the _cdm_ alias is used to refer to the location of the root-level standard CDM definitions, including the foundations.cdm.json file, which includes the CDM primitive datatypes and a core set of trait definitions required for most CDM entity definitions. The _cdm_ alias can be resolved like any other alias using an adapter entry in the config.json file. Alternatively, if an adapter isn't specified or a null entry is provided, then the _cdm_ alias will be resolved by default to the CDM public CDN at `https://cdm-schema.microsoft.com/logical/`. The user can also use the _cdmSource_ option to override how the cdm alias is resolved (see the option details below). Using the _cdmSource_ option is useful if the cdm alias is the only alias used in the CDM definitions being resolved, because it avoids needing to create or reference a config.json file.
+
+## Parameters, options and save mode
+
+For both read and write, the Spark CDM Connector library name is provided as a parameter. A set of options are used to parameterize the behavior of the connector. When writing, a save mode is also supported.
+
+The connector library name, options and save mode are formatted as follows:
+
+* dataframe.read.format("com.microsoft.cdm") [.option("option", "value")]*
+* dataframe.write.format("com.microsoft.cdm") [.option("option", "value")]* .mode(savemode.\<saveMode\>)
+
+Here's an example of how the connector is used for read, showing some of the options. More examples are provided later.
+
+```scala
+val readDf = spark.read.format("com.microsoft.cdm")
+ .option("storage", "mystorageaccount.dfs.core.windows.net")
+ .option("manifestPath", "customerleads/default.manifest.cdm.json")
+ .option("entity", "Customer")
+ .load()
+```
+
+### Common READ and WRITE options
+
+The following options identify the entity in the CDM folder that is either being read or written to.
+
+|**Option** |**Description** |**Pattern and example usage** |
+|---|---|:-:|
+|storage|The endpoint URL for the ADLS gen2 storage account with HNS enabled in which the CDM folder is located. <br/>Use the _dfs_.core.windows.net URL | \<accountName\>.dfs.core.windows.net "myAccount.dfs.core.windows.net"|
+|manifestPath|The relative path to the manifest or model.json file in the storage account. For read, can be a root manifest or a submanifest or a model.json. For write, must be a root manifest.|\<container\>/{\<folderPath\>/}\<manifestFileName>, <br/>"mycontainer/default.manifest.cdm.json" "models/hr/employees.manifest.cdm.json" <br/> "models/hr/employees/model.json" (read only) |
+|entity| The name of the source or target entity in the manifest. When writing an entity for the first time in a folder, the resolved entity definition will be given this name. Entity name is case sensitive.| \<entityName\> <br/>"customer"|
+|maxCDMThreads| The maximum number of concurrent reads while resolving an entity definition. | Any valid integer, for example 5|
+
+> [!NOTE]
+> You no longer need to specify a logical entity definition in addition to the physical entity definition in the CDM folder on read.
+
+### Explicit write options
+
+The following options identify the logical entity definition that defines the entity being written. The logical entity definition will be resolved to a physical definition that defines how the entity will be written.
+
+|**Option** |**Description** |**Pattern / example usage** |
+|---|---|:-:|
+|entityDefinitionStorage |The ADLS gen2 storage account containing the entity definition. Required if different to the storage account hosting the CDM folder.|\<accountName\>.dfs.core.windows.net<br/>"myAccount.dfs.core.windows.net"|
+|entityDefinitionModelRoot|The location of the model root or corpus within the account. |\<container\>/\<folderPath\> <br/> "crm/core"<br/>|
+|entityDefinitionPath|Location of the entity. File path to the CDM definition file relative to the model root, including the name of the entity in that file.|\<folderPath\>/\<entityName\>.cdm.json/\<entityName\><br/>"sales/customer.cdm.json/customer"|
+|configPath| The container and folder path to a config.json file that contains the adapter configurations for all aliases included in the entity definition file and any directly or indirectly referenced CDM files. **Not required if the config.json is in the model root folder.**| \<container\>/\<folderPath\>|
+|useCdmStandardModelRoot | Indicates the model root is located at [https://cdm-schema.microsoft.com/CDM/logical/](https://github.com/microsoft/CDM/tree/master/schemaDocuments) <br/>Used to reference entity types defined in the CDM Content Delivery Network (CDN).<br/>Overrides: entityDefinitionStorage, entityDefinitionModelRoot if specified.<br/>| "useCdmStandardModelRoot" |
+|cdmSource|Defines how the 'cdm' alias if present in CDM definition files is resolved. If this option is used, it overrides any _cdm_ adapter specified in the config.json file. Values are "builtin" or "referenced". Default value is "referenced" <br/> If set to _referenced_, then the latest published standard CDM definitions at `https://cdm-schema.microsoft.com/logical/` are used. If set to _builtin_ then the CDM base definitions built in to the CDM object model used by the Spark CDM Connector will be used. <br/> Note: <br/> 1). The Spark CDM Connector may not be using the latest CDM SDK so may not contain the latest published standard definitions. <br/> 2). The built-in definitions only include the top-level CDM content such as foundations.cdm.json, primitives.cdm.json, etc. If you wish to use lower-level standard CDM definitions, either use _referenced_ or include a cdm adapter in the config.json.<br/>| "builtin"\|"referenced". |
+
+In the example above, the full path to the customer entity definition object is:
+`https://myAccount.dfs.core.windows.net/models/crm/core/sales/customer.cdm.json/customer`, where 'models' is the container in ADLS.
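+
+As a sketch only, the following shows how these options might be combined when the entity definition is held in a different storage account from the CDM folder. All account, container, and path names are illustrative placeholders, and the configPath and cdmSource settings are included only to show where they would appear:
+
+```scala
+df.write.format("com.microsoft.cdm")
+  .option("storage", "myAccount.dfs.core.windows.net")                 // account hosting the CDM folder
+  .option("manifestPath", "mycontainer/default.manifest.cdm.json")
+  .option("entity", "customer")
+  .option("entityDefinitionStorage", "myModels.dfs.core.windows.net")  // definitions live in a different account
+  .option("entityDefinitionModelRoot", "models/crm/core")
+  .option("entityDefinitionPath", "sales/customer.cdm.json/customer")
+  .option("configPath", "models/config")                               // only needed if config.json isn't in the model root
+  .option("cdmSource", "referenced")                                   // resolve the 'cdm' alias from the published definitions
+  .mode(SaveMode.Overwrite)
+  .save()
+```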
+
+### Implicit write options
+
+If a logical entity definition isn't specified on write, the entity will be written implicitly, based on the dataframe schema.
+
+When writing implicitly, a timestamp column will normally be interpreted as a CDM DateTime datatype. This can be overridden to create an attribute of CDM Time datatype by providing a metadata object associated with the column that specifies the datatype. See Handling CDM Time data below for details.
+
+Initially, this is supported for CSV files only. Support for writing time data to Parquet will be added in a later release.
+
+### Folder structure and data format options
+
+Folder organization and file format can be changed with the following options.
+
+|**Option** |**Description** |**Pattern / example usage** |
+|||::|
+|useSubManifest|If true, causes the target entity to be included in the root manifest via a submanifest. The submanifest and the entity definition are written into an entity folder beneath the root. Default is false.|"true"\|"false" |
+|format|Defines the file format. Current supported file formats are CSV and Parquet. Default is "csv"|"csv"\|"parquet" <br/> |
+|delimiter|CSV only. Defines the delimiter used. Default is comma. | "\|" |
+|columnHeaders| CSV only. If true, will add a first row to data files with column headers. Default is "true"|"true"\|"false"|
+|compression|Write only. Parquet only. Defines the compression format used. Default is "snappy" |"uncompressed" \| "snappy" \| "gzip" \| "lzo".
+|dataFolderFormat|Allows user-definable data folder structure within an entity folder. Allows the use of date and time values to be substituted into folder names using DateTimeFormatter formatting. Non-formatter content must be enclosed in single quotes. Default format is ``` "yyyy"-"MM"-"dd" ``` producing folder names like 2020-07-30| ```year "yyyy" / month "MM"``` <br/> ```"Data"```|
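+
+The following sketch shows how several of these options might be combined in a write; the account and container names are placeholders, and the dataFolderFormat value simply mirrors the pattern example in the table:
+
+```scala
+df.write.format("com.microsoft.cdm")
+  .option("storage", "myAccount.dfs.core.windows.net")
+  .option("manifestPath", "mycontainer/default.manifest.cdm.json")
+  .option("entity", "customer")
+  .option("useSubManifest", "true")                         // reference the entity via a submanifest
+  .option("format", "csv")
+  .option("delimiter", ";")                                 // semicolon-delimited CSV
+  .option("columnHeaders", "false")                         // omit the header row
+  .option("dataFolderFormat", "year \"yyyy\"/month \"MM\"") // per the pattern example above
+  .mode(SaveMode.Append)
+  .save()
+```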
+
+### Save mode
+
+The save mode specifies how existing entity data in the CDM folder is handled when writing a dataframe. Options are to overwrite, append to, or error if data already exists. The default save mode is _ErrorIfExists_.
+
+|**Mode** |**Description**|
+|||
+|SaveMode.Overwrite |Will overwrite the existing entity definition if it's changed and replace existing data partitions with the data partitions being written.|
+|SaveMode.Append |Will append data being written in new partitions alongside the existing partitions.<br/>Note: append doesn't support changing the schema; if the schema of the data being written is incompatible with the existing entity definition an error will be thrown.|
+|SaveMode.ErrorIfExists|Will return an error if partitions already exist.|
+
+See _Folder and file organization_ below for details of how data files are named and organized on write.
+
+## Authentication
+
+There are three modes of authentication that can be used with the Spark CDM Connector to read or write the CDM metadata and data partitions: credential pass-through (managed identity), SAS token, and app registration.
+
+### Credential pass-through
+
+In Synapse, the Spark CDM Connector supports use of [managed identities for Azure resources](/active-directory/managed-identities-azure-resources/overview) to mediate access to the Azure Data Lake Storage account containing the CDM folder. A managed identity is [automatically created for every Synapse workspace](/security/synapse-workspace-managed-identity). The connector uses the managed identity of the workspace that contains the notebook in which the connector is called to authenticate to the storage accounts being addressed.
+
+You must ensure the identity used is granted access to the appropriate storage accounts. Grant **Storage Blob Data Contributor** to allow the library to write to CDM folders, or **Storage Blob Data Reader** to allow only read access. In both cases, no extra connector options are required.
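+
+With credential pass-through, no credential options are passed to the connector at all. As a sketch (placeholder names), a read under the workspace managed identity looks like any other read:
+
+```scala
+// No appId/appKey/tenantId or sasToken options: the Synapse workspace
+// managed identity is used implicitly to access the storage account.
+val df = spark.read.format("com.microsoft.cdm")
+  .option("storage", "myAccount.dfs.core.windows.net")
+  .option("manifestPath", "mycontainer/default.manifest.cdm.json")
+  .option("entity", "Person")
+  .load()
+```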
+
+### SAS token access control options
+
+SAS token credential authentication is an extra option for authenticating to storage accounts. With SAS token authentication, the SAS token can be scoped at the container or folder level. The appropriate permissions are required: reading a manifest or partition only needs read-level support, while writing requires read and write support.
+
+| **Option** |**Description** |**Pattern and example usage** |
+|-||::|
+| sasToken |The SAS token to access the relevant storage account with the correct permissions | \<token\>|
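+
+A sketch of SAS token authentication follows; the token value is a placeholder and would normally come from a secure store such as Azure Key Vault rather than being hard-coded:
+
+```scala
+// Placeholder: a container- or folder-scoped SAS token with at least read permission.
+val sasToken = "<sas-token>"
+
+val df = spark.read.format("com.microsoft.cdm")
+  .option("storage", "myAccount.dfs.core.windows.net")
+  .option("manifestPath", "mycontainer/default.manifest.cdm.json")
+  .option("entity", "Person")
+  .option("sasToken", sasToken)
+  .load()
+```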
+
+### Credential-based access control options
+
+As an alternative to using a managed identity or a user identity, explicit credentials can be provided to enable the Spark CDM connector to access data. In Azure Active Directory, [create an App Registration](/active-directory/develop/quickstart-register-app) and then grant this App Registration access to the storage account using either of the following roles: **Storage Blob Data Contributor** to allow the library to write to CDM folders, or **Storage Blob Data Reader** to allow only read.
+
+Once the role is assigned, you can pass the app ID, app key, and tenant ID to the connector on each call using the options below. It's recommended to use Azure Key Vault to secure these values and ensure they aren't stored in clear text in your notebook file.
+
+| **Option** |**Description** |**Pattern and example usage** |
+|-||::|
+| appId | The app registration ID used to authenticate to the storage account | \<guid\> |
+| appKey | The registered app key or secret | \<encrypted secret\> |
+| tenantId | The Azure Active Directory tenant ID under which the app is registered. | \<guid\> |
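+
+A sketch of credential-based access follows; the values are placeholders and in practice would be retrieved from Azure Key Vault rather than embedded in the notebook:
+
+```scala
+// Placeholders for an app registration granted Storage Blob Data Contributor on the account.
+val appId    = "<application-client-id>"
+val appKey   = "<application-secret>"
+val tenantId = "<azure-ad-tenant-id>"
+
+df.write.format("com.microsoft.cdm")
+  .option("storage", "myAccount.dfs.core.windows.net")
+  .option("manifestPath", "mycontainer/default.manifest.cdm.json")
+  .option("entity", "Event")
+  .option("appId", appId)
+  .option("appKey", appKey)
+  .option("tenantId", tenantId)
+  .mode(SaveMode.Append)
+  .save()
+```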
+
+## Examples
+
+The following examples all use appId, appKey and tenantId variables initialized earlier in the code based on an Azure app registration that has been given Storage Blob Data Contributor permissions on the storage for write and Storage Blob Data Reader permissions for read.
+
+### Read
+
+This code reads the Person entity from the CDM folder with manifest in `mystorage.dfs.core.windows.net/cdmdata/contacts/root.manifest.cdm.json`.
+
+```scala
+val df = spark.read.format("com.microsoft.cdm")
+ .option("storage", "mystorage.dfs.core.windows.net")
+ .option("manifestPath", "cdmdata/contacts/root.manifest.cdm.json")
+ .option("entity", "Person")
+ .load()
+```
+
+### Implicit Write - using dataframe schema only
+
+This code writes the dataframe _df_ to a CDM folder with a manifest to `mystorage.dfs.core.windows.net/cdmdata/Contacts/default.manifest.cdm.json` with an Event entity.
+
+Event data is written as Parquet files, compressed with gzip, that are appended to the folder (new files are added without deleting existing files).
+
+```scala
+
+df.write.format("com.microsoft.cdm")
+ .option("storage", "mystorage.dfs.core.windows.net")
+ .option("manifestPath", "cdmdata/Contacts/default.manifest.cdm.json")
+ .option("entity", "Event")
+ .option("format", "parquet")
+ .option("compression", "gzip")
+ .mode(SaveMode.Append)
+ .save()
+```
+
+### Explicit Write - using an entity definition stored in ADLS
+
+This code writes the dataframe _df_ to a CDM folder with the manifest at `https://mystorage.dfs.core.windows.net/cdmdata/Contacts/root.manifest.cdm.json` with the entity Person. Person data is written as new CSV files (by default), which overwrite existing files in the folder. The Person entity definition is retrieved from `https://mystorage.dfs.core.windows.net/models/cdmmodels/core/Contacts/Person.cdm.json`.
+
+```scala
+df.write.format("com.microsoft.cdm")
+ .option("storage", "mystorage.dfs.core.windows.net")
+ .option("manifestPath", "cdmdata/contacts/root.manifest.cdm.json")
+ .option("entity", "Person")
+ .option("entityDefinitionModelRoot", "cdmmodels/core")
+ .option("entityDefinitionPath", "/Contacts/Person.cdm.json/Person")
+ .mode(SaveMode.Overwrite)
+ .save()
+```
+
+### Explicit Write - using an entity defined in the CDM GitHub
+
+This code writes the dataframe _df_ to a CDM folder with the manifest at `https://mystorage.dfs.core.windows.net/cdmdata/Teams/root.manifest.cdm.json` and a submanifest containing the TeamMembership entity, created in a TeamMembership subdirectory. TeamMembership data is written to CSV files (the default) that overwrite any existing data files. The TeamMembership entity definition is retrieved from the CDM CDN, at:
+[https://cdm-schema.microsoft.com/logical/core/applicationCommon/TeamMembership.cdm.json](https://cdm-schema.microsoft.com/logical/core/applicationCommon/TeamMembership.cdm.json)
+
+```scala
+df.write.format("com.microsoft.cdm")
+ .option("storage", "mystorage.dfs.core.windows.net")
+ .option("manifestPath", "cdmdata/Teams/root.manifest.cdm.json")
+ .option("entity", "TeamMembership")
+ .option("useCdmStandardModelRoot", true)
+ .option("entityDefinitionPath", "core/applicationCommon/TeamMembership.cdm.json/Tea
+mMembership")
+ .option("useSubManifest", true)
+ .mode(SaveMode.Overwrite)
+ .save()
+```
+
+## Other considerations
+
+### Spark to CDM datatype mapping
+
+The following datatype mappings are applied when converting CDM to/from Spark.
+
+|**Spark** |**CDM**|
+|||
+|ShortType|SmallInteger|
+|IntegerType|Integer|
+|LongType |BigInteger|
+|DateType |Date|
+|Timestamp|DateTime (optionally Time, see below)|
+|StringType|String|
+|DoubleType|Double|
+|DecimalType(x,y)|Decimal (x,y) (default scale and precision are 18,4)|
+|FloatType|Float|
+|BooleanType|Boolean|
+|ByteType|Byte|
+
+The CDM Binary datatype isn't supported.
+
+### Handling CDM Date, DateTime, and DateTimeOffset data
+
+CDM Date and DateTime datatype values are handled as normal for Spark and Parquet, and in CSV are read/written in ISO 8601 format.
+
+CDM _DateTime_ datatype values are _interpreted as UTC_, and in CSV written in ISO 8601 format, for example,
+2020-03-13 09:49:00Z.
+
+CDM _DateTimeOffset_ values intended for recording local time instants are handled differently in Spark and Parquet than in CSV. While CSV and other formats can express a local time instant as a structure comprising a datetime and a UTC offset, formatted in CSV like 2020-03-13 09:49:00-08:00, Parquet and Spark don't support such structures. Instead, they use a TIMESTAMP datatype that allows an instant to be recorded in UTC time (or in an unspecified time zone).
+
+The Spark CDM connector will convert a DateTimeOffset value in CSV to a UTC timestamp. This will be persisted as a Timestamp in Parquet and, if subsequently persisted to CSV, the value will be serialized as a DateTimeOffset with a +00:00 offset. Importantly, there's no loss of temporal accuracy: the serialized values represent the same instant as the original values, although the offset is lost. Spark systems use their system time as the baseline and normally express time using that local time. UTC times can always be computed by applying the local system offset. For Azure systems in all regions, system time is always UTC, so all timestamp values will normally be in UTC.
+
+As Azure system values are always UTC, when using implicit write, where a CDM definition is derived from a dataframe, timestamp columns are translated to attributes with CDM DateTime datatype, which implies a UTC time.
+
+If it's important to persist a local time and the data will be processed in Spark or persisted in parquet,
+then it's recommended to use a DateTime attribute and keep the offset in a separate attribute, for
+example as a signed integer value representing minutes. In CDM, DateTime values are UTC, so the
+offset must be applied when needed to compute local time.
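+
+A minimal sketch of this approach, with hypothetical column names: keep the UTC instant in a timestamp column (which maps to a CDM DateTime) and keep the local offset alongside it as minutes.
+
+```scala
+import java.sql.Timestamp
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.types._
+
+// Hypothetical schema: the UTC instant plus the local offset in minutes.
+val schema = StructType(List(
+  StructField("eventTimeUtc", TimestampType, true),        // maps to CDM DateTime (UTC)
+  StructField("eventOffsetMinutes", IntegerType, true)))   // e.g. -480 for UTC-08:00
+
+val rows = Seq(Row(Timestamp.valueOf("2020-03-13 17:49:00"), -480))
+val df = spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)
+```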
+
+In most cases, persisting local time isn't important. Local times are often only required in a UI for user convenience and based on the user's time zone, so not storing a local time is often a better solution.
+
+### Handling CDM time data
+
+Spark doesn't support an explicit Time datatype. An attribute with the CDM _Time_ datatype is represented in a Spark dataframe as a column with a Timestamp datatype. When a time value is read, the timestamp in the dataframe is initialized with the Spark epoch date 01/01/1970 plus the time value as read from the source.
+
+When using explicit write, a timestamp column can be mapped to either a DateTime or Time attribute. If a timestamp is mapped to a Time attribute, the date portion of the timestamp is stripped off.
+
+When using implicit write, a Timestamp column is mapped by default to a DateTime attribute. To map a timestamp column to a Time attribute, you must add a metadata object to the column in the dataframe that indicates that the timestamp should be interpreted as a time value. The code below shows how this is done in Scala.
+
+```scala
+import org.apache.spark.sql.types._
+
+// Mark the timestamp column so that, on implicit write, it maps to a CDM Time attribute.
+val md = new MetadataBuilder().putString("dataType", "Time").build()
+val schema = StructType(List(
+  StructField("ATimeColumn", TimestampType, true, md)))
+```
+
+### Time value accuracy
+
+The Spark CDM Connector supports time values, in either DateTime or Time attributes, with seconds having up to six decimal places, based on the format of the data in the file being read (CSV or Parquet) or as defined in the dataframe. This enables accuracy from whole seconds down to microseconds.
+
+### Folder and file naming and organization
+
+When writing CDM folders, the default folder organization illustrated below is used.
+
+By default, data files are written into folders created for the current date, named like '2010-07-31'. The folder structure and names can be customized using the dataFolderFormat option, described earlier.
+
+Data file names are based on the following pattern: \<entity\>-\<jobid\>-*.\<fileformat\>.
+
+The number of data partitions written can be controlled using the sparkContext.parallelize() method. The number of partitions is either determined by the number of executors in the Spark cluster or can be specified explicitly. The Scala example below creates a dataframe with two partitions.
+
+```scala
+val df= spark.createDataFrame(spark.sparkContext.parallelize(data, 2), schema)
+```
+
+**Explicit Write** (defined by a referenced entity definition)
+
+```text
++-- <CDMFolder>
+ |-- default.manifest.cdm.json << with entity ref and partition info
+ +-- <Entity>
+ |-- <entity>.cdm.json << resolved physical entity definition
+ |-- <data folder>
+ |-- <data folder>
+ +-- ...
+```
+
+**Explicit Write with sub-manifest:**
+
+```text
++-- <CDMFolder>
+ |-- default.manifest.cdm.json << contains reference to sub-manifest
+ +-- <Entity>
+ |-- <entity>.cdm.json
+ |-- <entity>.manifest.cdm.json << sub-manifest with partition info
+ |-- <data folder>
+ |-- <data folder>
+ +-- ...
+```
+
+**Implicit (entity definition is derived from dataframe schema):**
+
+```text
++-- <CDMFolder>
+ |-- default.manifest.cdm.json
+ +-- <Entity>
+ |-- <entity>.cdm.json << resolved physical entity definition
+ +-- LogicalDefinition
+ | +-- <entity>.cdm.json << logical entity definition(s)
+ |-- <data folder>
+ |-- <data folder>
+ +-- ...
+```
+
+**Implicit Write with sub-manifest:**
+
+```text
++-- <CDMFolder>
+ |-- default.manifest.cdm.json << contains reference to sub-manifest
+ +-- <Entity>
+ |-- <entity>.cdm.json << resolved physical entity definition
+ |-- <entity>.manifest.cdm.json << sub-manifest with reference to the entity and partition info
+ +-- LogicalDefinition
+ | +-- <entity>.cdm.json << logical entity definition(s)
+ |-- <data folder>
+ |-- <data folder>
+ +-- ...
+```
+
+## Samples
+
+See https://github.com/Azure/spark-cdm-connector/tree/master/samples for sample code and CDM files.
+
+## Troubleshooting and known issues
+
+* Ensure the decimal precision and scale of decimal data type fields used in the dataframe match the data type used in the CDM entity definition; this requires that precision and scale traits are defined on the data type (see the sketch after this list). If the precision and scale aren't defined explicitly in CDM, the default used is Decimal(18,4). For model.json files, Decimal is assumed to be Decimal(18,4).
+* Folder and file names used in the following options shouldn't include spaces or special characters, such as "=": manifestPath, entityDefinitionModelRoot, entityDefinitionPath, and dataFolderFormat.
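+
+The following sketch illustrates the first point; the attribute name and precision/scale are hypothetical and must match whatever the CDM entity definition declares (or Decimal(18,4) when nothing is declared):
+
+```scala
+import org.apache.spark.sql.types._
+
+// The dataframe column's precision and scale must match the CDM attribute's
+// precision and scale traits; Decimal(18,4) is the default when none are defined.
+val schema = StructType(List(
+  StructField("unitPrice", DecimalType(18, 4), true)))
+```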
+
+## Unsupported features
+
+The following features aren't yet supported:
+
+* Overriding a timestamp column to be interpreted as a CDM Time rather than a DateTime is initially supported for CSV files only. Support for writing Time data to Parquet will be added in a later release.
+* The Parquet Map type, arrays of primitive types, and arrays of array types aren't currently supported by CDM, so they aren't supported by the Spark CDM Connector.
+
+## Next steps
+
+You can now look at the other Apache Spark connectors:
+
+* [Apache Spark Kusto connector](apache-spark-kusto-connector.md)
+* [Apache Spark SQL connector](apache-spark-sql-connector.md)
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Title: Built-in policy definitions for Azure virtual machine scale sets description: Lists Azure Policy built-in policy definitions for Azure virtual machine scale sets. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
virtual-machines Disks Pools Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools-deploy.md
storagePoolObjectId=$(az ad sp list --filter "displayName eq 'StoragePool Resour
storagePoolObjectId="${storagePoolObjectId%"}" storagePoolObjectId="${storagePoolObjectId#"}"
-az role assignment create --assignee-object-id $storagePoolObjectId --role "Virtual Machine Contributor" --resource-group $resourceGroupName
+az role assignment create --assignee-object-id $storagePoolObjectId --role "Virtual Machine Contributor" --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName --resource-group $resourceGroupName
#Create a disk pool #To create a disk pool configured for ultra disks, add --additional-capabilities "DiskPool.Disk.Sku.UltraSSD_LRS" to your command
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generation-2.md
Generation 2 VMs support the following Marketplace images:
* SUSE Linux Enterprise Server 15 SP3, SP2 * SUSE Linux Enterprise Server 12 SP4 * Ubuntu Server 21.04 LTS, 20.04 LTS, 18.04 LTS, 16.04 LTS
-* RHEL 8.4, 8.3, 8.2, 8.1, 8.0, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0
+* RHEL 8.5, 8.4, 8.3, 8.2, 8.1, 8.0, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0
* Cent OS 8.4, 8.3, 8.2, 8.1, 8.0, 7.7, 7.6, 7.5, 7.4 * Oracle Linux 8.4 LVM, 8.3 LVM, 8.2 LVM, 8.1, 7.9 LVM, 7.9, 7.8, 7.7
virtual-machines Nda100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nda100-v4-series.md
Title: ND A100 v4-series
description: Specifications for the ND A100 v4-series VMs. ++ Last updated 05/26/2021
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
To enable Host_Mem(SB) (up to 1 Gb RAM): sudo xbutil host_mem --enable --size 1
<br> To disable Host_Mem(SB): sudo xbutil host_mem --disable <br>
-Starting on XRT2021.1, OnPrem FPGA in Linux exposes [M2M data transfer](https://xilinx.github.io/XRT/master/html/m2m.html)
+
+<br>
+Starting on XRT2021.1:
+
+OnPrem FPGA in Linux exposes
+[M2M data transfer](https://xilinx.github.io/XRT/master/html/m2m.html).
+<br>
This feature is not supported in Azure NP VMs.
-<p>
**Q:** Can I run xbmgmt commands?
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
virtual-machines Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Virtual Machines description: Sample Azure Resource Graph queries for Azure Virtual Machines showing use of resource types and tables to access Azure Virtual Machines related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
virtual-machines Automation Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-control-plane.md
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscrip
Optionally assign the following permissions to the Service Principal: ```azurecli
-az role assignment create --assignee <appId> --role "User Access Administrator"
+az role assignment create --assignee <appId> --role "User Access Administrator" --scope /subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>
``` ## Deploy the control plane
virtual-machines Automation Manual Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-manual-deployment.md
The deployer uses a service principal to deploy resources into a subscription.
1. Create a role assignment for the service principal. Make sure to replace `<appId>` with the application identifier you noted in the previous step. ```azurecli-interactive
- az role assignment create --assignee <appId> --role "User Access Administrator"
+ az role assignment create --assignee <appId> --role "User Access Administrator" --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
``` 1. Add keys for the service principal to the key vault as follows. Be sure to replace the placeholder values with the information you noted in previous steps. Replace `<environment>` with the name of your environment, such as `DEMO`.
virtual-machines Automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-plan-deployment.md
Create your service principal:
``` 1. Optionally assign the User Access Administrator role to your service principal. For example: ```azurecli
- az role assignment create --assignee <your-application-ID> --role "User Access Administrator"
+ az role assignment create --assignee <your-application-ID> --role "User Access Administrator" --scope /subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group-name>
``` For more information, see [the Azure CLI documentation for creating a service principal](/cli/azure/create-an-azure-service-principal-azure-cli)
virtual-machines Automation Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-tutorial.md
The SAP automation deployment framework uses service principals for deployment.
export appId="<appId>" az role assignment create --assignee ${appId} \
- --role "User Access Administrator"
+ --role "User Access Administrator" \
+ --scope /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}
```
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
The following distributions are supported out of the box from the Azure Gallery:
Accelerated Networking is supported on most general purpose and compute-optimized instance sizes with 2 or more vCPUs. On instances that support hyperthreading, Accelerated Networking is supported on VM instances with 4 or more vCPUs.
-Support for Accelerated Networking can be found in the individual [virtual machine sizes](../virtual-machines/sizes.md) documentation.
+Support for Accelerated Networking can be found in the individual [virtual machine sizes](../virtual-machines/sizes.md) documentation.
-### Custom images
+The list of Virtual Machine SKUs that support Accelerated Networking can be queried directly via the following Azure CLI [`az vm list-skus`](/cli/azure/vm?view=azure-cli-latest#az-vm-list-skus) command.
+
+```azurecli-interactive
+az vm list-skus \
+ --location westus \
+ --all true \
+ --resource-type virtualMachines \
+ --query '[].{size:size, name:name, acceleratedNetworkingEnabled: capabilities[?name==`AcceleratedNetworkingEnabled`].value | [0]}' \
+ --output table
+```
+### Custom images
If you're using a custom image and your image supports Accelerated Networking, make sure that you have the required drivers to work with Mellanox ConnectX-3, ConnectX-4 Lx, and ConnectX-5 NICs on Azure. Also, Accelerated Networking requires network configurations that exempt the configuration of the virtual functions (mlx4_en and mlx5_core drivers). In images that have cloud-init >=19.4, networking is correctly configured to support Accelerated Networking during provisioning.
Virtual machines (classic) can't be deployed with accelerated networking.
* Learn how to [create a VM with Accelerated Networking in PowerShell](./create-vm-accelerated-networking-powershell.md) * Learn how to [create a VM with Accerelated Networking using Azure CLI](./create-vm-accelerated-networking-cli.md) * Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)-
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
vf_tx_dropped: 0
Accelerated Networking is now enabled for your VM. ## Handle dynamic binding and revocation of virtual function
-Applications must run over the synthetic NIC that is exposed in VM. If the application runs directly over the VF NIC, it doesn't receive **all** packets that are destined to the VM, since some packets show up over the synthetic interface.
-If you run an application over the synthetic NIC, it guarantees that the application receives **all** packets that are destined to it. It also makes sure that the application keeps running, even if the VF is revoked during host servicing.
-Applications binding to the synthetic NIC is a **mandatory** requirement for all applications taking advantage of **Accelerated Networking**.
+Applications must run over the synthetic NIC that is exposed in VM. If the application runs directly over the VF NIC, it doesn't receive **all** packets that are destined to the VM, since some packets show up over the synthetic interface. If you run an application over the synthetic NIC, it guarantees that the application receives **all** packets that are destined to it. It also makes sure that the application keeps running, even if the VF is revoked during host servicing. Applications binding to the synthetic NIC is a **mandatory** requirement for all applications taking advantage of **Accelerated Networking**.
+
+For more details on application binding requirements, see [How Accelerated Networking works in Linux and FreeBSD VMs](/azure/virtual-network/accelerated-networking-how-it-works#application-usage).
## Enable Accelerated Networking on existing VMs If you've created a VM without Accelerated Networking, it's possible to enable this feature on an existing VM. The VM must support Accelerated Networking by meeting the following prerequisites that are also outlined:
virtual-network Create Vm Accelerated Networking Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-powershell.md
ms.devlang: na
vm-windows Previously updated : 02/15/2022 Last updated : 03/22/2022
## VM creation using the portal
-Though this article provides steps to create a VM with accelerated networking using Azure PowerShell, you can also [use the Azure portal to create a virtual machine](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that enables accelerated networking. When you create a VM in the portal, in the **Create a virtual machine** page, choose the **Networking** tab. This tab has an option for **Accelerated networking**. If you have chosen a [supported operating system](./accelerated-networking-overview.md#supported-operating-systems) and [VM size](./accelerated-networking-overview.md#supported-vm-instances), this option is automatically set to **On**. Otherwise, the option is set to **Off**, and Azure displays the reason why it can't be enabled.
+Though this article provides steps to create a VM with accelerated networking using Azure PowerShell, you can also use the Azure portal to create a virtual machine that enables accelerated networking. When [creating a VM in the Azure Portal](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json), in the **Create a virtual machine** page, choose the **Networking** tab. This tab has an option for **Accelerated networking**. If you have chosen a [supported operating system](./accelerated-networking-overview.md#supported-operating-systems) and [VM size](./accelerated-networking-overview.md#supported-vm-instances), this option is automatically set to **On**. Otherwise, the option is set to **Off**, and Azure displays the reason why it can't be enabled.
You can also enable or disable accelerated networking through the portal after VM creation by navigating to the network interface and clicking the button at the top of the **Overview** blade. > [!NOTE]
virtual-network Deploy Container Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking.md
-
+ Title: Deploy Azure virtual network container networking | Microsoft Docs description: Learn how to deploy the Azure Virtual Network container network interface (CNI) plug-in for Kubernetes clusters.
The json example that follows is for a cluster with the following properties:
"vmSize": "Standard_A2", "vnetSubnetId": "/subscriptions/<subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.Network/virtualNetworks/<Vnet Name>/subnets/KubeClusterSubnet", "firstConsecutiveStaticIP": "10.0.1.50", --> IP address allocated to the Master node
-"vnetCidr": "10.0.0.0/16" --> Virtual network address space
+ "vnetCidr": "10.0.0.0/16" --> Virtual network address space
}, "agentPoolProfiles": [ {
The CNI network configuration file is described in JSON format. It is, by defaul
Download the plug-in from [GitHub](https://github.com/Azure/azure-container-networking/releases). Download the latest version for the platform that you're using: -- **Linux**: [azure-vnet-cni-linux-amd64-\<version no.\>.tgz](https://github.com/Azure/azure-container-networking/releases/download/v1.0.12-rc3/azure-vnet-cni-linux-amd64-v1.0.12-rc3.tgz)-- **Windows**: [azure-vnet-cni-windows-amd64-\<version no.\>.zip](https://github.com/Azure/azure-container-networking/releases/download/v1.0.12-rc3/azure-vnet-cni-windows-amd64-v1.0.12-rc3.zip)
+- **Linux**: [azure-vnet-cni-linux-amd64-\<version no.\>.tgz](https://github.com/Azure/azure-container-networking/releases/download/v1.4.20/azure-vnet-cni-linux-amd64-v1.4.20.tgz)
+- **Windows**: [azure-vnet-cni-windows-amd64-\<version no.\>.zip](https://github.com/Azure/azure-container-networking/releases/download/v1.4.20/azure-vnet-cni-windows-amd64-v1.4.20.zip)
+
+Copy the install script for [Linux](https://github.com/Azure/azure-container-networking/blob/master/scripts/install-cni-plugin.sh) or [Windows](https://github.com/Azure/azure-container-networking/blob/master/scripts/Install-CniPlugin.ps1) to your computer. Save the script to a `scripts` directory on your computer and name the file `install-cni-plugin.sh` for Linux, or `install-cni-plugin.ps1` for Windows.
-Copy the install script for [Linux](https://github.com/Azure/azure-container-networking/blob/master/scripts/install-cni-plugin.sh) or [Windows](https://github.com/Azure/azure-container-networking/blob/master/scripts/Install-CniPlugin.ps1) to your computer. Save the script to a `scripts` directory on your computer and name the file `install-cni-plugin.sh` for Linux, or `install-cni-plugin.ps1` for Windows. To install the plug-in, run the appropriate script for your platform, specifying the version of the plug-in you are using. For example, you might specify *v1.0.12-rc3*:
+To install the plug-in, run the appropriate script for your platform, specifying the version of the plug-in you are using. For example, you might specify *v1.4.20*. For the Linux install, you'll also need to provide an appropriate [CNI plugin version](https://github.com/containernetworking/plugins/releases), such as *v1.0.1*:
```bash
- \$scripts/install-cni-plugin.sh [version]
+ scripts/install-cni-plugin.sh [azure-cni-plugin-version] [cni-plugin-version]
``` ```powershell
- scripts\\ install-cni-plugin.ps1 [version]
+ scripts\\ install-cni-plugin.ps1 [azure-cni-plugin-version]
```
-The script installs the plug-in under `/opt/cni/bin` for Linux and `c:\cni\bin` for Windows. The installed plug-in comes with a simple network configuration file that works after installation. It doesn't need to be updated. To learn more about the settings in the file, see [CNI network configuration file](#cni-network-configuration-file).
+The script installs the plug-in under `/opt/cni/bin` for Linux and `c:\cni\bin` for Windows. The installed plug-in comes with a simple network configuration file that works after installation. It doesn't need to be updated. To learn more about the settings in the file, see [CNI network configuration file](#cni-network-configuration-file).
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
All Azure regions support DPDK.
Accelerated networking must be enabled on a Linux virtual machine. The virtual machine should have at least two network interfaces, with one interface for management. Enabling Accelerated networking on management interface is not recommended. Learn how to [create a Linux virtual machine with accelerated networking enabled](create-vm-accelerated-networking-cli.md).
+On virtual machines that are using InfiniBand, ensure the appropriate `mlx4_ib` or `mlx5_ib` drivers are loaded, see [Enable InfiniBand](/azure/virtual-machines/workloads/hpc/enable-infiniband).
+ ## Install DPDK via system package (recommended) ### Ubuntu 18.04
vpn-gateway Create Routebased Vpn Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-routebased-vpn-gateway-powershell.md
Previously updated : 09/02/2020 Last updated : 03/11/2022
The steps in this article will create a VNet, a subnet, a gateway subnet, and a
## Create a resource group
-Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed. Create a resource group. If you are running PowerShell locally, open your PowerShell console with elevated privileges and connect to Azure using the `Connect-AzAccount` command.
+Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed. If you are running PowerShell locally, open your PowerShell console with elevated privileges and connect to Azure using the `Connect-AzAccount` command.
```azurepowershell-interactive New-AzResourceGroup -Name TestRG1 -Location EastUS