Updates from: 03/12/2022 02:09:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Previously updated : 02/15/2022 Last updated : 03/11/2022
For the first test scenario, configure the authentication policy where the Issue
### Test multifactor authentication
-For the next test scenario, configure the authentication policy where the Issuer subject rule satisfies multifactor authentication.
+For the next test scenario, configure the authentication policy where the **policyOID** rule satisfies multifactor authentication.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/multifactor.png" alt-text="Screenshot of the Authentication policy configuration showing multifactor authentication required." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/multifactor.png":::
active-directory Cloudknox Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-aws.md
Previously updated : 03/09/2022 Last updated : 03/10/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-> [!Note]
-> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
This article describes how to onboard an Amazon Web Services (AWS) account on CloudKnox Permissions Management (CloudKnox).

> [!NOTE]
> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable CloudKnox on your Azure Active Directory tenant](cloudknox-onboard-enable-tenant.md).
-## Prerequisites
-
-- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).
-- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).

## View a training video on configuring and onboarding an AWS account
active-directory Cloudknox Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-azure.md
Previously updated : 03/09/2022 Last updated : 03/10/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-> [!Note]
-> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
- This article describes how to onboard a Microsoft Azure subscription or subscriptions on CloudKnox Permissions Management (CloudKnox). Onboarding a subscription creates a new authorization system to represent the Azure subscription in CloudKnox.

> [!NOTE]
This article describes how to onboard a Microsoft Azure subscription or subscrip
To add CloudKnox to your Azure AD tenant:

- You must have an Azure AD user account and an Azure command-line interface (Azure CLI) on your system, or an Azure subscription. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
- You must have **Microsoft.Authorization/roleAssignments/write** permission at the subscription or management group scope to perform these tasks. If you don't have this permission, you can ask someone who has this permission to perform these tasks for you.
-- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).
-- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).
+

## View a training video on enabling CloudKnox in your Azure AD tenant
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
Previously updated : 03/09/2022 Last updated : 03/10/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-> [!Note]
-> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
This article describes how to enable CloudKnox Permissions Management (CloudKnox) in your organization. Once you've enabled CloudKnox, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms.
To enable CloudKnox in your organization:
- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
- You must be eligible for or have an active assignment to the global administrator role as a user in that tenant.
-- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).
-- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).
+

> [!NOTE]
> During public preview, CloudKnox doesn't perform a license check.
active-directory Cloudknox Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-gcp.md
Previously updated : 02/24/2022 Last updated : 03/10/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-> [!Note]
-> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
This article describes how to onboard a Google Cloud Platform (GCP) project on CloudKnox Permissions Management (CloudKnox).

> [!NOTE]
> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable CloudKnox on your Azure Active Directory tenant](cloudknox-onboard-enable-tenant.md).
-## Prerequisites
-
-- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).
-- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).

## Onboard a GCP project
active-directory Cloudknox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-overview.md
Previously updated : 02/23/2022 Last updated : 03/10/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-> [!Note]
-> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
-
## Overview

CloudKnox Permissions Management (CloudKnox) is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into the permissions assigned to all identities (for example, over-privileged workload and user identities), actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
active-directory Migrate Spa Implicit To Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-spa-implicit-to-auth-code.md
For additional changes you might need to make to your code, see the [migration g
## Disable implicit grant settings
-Once you've updated all your production applications that use this app registration and its client ID to MSAL 2.x and the authorization code flow, you should uncheck the implicit grant settings in the app registration.
+Once you've updated all your production applications that use this app registration and its client ID to MSAL 2.x and the authorization code flow, you should uncheck the implicit grant settings under the **Authentication** menu of the app registration.
When you uncheck the implicit grant settings in the app registration, the implicit flow is disabled for all applications that use the registration and its client ID.
active-directory Msal Js Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-sso.md
When your application is open in multiple tabs and you first sign in the user on
By default, MSAL.js uses `sessionStorage`, which doesn't allow the session to be shared between tabs. To get SSO between tabs, make sure to set the `cacheLocation` in MSAL.js to `localStorage` as shown below.

```javascript
const config = {
  auth: {
    clientId: "abcd-ef12-gh34-ikkl-ashdjhlhsdg",
  },
  cache: {
    cacheLocation: "localStorage",
  },
};
-const myMSALObj = new UserAgentApplication(config);
+const msalInstance = new msal.PublicClientApplication(config);
```

## SSO between apps
var request = {
sid: sid, };
-userAgentApplication
- .acquireTokenSilent(request)
+ msalInstance.acquireTokenSilent(request)
.then(function (response) { const token = response.accessToken; })
var request = {
extraQueryParameters: { domain_hint: "organizations" }, };
-userAgentApplication.loginRedirect(request);
+ msalInstance.loginRedirect(request);
```

You can get the values for **login_hint** and **domain_hint** by reading the claims returned in the ID token for the user.
To get the values for login_hint and domain_hint by reading the claims returned
- **domain_hint** is only required to be passed when using the /common authority. The domain hint is determined by the tenant ID (`tid`). If the `tid` claim in the ID token is `9188040d-6c67-4c5b-b112-36a304b66dad`, the domain hint is `consumers`. Otherwise, it's `organizations`.
-For more information about **login_hint** and **domain_hint**, see [Implicit grant flow](v2-oauth2-implicit-grant-flow.md).
+For more information about **login_hint** and **domain_hint**, see [auth code grant](v2-oauth2-auth-code-flow.md).
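The `tid` check described above is deterministic, so it can be sketched as a small helper (a hypothetical function, not part of MSAL.js):

```javascript
// Hypothetical helper: derive the domain_hint to send with a sign-in request
// from the `tid` claim of a previously issued ID token.
// 9188040d-6c67-4c5b-b112-36a304b66dad is the well-known tenant ID shared by
// all Microsoft personal (consumer) accounts.
const CONSUMERS_TENANT_ID = "9188040d-6c67-4c5b-b112-36a304b66dad";

function getDomainHint(idTokenClaims) {
  return idTokenClaims.tid === CONSUMERS_TENANT_ID ? "consumers" : "organizations";
}
```

You would pass the returned value as `domain_hint` in `extraQueryParameters` only when signing in against the /common authority.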
## SSO without MSAL.js login
var request = {
extraQueryParameters: { domain_hint: "organizations" }, };
-userAgentApplication
- .acquireTokenSilent(request)
+msalInstance.acquireTokenSilent(request)
.then(function (response) { const token = response.accessToken; })
MSAL.js brings feature parity with ADAL.js for Azure AD authentication scenarios
To take advantage of the SSO behavior when updating from ADAL.js, you'll need to ensure the libraries are using `localStorage` for caching tokens. Set the `cacheLocation` to `localStorage` in both the MSAL.js and ADAL.js configuration at initialization as follows:

```javascript
// In ADAL.js
window.config = {
  clientId: "g075edef-0efa-453b-997b-de1337c29185",
const config = {
}, };
-const myMSALObj = new UserAgentApplication(config);
+const msalInstance = new msal.PublicClientApplication(config);
```

Once the `cacheLocation` is configured, MSAL.js can read the cached state of the authenticated user in ADAL.js and use that to provide SSO in MSAL.js.
active-directory Secure Group Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/secure-group-access-control.md
+
+ Title: Secure access control using groups in Azure AD - Microsoft identity platform
+description: Learn about how groups are used to securely control access to resources in Azure AD.
+Last updated : 2/21/2022
+# Customer intent: As a developer, I want to learn how to most securely use Azure AD groups to control access to resources.
++
+# Secure access control using groups in Azure AD
+
+Azure Active Directory (Azure AD) allows the use of groups to manage access to resources in an organization. You should use groups for access control when you want to manage and minimize access to applications. When groups are used, only members of those groups can access the resource. Using groups also allows you to benefit from several Azure AD group management features, such as attribute-based dynamic groups, external groups synced from on-premises Active Directory, and Administrator managed or self-service managed groups. To learn more about the benefits of groups for access control, see [manage access to an application](../manage-apps/what-is-access-management.md).
+
+While developing an application, you can authorize access with the [groups claim](/graph/api/resources/application?view=graph-rest-1.0#properties&preserve-view=true). To learn more, see how to [configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
+
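As a minimal sketch of what authorizing with the groups claim can look like (the group ID and function name below are hypothetical, and a real application must validate the token's signature before trusting its claims):

```javascript
// Hypothetical object ID of the group that gates access to the resource.
const REQUIRED_GROUP_ID = "11111111-2222-3333-4444-555555555555";

// Returns true when the validated token's `groups` claim contains the
// required group. If the claim is absent (for example, because of group
// overage), the caller should fall back to querying Microsoft Graph.
function isMemberOfRequiredGroup(tokenClaims) {
  const groups = tokenClaims.groups || [];
  return groups.includes(REQUIRED_GROUP_ID);
}
```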
+Today, many applications select a subset of groups with the *securityEnabled* flag set to *true* to avoid scale challenges, that is, to reduce the number of groups returned in the token. Setting the *securityEnabled* flag to *true* for a group doesn't guarantee that the group is securely managed. Therefore, we suggest following the best practices described below:
++
+## Best practices to mitigate risk
+
+This table presents several security best practices for security groups and the potential security risks each practice mitigates.
+
+|Security best practice |Security risk mitigated |
+|--|--|
+|**Ensure resource owner and group owner are the same principal**. Applications should build their own group management experience and create new groups to manage access. For example, an application can create groups with the *Group.Create* permission and add itself as the owner of the group. This way the application has control over its groups without being over-privileged to modify other groups in the tenant.|When group owners and resource owners are different users or entities, group owners can add users to the group who aren't supposed to get access to the resource and thus give access to the resource unintentionally.|
+|**Build an implicit contract between resource owner(s) and group owner(s)**. The resource owner and the group owner should align on the group purpose, policies, and members that can be added to the group to get access to the resource. This level of trust is non-technical and relies on human or business contract.|When group owners and resource owners have different intentions, the group owner may add users to the group the resource owner didn't intend on giving access to. This can result in unnecessary and potentially risky access.|
+|**Use private groups for access control**. Microsoft 365 groups are managed by the [visibility concept](/graph/api/resources/group?view=graph-rest-1.0#group-visibility-options&preserve-view=true). This property controls the join policy of the group and visibility of group resources. Security groups have join policies that either allow anyone to join or require owner approval. On-premises-synced groups can also be public or private. When they're used to give access to a resource in the cloud, users joining this group on-premises can get access to the cloud resource as well.|When you use a *Public* group for access control, any member can join the group and get access to the resource. When a *Public* group is used to give access to an external resource, the risk of elevation of privilege exists.|
+|**Group nesting**. When you use a group for access control and it has other groups as its members, members of the subgroups can get access to the resource. In this case, there are multiple group owners - owners of the parent group and the subgroups.|Aligning with multiple group owners on the purpose of each group and how to add the right members to these groups is more complex and more prone to accidental grant of access. Therefore, you should limit the number of nested groups or don't use them at all if possible.|
+
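The first best practice in the table, where an application creates and owns its own access-control group, can be sketched as the Microsoft Graph request body such an app might send with the *Group.Create* permission (the display name, mail nickname, and object ID below are placeholders, and this is an illustration rather than a complete implementation):

```javascript
// Hypothetical helper that builds the body for POST /v1.0/groups, creating a
// security group with the calling app's service principal as its owner.
function buildOwnedGroupRequest(displayName, mailNickname, appServicePrincipalId) {
  return {
    displayName: displayName,
    mailNickname: mailNickname,
    mailEnabled: false,
    securityEnabled: true,
    // Binding the app as owner keeps group and resource ownership aligned.
    "owners@odata.bind": [
      `https://graph.microsoft.com/v1.0/directoryObjects/${appServicePrincipalId}`,
    ],
  };
}
```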
+## Next steps
+
+For more information about groups in Azure AD, see the following:
+
+- [Manage app and resource access using Azure Active Directory groups](../fundamentals/active-directory-manage-groups.md)
+- [Access with Azure Active Directory groups](/azure/devops/organizations/accounts/manage-azure-active-directory-groups)
+- [Restrict your Azure AD app to a set of users in an Azure AD tenant](./howto-restrict-your-app-to-a-set-of-users.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For on-premises environments, users with this role can configure domain names fo
In February 2021, we added the following 37 new applications in our App gallery with Federation support:
-[Loop Messenger Extension](https://loopworks.com/loop-flow-messenger/), [Silverfort Azure AD Adapter](http://www.silverfort.com/), [Interplay Learning](https://skilledtrades.interplaylearning.com/#login), [Nura Space](https://dashboard.nuraspace.com/login), [Yooz EU](https://eu1.getyooz.com/?kc_idp_hint=microsoft), [UXPressia](https://uxpressia.com/users/sign-in), [introDus Pre- and Onboarding Platform](http://app.introdus.dk/login), [Happybot](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=34353e1e-dfe5-4d2f-bb09-2a5e376270c8&response_type=code&redirect_uri=https://api.happyteams.io/microsoft/integrate&response_mode=query&scope=offline_access%20User.Read%20User.Read.All), [LeaksID](https://app.leaksid.com/), [ShiftWizard](http://www.shiftwizard.com/), [PingFlow SSO](https://app.pingview.io/), [Swiftlane](https://admin.swiftlane.com/login), [Quasydoc SSO](https://www.quasydoc.eu/login), [Fenwick Gold Account](https://businesscentral.dynamics.com/), [SeamlessDesk](https://www.seamlessdesk.com/login), [Learnsoft LMS & TMS](http://www.learnsoft.com/), [P-TH+](https://p-th.jp/), [myViewBoard](https://api.myviewboard.com/auth/microsoft/), [Tartabit IoT Bridge](https://bridge-us.tartabit.com/), [AKASHI](../saas-apps/akashi-tutorial.md), [Rewatch](../saas-apps/rewatch-tutorial.md), [Zuddl](../saas-apps/zuddl-tutorial.md), [Parkalot - Car park management](../saas-apps/parkalot-car-park-management-tutorial.md), [HSB ThoughtSpot](../saas-apps/hsb-thoughtspot-tutorial.md), [IBMid](../saas-apps/ibmid-tutorial.md), [SharingCloud](../saas-apps/sharingcloud-tutorial.md), [PoolParty Semantic Suite](../saas-apps/poolparty-semantic-suite-tutorial.md), [GlobeSmart](../saas-apps/globesmart-tutorial.md), [Samsung Knox and Business Services](../saas-apps/samsung-knox-and-business-services-tutorial.md), [Penji](../saas-apps/penji-tutorial.md), [Kendis- Scaling Agile Platform](../saas-apps/kendis-scaling-agile-platform-tutorial.md), 
[Maptician](../saas-apps/maptician-tutorial.md), [Olfeo SAAS](../saas-apps/olfeo-saas-tutorial.md), [Sigma Computing](../saas-apps/sigma-computing-tutorial.md), [CloudKnox Permissions Management Platform](../saas-apps/cloudknox-permissions-management-platform-tutorial.md), [Klaxoon SAML](../saas-apps/klaxoon-saml-tutorial.md), [Enablon](../saas-apps/enablon-tutorial.md)
+[Loop Messenger Extension](https://loopworks.com/loop-flow-messenger/), [Silverfort Azure AD Adapter](http://www.silverfort.com/), [Interplay Learning](https://skilledtrades.interplaylearning.com/#login), [Nura Space](https://dashboard.nuraspace.com/login), [Yooz EU](https://eu1.getyooz.com/?kc_idp_hint=microsoft), [UXPressia](https://uxpressia.com/users/sign-in), [introDus Pre- and Onboarding Platform](http://app.introdus.dk/login), [Happybot](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=34353e1e-dfe5-4d2f-bb09-2a5e376270c8&response_type=code&redirect_uri=https://api.happyteams.io/microsoft/integrate&response_mode=query&scope=offline_access%20User.Read%20User.Read.All), [LeaksID](https://leaksid.com/), [ShiftWizard](http://www.shiftwizard.com/), [PingFlow SSO](https://app.pingview.io/), [Swiftlane](https://admin.swiftlane.com/login), [Quasydoc SSO](https://www.quasydoc.eu/login), [Fenwick Gold Account](https://businesscentral.dynamics.com/), [SeamlessDesk](https://www.seamlessdesk.com/login), [Learnsoft LMS & TMS](http://www.learnsoft.com/), [P-TH+](https://p-th.jp/), [myViewBoard](https://api.myviewboard.com/auth/microsoft/), [Tartabit IoT Bridge](https://bridge-us.tartabit.com/), [AKASHI](../saas-apps/akashi-tutorial.md), [Rewatch](../saas-apps/rewatch-tutorial.md), [Zuddl](../saas-apps/zuddl-tutorial.md), [Parkalot - Car park management](../saas-apps/parkalot-car-park-management-tutorial.md), [HSB ThoughtSpot](../saas-apps/hsb-thoughtspot-tutorial.md), [IBMid](../saas-apps/ibmid-tutorial.md), [SharingCloud](../saas-apps/sharingcloud-tutorial.md), [PoolParty Semantic Suite](../saas-apps/poolparty-semantic-suite-tutorial.md), [GlobeSmart](../saas-apps/globesmart-tutorial.md), [Samsung Knox and Business Services](../saas-apps/samsung-knox-and-business-services-tutorial.md), [Penji](../saas-apps/penji-tutorial.md), [Kendis- Scaling Agile Platform](../saas-apps/kendis-scaling-agile-platform-tutorial.md), 
[Maptician](../saas-apps/maptician-tutorial.md), [Olfeo SAAS](../saas-apps/olfeo-saas-tutorial.md), [Sigma Computing](../saas-apps/sigma-computing-tutorial.md), [CloudKnox Permissions Management Platform](../saas-apps/cloudknox-permissions-management-platform-tutorial.md), [Klaxoon SAML](../saas-apps/klaxoon-saml-tutorial.md), [Enablon](../saas-apps/enablon-tutorial.md)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For more information about how to better secure your organization by using autom
In January 2022, we've added the following 47 new applications in our App gallery with Federation support:
-[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://auth.healthnote.works/oauth), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), 
[DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
+[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://www.healthnote.com/), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), 
[DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
active-directory Concept Azure Ad Connect Sync Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-architecture.md
When sync engine finds a staging object that matches by distinguished name but n
* If the object located in the connector space has no anchor, then sync engine removes this object from the connector space and marks the metaverse object it is linked to as **retry provisioning on next synchronization run**. Then it creates the new import object.
* If the object located in the connector space has an anchor, then sync engine assumes that this object has either been renamed or deleted in the connected directory. It assigns a temporary, new distinguished name for the connector space object so that it can stage the incoming object. The old object then becomes **transient**, waiting for the Connector to import the rename or deletion to resolve the situation.
+Transient objects are not always a problem, and you might see them even in a healthy environment. With [Azure AD Connect sync V2 endpoint API](how-to-connect-sync-endpoint-api-v2.md), transient objects should auto-resolve in subsequent delta synchronization cycles. A common example where you might find transient objects being generated occurs on Azure AD Connect servers installed in staging mode, when an admin permanently deletes an object directly in Azure AD using PowerShell and later synchronizes the object again.
+ If sync engine locates a staging object that corresponds to the object specified in the Connector, it determines what kind of changes to apply. For example, sync engine might rename or delete the object in the connected data source, or it might only update the object's attribute values. Staging objects with updated data are marked as pending import. Different types of pending imports are available. Depending on the result of the import process, a staging object in the connector space has one of the following pending import types:
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-custom.md
On the next page, you can select optional features for your scenario.
| Optional features | Description |
| --- | --- |
| Exchange hybrid deployment |The Exchange hybrid deployment feature allows for the coexistence of Exchange mailboxes both on-premises and in Microsoft 365. Azure AD Connect synchronizes a specific set of [attributes](reference-connect-sync-attributes-synchronized.md#exchange-hybrid-writeback) from Azure AD back into your on-premises directory. |
-| Exchange mail public folders | The Exchange mail public folders feature allows you to synchronize mail-enabled public-folder objects from your on-premises instance of Active Directory to Azure AD. |
+| Exchange mail public folders | The Exchange mail public folders feature allows you to synchronize mail-enabled public-folder objects from your on-premises instance of Active Directory to Azure AD. Syncing groups that contain public folders as members isn't supported and results in a synchronization error. |
| Azure AD app and attribute filtering |By enabling Azure AD app and attribute filtering, you can tailor the set of synchronized attributes. This option adds two more configuration pages to the wizard. For more information, see [Azure AD app and attribute filtering](#azure-ad-app-and-attribute-filtering). | | Password hash synchronization |If you selected federation as the sign-in solution, you can enable password hash synchronization. Then you can use it as a backup option. </br></br>If you selected pass-through authentication, you can enable this option to ensure support for legacy clients and to provide a backup.</br></br> For more information, see [Password hash synchronization](how-to-connect-password-hash-synchronization.md).| | Password writeback |Use this option to ensure that password changes that originate in Azure AD are written back to your on-premises directory. For more information, see [Getting started with password management](../authentication/tutorial-enable-sspr.md). |
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
Ensure that the following prerequisites are in place.
### In the Azure Active Directory admin center
-1. Create a cloud-only global administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
+1. Create a cloud-only global administrator account or a Hybrid Identity administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
2. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names. ### In your on-premises environment
active-directory How To Connect Pta Security Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-security-deep-dive.md
The following sections discuss these phases in detail.
### Authentication Agent installation
-Only global administrators can install an Authentication Agent (by using Azure AD Connect or standalone) on an on-premises server. Installation adds two new entries to the **Control Panel** > **Programs** > **Programs and Features** list:
+Only global administrators or Hybrid Identity administrators can install an Authentication Agent (by using Azure AD Connect or standalone) on an on-premises server. Installation adds two new entries to the **Control Panel** > **Programs** > **Programs and Features** list:
- The Authentication Agent application itself. This application runs with [NetworkService](/windows/win32/services/networkservice-account) privileges. - The Updater application that's used to auto-update the Authentication Agent. This application runs with [LocalSystem](/windows/win32/services/localsystem-account) privileges.
The Authentication Agents use the following steps to register themselves with Az
![Agent registration](./media/how-to-connect-pta-security-deep-dive/pta1.png)
-1. Azure AD first requests that a global administrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of the global administrator.
+1. Azure AD first requests that a global administrator or hybrid identity administrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of the global administrator or hybrid identity administrator.
2. The Authentication Agent then generates a key pair: a public key and a private key. - The key pair is generated through standard RSA 2048-bit encryption. - The private key stays on the on-premises server where the Authentication Agent resides.
The Authentication Agents use the following steps to register themselves with Az
- The access token acquired in step 1. - The public key generated in step 2. - A Certificate Signing Request (CSR or Certificate Request). This request applies for a digital identity certificate, with Azure AD as its certificate authority (CA).
-4. Azure AD validates the access token in the registration request and verifies that the request came from a global administrator.
+4. Azure AD validates the access token in the registration request and verifies that the request came from a global administrator or hybrid identity administrator.
5. Azure AD then signs and sends a digital identity certificate back to the Authentication Agent. - The root CA in Azure AD is used to sign the certificate.
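As a rough, stdlib-only model of the registration handshake in steps 1 through 5: everything below is simulated and the names are hypothetical — the real agent generates an RSA 2048-bit key pair and receives a certificate signed by the Azure AD root CA, which this sketch only stands in for:

```python
# Hypothetical model of the Authentication Agent registration flow.
# Real registration uses RSA 2048-bit keys and an Azure AD-signed
# certificate; this sketch simulates both with stdlib placeholders.
import hashlib
import uuid

def register_agent(access_token_role, tenant_id):
    """Simulate steps 2-5: generate a key pair, submit the registration
    request, and receive a tenant-scoped certificate if authorized."""
    # Step 2 (simulated): in reality this is an RSA 2048-bit key pair and
    # the private key never leaves the on-premises server.
    private_key = uuid.uuid4().hex
    public_key = hashlib.sha256(private_key.encode()).hexdigest()

    # Step 3: the registration request carries the access token, the
    # public key, and a CSR naming Azure AD as the certificate authority.
    request = {"token_role": access_token_role,
               "public_key": public_key,
               "csr": {"subject": tenant_id}}

    # Step 4: Azure AD validates the token and verifies that the request
    # came from a sufficiently privileged administrator.
    if request["token_role"] not in ("GlobalAdministrator",
                                     "HybridIdentityAdministrator"):
        raise PermissionError("registration requires an admin role")

    # Step 5 (simulated): the signed certificate's subject is scoped to
    # the tenant.
    return {"certificate_subject": request["csr"]["subject"]}
```

The point of the sketch is the data flow: the access token proves who asked, the public key is what Azure AD stores, and the resulting certificate is what the agent later presents.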
To renew an Authentication Agent's trust with Azure AD:
- A Certificate Signing Request (CSR or Certificate Request). This request applies for a new digital identity certificate, with Azure AD as its certificate authority. 4. Azure AD validates the existing certificate in the certificate renewal request. Then it verifies that the request came from an Authentication Agent registered on your tenant. 5. If the existing certificate is still valid, Azure AD then signs a new digital identity certificate, and issues the new certificate back to the Authentication Agent.
-6. If the existing certificate has expired, Azure AD deletes the Authentication Agent from your tenant's list of registered Authentication Agents. Then a global administrator needs to manually install and register a new Authentication Agent.
+6. If the existing certificate has expired, Azure AD deletes the Authentication Agent from your tenant's list of registered Authentication Agents. Then a global administrator or hybrid identity administrator needs to manually install and register a new Authentication Agent.
 - Use the Azure AD root CA to sign the certificate. - Set the certificate's subject (Distinguished Name or DN) to your tenant ID, a GUID that uniquely identifies your tenant. The DN scopes the certificate to your tenant only. 6. Azure AD stores the new public key of the Authentication Agent in a database in Azure SQL Database that only it has access to. It also invalidates the old public key associated with the Authentication Agent. 7. The new certificate (issued in step 5) is then stored on the server in the Windows certificate store (specifically in the [CERT_SYSTEM_STORE_CURRENT_USER](/windows/win32/seccrypto/system-store-locations#CERT_SYSTEM_STORE_CURRENT_USER) location).
- - Because the trust renewal procedure happens non-interactively (without the presence of the global administrator), the Authentication Agent no longer has access to update the existing certificate in the CERT_SYSTEM_STORE_LOCAL_MACHINE location.
+ - Because the trust renewal procedure happens non-interactively (without the presence of the global administrator or hybrid identity administrator), the Authentication Agent no longer has access to update the existing certificate in the CERT_SYSTEM_STORE_LOCAL_MACHINE location.
> [!NOTE] > This procedure does not remove the certificate itself from the CERT_SYSTEM_STORE_LOCAL_MACHINE location.
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
After completion of the wizard, Seamless SSO is enabled on your tenant.
Follow these instructions to verify that you have enabled Seamless SSO correctly:
-1. Sign in to the [Azure Active Directory administrative center](https://aad.portal.azure.com) with the global administrator credentials for your tenant.
+1. Sign in to the [Azure Active Directory administrative center](https://aad.portal.azure.com) with the global administrator or hybrid identity administrator credentials for your tenant.
2. Select **Azure Active Directory** in the left pane. 3. Select **Azure AD Connect**. 4. Verify that the **Seamless single sign-on** feature appears as **Enabled**.
active-directory Tshoot Connect Pass Through Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-pass-through-authentication.md
This article helps you find troubleshooting information about common issues regarding Azure AD Pass-through Authentication. > [!IMPORTANT]
-> If you are facing user sign-in issues with Pass-through Authentication, don't disable the feature or uninstall Pass-through Authentication Agents without having a cloud-only Global Administrator account to fall back on. Learn about [adding a cloud-only Global Administrator account](../fundamentals/add-users-azure-active-directory.md). Doing this step is critical and ensures that you don't get locked out of your tenant.
+> If you are facing user sign-in issues with Pass-through Authentication, don't disable the feature or uninstall Pass-through Authentication Agents without having a cloud-only Global Administrator account or a Hybrid Identity Administrator account to fall back on. Learn about [adding a cloud-only Global Administrator account](../fundamentals/add-users-azure-active-directory.md). Doing this step is critical and ensures that you don't get locked out of your tenant.
## General issues
Ensure that the server on which the Authentication Agent has been installed can
### Registration of the Authentication Agent failed due to token or account authorization errors
-Ensure that you use a cloud-only Global Administrator account for all Azure AD Connect or standalone Authentication Agent installation and registration operations. There is a known issue with MFA-enabled Global Administrator accounts; turn off MFA temporarily (only to complete the operations) as a workaround.
+Ensure that you use a cloud-only Global Administrator account or a Hybrid Identity Administrator account for all Azure AD Connect or standalone Authentication Agent installation and registration operations. There is a known issue with MFA-enabled Global Administrator accounts; turn off MFA temporarily (only to complete the operations) as a workaround.
### An unexpected error occurred
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-sso.md
If troubleshooting didn't help, you can manually reset the feature on your tenan
### Step 2: Get the list of Active Directory forests on which Seamless SSO has been enabled
-1. Run PowerShell as an administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. When prompted, enter your tenant's global administrator credentials.
+1. Run PowerShell as an administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. When prompted, enter your tenant's global administrator or hybrid identity administrator credentials.
2. Call `Get-AzureADSSOStatus`. This command provides you with the list of Active Directory forests (look at the "Domains" list) on which this feature has been enabled. ### Step 3: Disable Seamless SSO for each Active Directory forest where you've set up the feature
active-directory Tutorial Phs Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-phs-backup.md
Do the following:
1. Double-click the Azure AD Connect icon that was created on the desktop 2. Click **Configure**. 3. On the Additional tasks page, select **Customize synchronization options** and click **Next**.
-4. Enter the username and password for your global administrator. This account was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
+4. Enter the username and password for your global administrator or your hybrid identity administrator. This account was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
5. On the **Connect your directories** screen, click **Next**. 6. On the **Domain and OU filtering** screen, click **Next**. 7. On the **Optional features** screen, check **Password hash synchronization** and click **Next**.
Now, we will show you how to switch over to password hash synchronization. Befor
2. Click **Configure**. 3. Select **Change user sign-in** and click **Next**. ![Change](media/tutorial-phs-backup/backup2.png)</br>
-4. Enter the username and password for your global administrator. This account was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
+4. Enter the username and password for your global administrator or your hybrid identity administrator. This account was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
5. On the **User sign-in** screen, select **Password Hash Synchronization** and place a check in the **Do not convert user accounts** box. 6. Leave the default **Enable single sign-on** selected and click **Next**. 7. On the **Enable single sign-on** screen click **Next**.
Now, we will show you how to switch back to federation. To do this, do the foll
1. Double-click the Azure AD Connect icon that was created on the desktop 2. Click **Configure**. 3. Select **Change user sign-in** and click **Next**.
-4. Enter the username and password for your global administrator. This is the account that was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
+4. Enter the username and password for your global administrator or your hybrid identity administrator. This is the account that was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
5. On the **User sign-in** screen, select **Federation with AD FS** and click **Next**. 6. On the Domain Administrator credentials page, enter the contoso\Administrator username and password and click **Next.** 7. On the AD FS farm screen, click **Next**.
Now we need to reset the trust between AD FS and Azure.
3. Select **Manage Federation** and click **Next**. 4. Select **Reset Azure AD trust** and click **Next**. ![Reset](media/tutorial-phs-backup/backup6.png)</br>
-5. On the **Connect to Azure AD** screen enter the username and password for your global administrator.
+5. On the **Connect to Azure AD** screen enter the username and password for your global administrator or your hybrid identity administrator.
6. On the **Connect to AD FS** screen, enter the contoso\Administrator username and password and click **Next.** 7. On the **Certificates** screen, click **Next**.
You have now successfully set up a hybrid identity environment that you can use t
- [Hardware and prerequisites](how-to-connect-install-prerequisites.md) - [Express settings](how-to-connect-install-express.md)-- [Password hash synchronization](how-to-connect-password-hash-synchronization.md)
+- [Password hash synchronization](how-to-connect-password-hash-synchronization.md)
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Automation | [Azure Automation account authentication overview](../../automation/automation-security-overview.md#managed-identities) | | Azure Batch | [Configure customer-managed keys for your Azure Batch account with Azure Key Vault and Managed Identity](../../batch/batch-customer-managed-key.md) </BR> [Configure managed identities in Batch pools](../../batch/managed-identity-pools.md) | | Azure Blueprints | [Stages of a blueprint deployment](../../governance/blueprints/concepts/deployment-stages.md) |
+| Azure Cache for Redis | [Managed identity for storage accounts with Azure Cache for Redis](../../azure-cache-for-redis/cache-managed-identity.md) |
| Azure Container Instance | [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) | | Azure Container Registry | [Use an Azure-managed identity in ACR Tasks](../../container-registry/container-registry-tasks-authentication-managed-identity.md) | | Azure Cognitive Services | [Configure customer-managed keys with Azure Key Vault for Cognitive Services](../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md) |
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
na Previously updated : 04/09/2020 Last updated : 03/11/2022 -+
You can route Azure AD audit logs and sign-in logs to your Azure Storage account
* **Audit logs**: The [audit logs activity report](concept-audit-logs.md) gives you access to information about changes applied to your tenant, such as users and group management, or updates applied to your tenantΓÇÖs resources. * **Sign-in logs**: With the [sign-in activity report](concept-sign-ins.md), you can determine who performed the tasks that are reported in the audit logs.
-> [!NOTE]
-> B2C-related audit and sign-in activity logs are not supported at this time.
->
+ ## Prerequisites
active-directory Betterworks Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/betterworks-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with BetterWorks'
-description: Learn how to configure single sign-on between Azure Active Directory and BetterWorks.
+ Title: 'Tutorial: Azure AD SSO integration with Betterworks'
+description: Learn how to configure single sign-on between Azure Active Directory and Betterworks.
Last updated 10/07/2021
-# Tutorial: Azure AD SSO integration with BetterWorks
+# Tutorial: Azure AD SSO integration with Betterworks
-In this tutorial, you'll learn how to integrate BetterWorks with Azure Active Directory (Azure AD). When you integrate BetterWorks with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Betterworks with Azure Active Directory (Azure AD). When you integrate Betterworks with Azure AD, you can:
-* Control in Azure AD who has access to BetterWorks.
-* Enable your users to be automatically signed-in to BetterWorks with their Azure AD accounts.
+* Control in Azure AD who has access to Betterworks.
+* Enable your users to be automatically signed-in to Betterworks with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate BetterWorks with Azure Active Di
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* BetterWorks single sign-on (SSO) enabled subscription.
+* Betterworks single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* BetterWorks supports **SP and IDP** initiated SSO.
+* Betterworks supports **SP and IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Add BetterWorks from the gallery
+## Add Betterworks from the gallery
-To configure the integration of BetterWorks into Azure AD, you need to add BetterWorks from the gallery to your list of managed SaaS apps.
+To configure the integration of Betterworks into Azure AD, you need to add Betterworks from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **BetterWorks** in the search box.
-1. Select **BetterWorks** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Betterworks** in the search box.
+1. Select **Betterworks** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for BetterWorks
+## Configure and test Azure AD SSO for Betterworks
-Configure and test Azure AD SSO with BetterWorks using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in BetterWorks.
+Configure and test Azure AD SSO with Betterworks using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Betterworks.
-To configure and test Azure AD SSO with BetterWorks, perform the following steps:
+To configure and test Azure AD SSO with Betterworks, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure BetterWorks SSO](#configure-betterworks-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create BetterWorks test user](#create-betterworks-test-user)** - to have a counterpart of B.Simon in BetterWorks that is linked to the Azure AD representation of user.
+1. **[Configure Betterworks SSO](#configure-betterworks-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Betterworks test user](#create-betterworks-test-user)** - to have a counterpart of B.Simon in Betterworks that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **BetterWorks** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Betterworks** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://app.betterworks.com` > [!NOTE]
- > If you're a European Union customer of BetterWorks, please use `eu.betterworks.com` as the domain name instead of `app.betterworks.com` in these URLs.
+ > If you're a European Union customer of Betterworks, please use `eu.betterworks.com` as the domain name instead of `app.betterworks.com` in these URLs.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/metadataxml.png)
-1. On the **Set up BetterWorks** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up Betterworks** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to BetterWorks.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Betterworks.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **BetterWorks**.
+1. In the applications list, select **Betterworks**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure BetterWorks SSO
+## Configure Betterworks SSO
-To configure single sign-on on **BetterWorks** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [BetterWorks support team](mailto:support@betterworks.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Betterworks** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Betterworks support team](mailto:support@betterworks.com). They set this setting to have the SAML SSO connection set properly on both sides.
-### Create BetterWorks test user
+### Create Betterworks test user
-In this section, you create a user called Britta Simon in BetterWorks. Work with [BetterWorks support team](mailto:support@betterworks.com) to add the users in the BetterWorks platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Betterworks. Work with [Betterworks support team](mailto:support@betterworks.com) to add the users in the Betterworks platform. Users must be created and activated before you use single sign-on.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to BetterWorks Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Betterworks Sign on URL where you can initiate the login flow.
-* Go to BetterWorks Sign-on URL directly and initiate the login flow from there.
+* Go to Betterworks Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the BetterWorks for which you set up the SSO.
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Betterworks for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the BetterWorks tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the BetterWorks for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Betterworks tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Betterworks for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure BetterWorks you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Betterworks you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
advisor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Advisor description: Sample Azure Resource Graph queries for Azure Advisor showing use of resource types and tables to access Azure Advisor related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
api-management Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-bicep.md
+
+ Title: Quickstart - Create Azure API Management instance by using Bicep
+description: Learn how to create an Azure API Management instance in the Developer tier by using Bicep.
+++
+tags: azure-resource-manager, bicep
++ Last updated : 03/10/2022++
+# Quickstart: Create a new Azure API Management service instance using Bicep
+
+This quickstart describes how to use a Bicep file to create an Azure API Management (APIM) service instance. APIM helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. APIM enables you to create and manage modern API gateways for existing backend services hosted anywhere. For more information, see the [Overview](api-management-key-concepts.md).
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-api-management-create/).
++
+The following resource is defined in the Bicep file:
+
+- **[Microsoft.ApiManagement/service](/azure/templates/microsoft.apimanagement/service)**
+
+In this example, the Bicep file configures the API Management instance in the Developer tier, an economical option to evaluate Azure API Management. This tier isn't for production use.
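The quickstart links to the template rather than reproducing it. As a hedged sketch only (the resource name, API version, and parameter defaults below are illustrative, not the published template), a minimal Developer-tier instance might look like this:

```bicep
@description('The email address to receive system notifications.')
param publisherEmail string

@description('The name of the API publisher organization.')
param publisherName string

param location string = resourceGroup().location

resource apiManagementService 'Microsoft.ApiManagement/service@2021-08-01' = {
  name: 'apim-${uniqueString(resourceGroup().id)}'
  location: location
  sku: {
    name: 'Developer' // Evaluation tier; not for production use
    capacity: 1
  }
  properties: {
    publisherEmail: publisherEmail
    publisherName: publisherName
  }
}
```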
+
+More Azure API Management Bicep samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Apimanagement&pageNumber=1&sort=Popular).
+
+## Deploy the Bicep file
+
+You can use Azure CLI or Azure PowerShell to deploy the Bicep file. For more information about deploying Bicep files, see [Deploy Bicep files with the Azure CLI](../azure-resource-manager/bicep/deploy-cli.md).
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters publisherEmail=<publisher-email> publisherName=<publisher-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -publisherEmail "<publisher-email>" -publisherName "<publisher-name>"
+ ```
+
+
+
+ Replace **\<publisher-name\>** and **\<publisher-email\>** with the name of the API publisher's organization and the email address to receive notifications.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed API Management resource in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+When your API Management service instance is online, you're ready to use it. Start with the tutorial to [import and publish](import-and-publish.md) your first API.
+
+## Clean up resources
+
+If you plan to continue working with subsequent tutorials, you might want to leave the API Management instance in place. When no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Import and publish your first API](import-and-publish.md)
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-container-github-action.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
az ad sp create --id $appId ```
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/
``` 1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
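The federated credential is created by a POST to the Microsoft Graph `federatedIdentityCredentials` endpoint. As a sketch with placeholder values (the object ID, credential name, org/repo, and branch below are hypothetical — substitute your own), the call could look like:

```shell
# All values below are placeholders -- replace with your app registration's
# object ID, your GitHub org/repo, and the branch your workflow deploys from.
appObjectId="00000000-0000-0000-0000-000000000000"
subject="repo:contoso/example-repo:ref:refs/heads/main"
body="{\"name\":\"github-deploy\",\"issuer\":\"https://token.actions.githubusercontent.com\",\"subject\":\"$subject\",\"audiences\":[\"api://AzureADTokenExchange\"]}"
echo "$body"
# Uncomment to run against your tenant:
# az rest --method POST \
#   --uri "https://graph.microsoft.com/beta/applications/$appObjectId/federatedIdentityCredentials" \
#   --body "$body"
```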
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
az ad sp create --id $appId ```
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/ --assignee-principal-type ServicePrincipal
``` 1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
app-service App Service App Service Environment Control Inbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-control-inbound-traffic.md
The following list contains the ports used by an App Service Environment. All po
* 4016: Used for remote debugging with Visual Studio 2012. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE. * 4018: Used for remote debugging with Visual Studio 2013. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE. * 4020: Used for remote debugging with Visual Studio 2015. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE.
+* 4022: Used for remote debugging with Visual Studio 2017. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE.
+* 4024: Used for remote debugging with Visual Studio 2019. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE.
+* 4026: Used for remote debugging with Visual Studio 2022. This port can be safely blocked if the feature isn't being used. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE.
## Outbound Connectivity and DNS Requirements For an App Service Environment to function properly, it also requires outbound access to various endpoints. A full list of the external endpoints used by an ASE is in the "Required Network Connectivity" section of the [Network Configuration for ExpressRoute](app-service-app-service-environment-network-configuration-expressroute.md#required-network-connectivity) article.
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/network-info.md
If you put a *deny everything else* rule before the default rules, you prevent t
If you assigned an IP address to your app, make sure you keep the ports open. To see the ports, select **App Service Environment** > **IP addresses**.  
-All the items shown in the following outbound rules are needed, except for the last item. They enable network access to the App Service Environment dependencies that were noted earlier in this article. If you block any of them, your App Service Environment stops working. The last item in the list enables your App Service Environment to communicate with other resources in your virtual network.
+All the items shown in the following outbound rules are needed, except for the rule named **ASE-internal-outbound**. They enable network access to the App Service Environment dependencies that were noted earlier in this article. If you block any of them, your App Service Environment stops working. The rule named **ASE-internal-outbound** in the list enables your App Service Environment to communicate with other resources in your virtual network.
![Screenshot that shows outbound security rules.][5]
+> [!NOTE]
+> The IP range in the ASE-internal-outbound rule is only an example and should be changed to match the subnet range for the App Service Environment subnet.
+ After your NSGs are defined, assign them to the subnet. If you don't remember the virtual network or subnet, you can see it from the App Service Environment portal. To assign the NSG to your subnet, go to the subnet UI and select the NSG. ## Routes
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
In addition to configuring the Health check options, you can also configure the
Health check integrates with App Service's [authentication and authorization features](overview-authentication-authorization.md). No additional settings are required if these security features are enabled.
-If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. You can secure the Health check endpoint by requiring the `User-Agent` of the incoming request matches `HealthCheck/1.0`. The User-Agent can't be spoofed since the request would already secured by prior security features.
+If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. You can secure the Health check endpoint by requiring the `User-Agent` of the incoming request matches `HealthCheck/1.0`. The User-Agent can't be spoofed since the request would already be secured by prior security features.
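The probe behavior above can be simulated manually. As a sketch (the app URL and probe path below are hypothetical — substitute your own), you can send a request with the platform's `User-Agent` to see what the health probe receives:

```shell
# Hypothetical app URL and probe path -- replace with your own.
app_url="https://contoso-app.azurewebsites.net"
health_path="/api/health"
probe_url="$app_url$health_path"
echo "$probe_url"
# Simulate the platform probe by sending its User-Agent (uncomment to run):
# curl -s -A "HealthCheck/1.0" "$probe_url"
```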
## Monitoring
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
For more information on deployment slots, see [Set up staging environments in Az
| Setting name| Description | Example | |-|-|-| |`WEBSITE_SLOT_NAME`| Read-only. Name of the current deployment slot. The name of the production slot is `Production`. ||
-|`WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS`| By default, the versions for site extensions are specific to each slot. This prevents unanticipated application behavior due to changing extension versions after a swap. If you want the extension versions to swap as well, set to `1` on *all slots*. ||
+|`WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS`| By default, the versions for site extensions are specific to each slot. This prevents unanticipated application behavior due to changing extension versions after a swap. If you want the extension versions to swap as well, set to `0` on *all slots*. ||
|`WEBSITE_OVERRIDE_PRESERVE_DEFAULT_STICKY_SLOT_SETTINGS`| Designates certain settings as [sticky or not swappable by default](deploy-staging-slots.md#which-settings-are-swapped). Default is `true`. Set this setting to `false` or `0` for *all deployment slots* to make them swappable instead. There's no fine-grain control for specific setting types. || |`WEBSITE_SWAP_WARMUP_PING_PATH`| Path to ping to warm up the target slot in a swap, beginning with a slash. The default is `/`, which pings the root path over HTTP. | `/statuscheck` | |`WEBSITE_SWAP_WARMUP_PING_STATUSES`| Valid HTTP response codes for the warm-up operation during a swap. If the returned status code isn't in the list, the warmup and swap operations are stopped. By default, all response codes are valid. | `200,202` |
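As a sketch of applying the warm-up settings from the table (the app, resource group, and slot names below are hypothetical), `az webapp config appsettings set` with `--slot-settings` pins the values to the slot so they don't swap:

```shell
# Hypothetical app, resource group, and slot names -- replace with your own.
app="contoso-app"; group="exampleRG"; slot="staging"
settings="WEBSITE_SWAP_WARMUP_PING_PATH=/statuscheck WEBSITE_SWAP_WARMUP_PING_STATUSES=200,202"
echo "$settings"
# Apply as slot settings so they stay pinned to the slot (uncomment to run):
# az webapp config appsettings set --name "$app" --resource-group "$group" \
#   --slot "$slot" --slot-settings $settings
```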
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
This behavior can occur for one or more of the following reasons:
d. If an NSG is configured, search for that NSG resource on the **Search** tab or under **All resources**.
- e. In the **Inbound Rules** section, add an inbound rule to allow destination port range 65503-65534 for v1 SKU or 65200-65535 v2 SKU with the **Source** set as **Any** or **Internet**.
+ e. In the **Inbound Rules** section, add an inbound rule to allow destination port range 65503-65534 for the v1 SKU or 65200-65535 for the v2 SKU, with the **Source** set to the **GatewayManager** service tag.
f. Select **Save** and verify that you can view the backend as Healthy. Alternatively, you can do that through [PowerShell/CLI](../virtual-network/manage-network-security-group.md).
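As a CLI sketch of the same inbound rule (the NSG name, resource group, and rule priority below are hypothetical — substitute your own):

```shell
# Hypothetical NSG and resource group names -- replace with your own.
group="exampleRG"; nsg="appgw-nsg"
ports="65200-65535"   # v2 SKU; use 65503-65534 for a v1 SKU
echo "$ports"
# Allow the management traffic from the GatewayManager service tag (uncomment to run):
# az network nsg rule create --resource-group "$group" --nsg-name "$nsg" \
#   --name AllowGatewayManager --priority 100 --direction Inbound --access Allow \
#   --protocol Tcp --source-address-prefixes GatewayManager --destination-port-ranges "$ports"
```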
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
Previously updated : 11/02/2021 Last updated : 03/11/2022 recommendations: false
The business card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from business card images. The API analyzes printed business cards; extracts key information such as first name, last name, company name, email address, and phone number; and returns a structured JSON data representation.
-***Sample business card processed with Form Recognizer Studio***
+***Sample business card processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***
:::image type="content" source="./media/studio/overview-business-card-studio.png" alt-text="sample business card" lightbox="./media/overview-business-card.jpg":::
See how data, including name, job title, address, email, and company name, is ex
#### Sample Labeling tool
-You will need a business card document. You can use our [sample business card document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/businessCard.png).
+You'll need a business card document. You can use our [sample business card document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/businessCard.png).
1. On the Sample Labeling tool home page, select **Use prebuilt model to get data**.
You will need a business card document. You can use our [sample business card do
* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location. * For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed). * The file size must be less than 50 MB.
-* Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels.
+* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Previously updated : 11/02/2021 Last updated : 03/11/2022 recommendations: false
The ID document model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from US driver's licenses (all 50 states and District of Columbia) and international passport biographical pages (excluding visas and other travel documents). The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
-***Sample U.S. Driver's License processed with Form Recognizer Studio***
+***Sample U.S. Driver's License processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***
:::image type="content" source="media/studio/analyze-drivers-license.png" alt-text="Image of a sample driver's license." lightbox="media/overview-id.jpg":::
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Previously updated : 11/02/2021 Last updated : 03/11/2022 recommendations: false
The Form Recognizer Layout API extracts text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
-***Sample form processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/) layout feature***
+***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
**Data extraction features**
You'll need a form document. You can use our [sample form document](https://raw.
* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location. * For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed). * The file size must be less than 50 MB.
-* Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels.
+* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service. ## Supported languages and locales
- Form Recognizer preview version introduces additional language support for the layout model. *See* our [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
+ Form Recognizer preview version introduces additional language support for the layout model. *See* [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
## Features ### Tables and table headers
-Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Tables can be complex with merged cells or columns, with or without borders, and with odd angles. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding box is output along with information whether it's recognized as part of a header or not. The model predicted header cells can span multiple rows and are not necessarily the first rows in a table. They also work with rotated tables. Each table cell also includes the full text with references to the individual words in the `readResults` section.
+Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Tables can be complex with merged cells or columns, with or without borders, and with odd angles. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding box is output along with information whether it's recognized as part of a header or not. The model predicted header cells can span multiple rows and aren't necessarily the first rows in a table. They also work with rotated tables. Each table cell also includes the full text with references to the individual words in the `readResults` section.
:::image type="content" source="./media/layout-table-headers-example.png" alt-text="Layout table headers output":::
Layout API also extracts selection marks from documents. Extracted selection mar
### Text lines and words
-Layout API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted with information provided on lines, words, bounding boxes, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
+Layout API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted with information provided in lines, words, and bounding boxes. All the text information is included in the `readResults` section of the JSON output.
:::image type="content" source="./media/layout-text-extraction.png" alt-text="Layout text extraction output":::
Layout API extracts text from documents and images with multiple text angles and
In Form Recognizer v2.1, you can specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
-In Form Recognizer v3.0, the natural reading order output is used by the service in all cases. Therefore, there is no `readingOrder` parameter provided in this version.
+In Form Recognizer v3.0, the natural reading order output is used by the service in all cases. Therefore, there's no `readingOrder` parameter provided in this version.
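As a sketch of the v2.1 query parameter described above (the endpoint, key, and file name below are hypothetical — substitute your resource's values):

```shell
# Hypothetical endpoint -- replace with your Form Recognizer resource's endpoint.
endpoint="https://contoso.cognitiveservices.azure.com"
analyze_url="$endpoint/formrecognizer/v2.1/layout/analyze?readingOrder=natural"
echo "$analyze_url"
# Uncomment to submit a document for analysis (replace the key and file):
# curl -s -X POST "$analyze_url" \
#   -H "Ocp-Apim-Subscription-Key: <your-key>" \
#   -H "Content-Type: application/pdf" --data-binary @sample.pdf
```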
### Handwritten classification for text lines (Latin only)
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Previously updated : 11/02/2021 Last updated : 03/11/2022 recommendations: false
The receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns a structured JSON data representation.
-***Sample receipt processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)***:
+***Sample receipt processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
:::image type="content" source="media/studio/overview-receipt.png" alt-text="sample receipt" lightbox="media/overview-receipt.jpg":::
applied-ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities.md
To get started, you'll need:
## Managed identity assignments
-There are two types of managed identity: **system-assigned** and **user-assigned**. Currently, Form Recognizer supports system-assigned managed identity:
+There are two types of managed identity: **system-assigned** and **user-assigned**. Currently, Form Recognizer only supports system-assigned managed identity:
* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
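As a sketch of enabling that identity setting from the CLI (the resource and group names below are hypothetical — substitute your own):

```shell
# Hypothetical resource names -- replace with your own.
name="contoso-form-recognizer"; group="exampleRG"
echo "$name"
# Turn on the system-assigned managed identity for the resource (uncomment to run):
# az cognitiveservices account identity assign --name "$name" --resource-group "$group"
```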
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
sudo python onboarding.py --deregister --endpoint="<URL>" --key="<PrimaryAccessK
> [!NOTE] > - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role. </br> > - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
## Remove a Hybrid Worker group
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md
Modules that are installed must be in a location referenced by the `PSModulePath
Remove-HybridRunbookWorker -Url <URL> -Key <primaryAccessKey> -MachineName <computerName> ``` > [!NOTE]
-> After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
## Remove a Hybrid Worker group
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-app-configuration Use Key Vault References Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
To add a secret to the vault, you need to take just a few additional steps. In t
```azurecli az ad sp show --id <clientId-of-your-service-principal>
- az role assignment create --role "App Configuration Data Reader" --assignee-object-id <objectId-of-your-service-principal> --resource-group <your-resource-group>
+ az role assignment create --role "App Configuration Data Reader" --scope /subscriptions/<subscriptionId>/resourceGroups/<group-name> --assignee-principal-type ServicePrincipal --assignee-object-id <objectId-of-your-service-principal> --resource-group <your-resource-group>
``` 1. Create the environment variables **AZURE_CLIENT_ID**, **AZURE_CLIENT_SECRET**, and **AZURE_TENANT_ID**. Use the values for the service principal that were displayed in the previous steps. At the command line, run the following commands and restart the command prompt to allow the change to take effect:
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
To create the Azure Arc data controller using Kubernetes tools you will need to
### Cleanup from past installations
-If you installed the Azure Arc data controller in the past on the same cluster and deleted the Azure Arc data controller, there may be some cluster level objects that would still need to be deleted. Run the following commands to delete the Azure Arc data controller cluster level objects:
+If you installed the Azure Arc data controller in the past on the same cluster and deleted the Azure Arc data controller, there may be some cluster level objects that would still need to be deleted.
+
+For some of the tasks, you'll need to replace `{namespace}` with the name of the namespace the data controller was deployed in. If you're unsure of the name, you can list the candidates with `kubectl get mutatingwebhookconfiguration`.
+
+Run the following commands to delete the Azure Arc data controller cluster level objects:
```console # Cleanup azure arc data service artifacts
kubectl delete crd sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.co
kubectl delete crd dags.sql.arcdata.microsoft.com kubectl delete crd exporttasks.tasks.arcdata.microsoft.com kubectl delete crd monitors.arcdata.microsoft.com
+kubectl delete crd activedirectoryconnectors.arcdata.microsoft.com
+
+# Substitute the name of the namespace the data controller was deployed in into {namespace}.
# Cluster roles and role bindings kubectl delete clusterrole arcdataservices-extension
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022 #
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v
Information on how OSM issues and manages certificates to Envoy proxies running on application pods can be found on the [OSM docs site](https://docs.openservicemesh.io/docs/guides/certificates/). ### 14. Upgrade Envoy
-When a new pod is created in a namespace monitored by the add-on, OSM will inject an [envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the envoy version needs to be updated, steps to do so can be found in the [Upgrade Guide](https://docs.openservicemesh.io/docs/getting_started/upgrade/#envoy) on the OSM docs site.
+When a new pod is created in a namespace monitored by the add-on, OSM will inject an [envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the envoy version needs to be updated, steps to do so can be found in the [Upgrade Guide](https://release-v0-11.docs.openservicemesh.io/docs/getting_started/upgrade/#envoy) on the OSM docs site.
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
For usage details, see the following documents:
* [Flux Kustomize controller](https://fluxcd.io/docs/components/kustomize/)
* [Kustomize reference documents](https://kubectl.docs.kubernetes.io/references/kustomize/)
* [The kustomization file](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/)
-* [Kustomize project](https://kubernetes-sigs.github.io/kustomize/)
+* [Kustomize project](https://kubectl.docs.kubernetes.io/references/kustomize/)
* [Kustomize guides](https://kubectl.docs.kubernetes.io/guides/config_management/)

## Manage Helm chart releases by using the Flux Helm controller
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc description: Sample Azure Resource Graph queries for Azure Arc showing use of resource types and tables to access Azure Arc related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled servers description: Sample Azure Resource Graph queries for Azure Arc-enabled servers showing use of resource types and tables to access Azure Arc-enabled servers related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-cache-for-redis Cache Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-managed-identity.md
Title: Managed Identity
+ Title: Managed identity for storage accounts
description: Learn how to use managed identity with Azure Cache for Redis Previously updated : 01/21/2022 Last updated : 03/10/2022 +
-# Managed identity with Azure Cache for Redis (Preview)
+# Managed identity for storage (Preview)
[Managed identities](../active-directory/managed-identities-azure-resources/overview.md) are a common tool used in Azure to help developers minimize the burden of managing secrets and login information. Managed identities are useful when Azure services connect to each other. Instead of managing authorization between each service, [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) can be used to provide a managed identity that makes the authentication process more streamlined and secure.
-## Managed identity with storage accounts
+## Use managed identity with storage accounts
-Azure Cache for Redis can use a managed identity to connect with a storage account, useful in two scenarios:
+Presently, Azure Cache for Redis can use a managed identity to connect with a storage account, which is useful in two scenarios:
- [Data Persistence](cache-how-to-premium-persistence.md)--scheduled backups of data in your cache through an RDB or AOF file.
Set-AzRedisCache -ResourceGroupName \"MyGroup\" -Name \"MyCache\" -IdentityType
## Next steps

- [Learn more](cache-overview.md#service-tiers) about Azure Cache for Redis features
-- [What are managed identifies](../active-directory/managed-identities-azure-resources/overview.md)
+- [What are managed identities](../active-directory/managed-identities-azure-resources/overview.md)
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
Depending on your use case, Durable Functions may significantly improve scalabil
### Considerations for using concurrency
-PowerShell is a _single threaded_ scripting language by default. However, concurrency can be added by using multiple PowerShell runspaces in the same process. The amount of runspaces created will match the ```PSWorkerInProcConcurrencyUpperBound``` application setting. The throughput will be impacted by the amount of CPU and memory available in the selected plan.
+PowerShell is a _single threaded_ scripting language by default. However, concurrency can be added by using multiple PowerShell runspaces in the same process. The number of runspaces created, and therefore the number of concurrent threads per worker, is limited by the ```PSWorkerInProcConcurrencyUpperBound``` application setting. By default, the number of runspaces is set to 1,000 in version 4.x of the Functions runtime. In versions 3.x and below, the maximum number of runspaces is set to 1. The throughput will be impacted by the amount of CPU and memory available in the selected plan.
Azure PowerShell uses some _process-level_ contexts and state to help save you from excess typing. However, if you turn on concurrency in your function app and invoke actions that change state, you could end up with race conditions. These race conditions are difficult to debug because one invocation relies on a certain state and the other invocation changed the state.
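The interplay of the runspace upper bound and shared state can be illustrated with a minimal Python sketch, in which threads and a semaphore stand in for PowerShell runspaces. This is an analogy under assumed values, not the Functions runtime's implementation:

```python
import threading

# An assumed upper bound standing in for PSWorkerInProcConcurrencyUpperBound.
UPPER_BOUND = 4
slots = threading.Semaphore(UPPER_BOUND)   # one slot per "runspace"
state_lock = threading.Lock()              # guards the shared counters
active = 0
peak = 0

def invocation(n):
    global active, peak
    with slots:                # blocks until a "runspace" slot is free
        with state_lock:       # without this lock, the counters would race
            active += 1
            peak = max(peak, active)
        # ... the function body would run here ...
        with state_lock:
            active -= 1

threads = [threading.Thread(target=invocation, args=(i,)) for i in range(32)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The observed peak concurrency never exceeds the configured upper bound.
print(peak <= UPPER_BOUND)  # → True
```

The `state_lock` is what prevents the kind of race condition described above: two concurrent invocations reading and writing the same process-level state.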
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
The Azure Monitor agent is implemented as an [Azure VM extension](../../virtual-
|:|:|:|
| Publisher | Microsoft.Azure.Monitor | Microsoft.Azure.Monitor |
| Type | AzureMonitorWindowsAgent | AzureMonitorLinuxAgent |
-| TypeHandlerVersion | 1.0 | 1.5 |
+| TypeHandlerVersion | 1.2 | 1.15 |
## Extension versions

We strongly recommend updating to the generally available versions listed as follows instead of using preview or intermediate versions.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Since you're charged for any data collected in a Log Analytics workspace, you sh
To specify additional filters, you must use Custom configuration and specify an XPath that filters out the events you don't want. XPath entries are written in the form `LogName!XPathQuery`. For example, you may want to return only events from the Application event log with an event ID of 1035. The XPathQuery for these events would be `*[System[EventID=1035]]`. Since you want to retrieve the events from the Application event log, the XPath would be `Application!*[System[EventID=1035]]`.
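To make the filter concrete, here's a minimal Python sketch of what the `*[System[EventID=1035]]` predicate selects, run against a hypothetical two-event sample rather than real Windows event data:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample resembling the shape of Windows event XML.
sample = """
<Events>
  <Event><System><EventID>1035</EventID></System></Event>
  <Event><System><EventID>7036</EventID></System></Event>
</Events>
"""

root = ET.fromstring(sample)
# Keep only events whose System/EventID equals 1035, mirroring the
# predicate *[System[EventID=1035]].
matched = [e for e in root.findall("Event")
           if e.findtext("System/EventID") == "1035"]

# A full data collection rule entry pairs the log name with the query.
entry = "Application!*[System[EventID=1035]]"
print(len(matched), entry)  # → 1 Application!*[System[EventID=1035]]
```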
+### Extracting XPath queries from Windows Event Viewer
+One way to create XPath queries is to use Windows Event Viewer to extract them, as shown below.
+In step 5, when pasting over the 'Select Path' parameter value, you must append the log type category followed by '!' and then paste the copied value.
+
+[![Extract XPath](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.

> [!TIP]
-> Use this **shortcut** to create syntactically correct XPath queries: [Extract XPath queries from Windows Event Viewer](https://azurecloudai.blog/2021/08/10/shortcut-way-to-create-your-xpath-queries-for-azure-sentinel-dcrs/)
->
-> Alternatively you can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery. The following script shows an example.
+> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery locally on your machine first. The following script shows an example.
>
> ```powershell
> $XPath = '*[System[EventID=1035]]'
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.2.7.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.7/applicationinsights-agent-3.2.7.jar) file.
+Download the [applicationinsights-agent-3.2.8.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.8/applicationinsights-agent-3.2.8.jar) file.
> [!WARNING]
>
Download the [applicationinsights-agent-3.2.7.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to your application's JVM args.
+Add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to your application
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=...
```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.7.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.8.jar` with the following content:
```json
{
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Configure [App Services](../../app-service/configure-language-java.md#set-java-r
## Spring Boot
-Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` somewhere before `-jar`, for example:
```
-java -javaagent:path/to/applicationinsights-agent-3.2.7.jar -jar <myapp.jar>
+java -javaagent:path/to/applicationinsights-agent-3.2.8.jar -jar <myapp.jar>
```

## Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.7.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.8.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.7.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.8.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.7.jar -jar <myapp.jar>
+ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.8.jar -jar <myapp.jar>
```

## Tomcat 8 (Linux)
ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.7.jar -jar <mya
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file:

```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.8.jar"
```

### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content:

```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.8.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content:

```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.7.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.8.jar
```

Quotes aren't necessary, but if you want to include them, the proper placement is:

```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.8.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to the `Java Options` under the `Java` tab.
```

## JBoss EAP 7

### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java
...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.7.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.8.jar -Xms1303m -Xmx1303m ..."
...
```

### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml
...
Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `jv
<jvm-options>
   <option value="-server"/>
   <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.2.7.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.2.8.jar"/>
   <option value="-XX:MetaspaceSize=96m"/>
   <option value="-XX:MaxMetaspaceSize=256m"/>
</jvm-options>
Add these lines to `start.ini`
```
--exec
--javaagent:path/to/applicationinsights-agent-3.2.7.jar
+-javaagent:path/to/applicationinsights-agent-3.2.8.jar
```

## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.8.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml
...
<java-config ...>
  <!--Edit the JVM options here-->
  <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.2.7.jar>
+ -javaagent:path/to/applicationinsights-agent-3.2.8.jar
  </jvm-options>
  ...
</java-config>
Java and Process Management > Process definition > Java Virtual Machine
```

In "Generic JVM arguments" add the following:

```
--javaagent:path/to/applicationinsights-agent-3.2.7.jar
+-javaagent:path/to/applicationinsights-agent-3.2.8.jar
```

After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line:

```
--javaagent:path/to/applicationinsights-agent-3.2.7.jar
+-javaagent:path/to/applicationinsights-agent-3.2.8.jar
```

## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.7.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.8.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.7.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.8.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
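For example, a minimal sketch of the inline variant; the JSON payload here is an assumed placeholder showing only a role name setting, so substitute your own configuration content:

```shell
# Supply the entire json configuration inline instead of via a file.
# The payload below is an assumed example, not a required value.
export APPLICATIONINSIGHTS_CONFIGURATION_CONTENT='{"role":{"name":"my-service"}}'
```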
You can also set the connection string using the environment variable `APPLICATI
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.7.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.8.jar` is located.
```json
{
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from 3.2.7, you can capture request and response headers on your server (request) telemetry:
+Starting from 3.2.8, you can capture request and response headers on your server (request) telemetry:
```json
{
Again, the header names are case-insensitive, and the examples above will be cap
By default, http server requests that result in 4xx response codes are captured as errors.
-Starting from version 3.2.7, you can change this behavior to capture them as success if you prefer:
+Starting from version 3.2.8, you can change this behavior to capture them as success if you prefer:
```json
{
Starting from version 3.2.0, the following preview instrumentations can be enabl
```

> [!NOTE]
> Akka instrumentation is available starting from version 3.2.2
-> Vertx HTTP Library instrumentation is available starting from version 3.2.7
+> Vertx HTTP Library instrumentation is available starting from version 3.2.8
## Metric interval
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.2.7.jar` is located.
+`applicationinsights-agent-3.2.8.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over. `maxHistory` is the number of rolled over log files that are retained (in addition to the current log file).
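These rollover semantics can be illustrated with Python's standard `RotatingFileHandler`, whose `maxBytes` and `backupCount` parameters play roles analogous to `maxSizeMb` and `maxHistory`. The agent itself is Java; this is only an analogy with assumed sizes, not its implementation:

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "selfdiag.log")

# maxBytes ~ maxSizeMb: roll over once the file exceeds this size.
# backupCount ~ maxHistory: retain this many rolled-over files.
handler = logging.handlers.RotatingFileHandler(log_path, maxBytes=200, backupCount=2)
logger = logging.getLogger("selfdiag-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Write enough lines to force several rollovers.
for i in range(50):
    logger.info("diagnostic line %d", i)

files = sorted(os.listdir(log_dir))
print(files)  # the current log file plus the 2 retained roll-overs
```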
-Starting from version 3.0.2, you can also set the self-diagnostics `level` using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`
+Starting from version 3.0.2, you can also set the self-diagnostics `level` using the environment variable
+`APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`
(which will then take precedence over self-diagnostics level specified in the json configuration).
+And starting from version 3.0.3, you can also set the self-diagnostics file location using the environment variable
+`APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_FILE_PATH`
+(which will then take precedence over self-diagnostics file path specified in the json configuration).
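For example, a sketch of both environment-variable overrides together; the file path is an assumed example location:

```shell
# Environment variables take precedence over the corresponding json
# configuration values. The level is one of OFF, ERROR, WARN, INFO,
# DEBUG, or TRACE; the file path below is an assumed example.
export APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL=DEBUG
export APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_FILE_PATH=/tmp/applicationinsights.log
```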
+## An example
+
+This is just an example to show what a configuration file looks like with multiple components.
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this article, we cover some of the common issues that you might face while in
## Check the self-diagnostic log file By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory
-that holds the `applicationinsights-agent-3.2.7.jar` file.
+that holds the `applicationinsights-agent-3.2.8.jar` file.
This log file is the first place to check for hints to any issues you might be experiencing. If no log file is generated, check that your Java application has write permission to the directory that holds the
-`applicationinsights-agent-3.2.7.jar` file.
+`applicationinsights-agent-3.2.8.jar` file.
If still no log file is generated, check the stdout log from your Java application. Application Insights Java 3.x should log any errors to stdout that would prevent it from logging to its normal location.
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
You receive the error message as seen below:
*Predictive autoscale is based on the metric percentage CPU of the current resource. Choose this metric in the scale up trigger rules*. This message means you attempted to enable predictive autoscale before you enabled standard autoscale and set it up to use the *Percentage CPU* metric with the *Average* aggregation type.
You won't see data on the predictive charts under certain conditions. This isn'
When predictive autoscale is disabled, you instead receive a message beginning with "No data to show..." and giving you instructions on what to enable so you can see a predictive chart.
- :::image type="content" source="media/autoscale-predictive/message-no-data-to-show-11.png" alt-text="Screenshot of message No data to show":::
+ :::image type="content" source="media/autoscale-predictive/error-no-data-to-show.png" alt-text="Screenshot of message No data to show":::
When you first create a virtual machine scale set and enable forecast-only mode, you receive a message telling you "Predictive data is being trained..." and a time to return to see the chart.
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
+
+ Title: Troubleshoot Application Change Analysis - Azure Monitor
+description: Learn how to troubleshoot problems in Application Change Analysis.
+++
+ms.contributor: cawa
Last updated : 03/11/2022 ++++
+# Troubleshoot Application Change Analysis (preview)
+
+## Trouble registering Microsoft.ChangeAnalysis resource provider from Change history tab.
+
+If you're viewing Change history after its first integration with Application Change Analysis, you will see it automatically registering the **Microsoft.ChangeAnalysis** resource provider. The registration may fail with the following error messages:
+
+### You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider.
+You're receiving this error message because your role in the current subscription is not associated with the **Microsoft.Support/register/action** scope. For example, you are not the owner of your subscription and instead received shared access permissions through a coworker (like view access to a resource group).
+
+To resolve the issue, contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider.
+1. In the Azure portal, search for **Subscriptions**.
+1. Select your subscription.
+1. Navigate to **Resource providers** under **Settings** in the side menu.
+1. Search for **Microsoft.ChangeAnalysis** and register via the UI, Azure PowerShell, or Azure CLI.
+
+ Example for registering the resource provider through PowerShell:
+ ```PowerShell
+ # Register resource provider
+ Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
+ ```
+
+### Failed to register Microsoft.ChangeAnalysis resource provider.
+This error message likely indicates a temporary internet connectivity issue, since:
+* The UI sent the resource provider registration request.
+* You've resolved your [permissions issue](#you-dont-have-enough-permissions-to-register-microsoftchangeanalysis-resource-provider).
+
+Try refreshing the page and checking your internet connection. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+
+### This is taking longer than expected.
+You'll receive this error message when the registration takes longer than 2 minutes. While unusual, it doesn't mean something went wrong. Restart your web app to see your registration changes. Changes should show up within a few hours of app restart.
+
+If your changes still don't show after 6 hours, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+
+## Azure Lighthouse subscription is not supported.
+
+### Failed to query Microsoft.ChangeAnalysis resource provider.
+Often, this message includes: `Azure Lighthouse subscription is not supported, the changes are only available in the subscription's home tenant`.
+
+Currently, the Change Analysis resource provider is limited to registration through an Azure Lighthouse subscription for users outside of the home tenant. We are working on addressing this limitation.
+
+If this is a blocking issue for you, we can provide a workaround that involves creating a service principal and explicitly assigning the role to allow the access. Contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com) to learn more about it.
+
+## An error occurred while getting changes. Please refresh this page or come back later to view changes.
+
+When changes can't be loaded, the Application Change Analysis service presents this general error message. A few known causes are:
+
+- Internet connectivity error from the client device.
+- Change Analysis service being temporarily unavailable.
+
+Refreshing the page after a few minutes usually fixes this issue. If the error persists, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+
+## You don't have enough permissions to view some changes. Contact your Azure subscription administrator.
+
+This general unauthorized error message occurs when the current user does not have sufficient permissions to view the change. At minimum,
+* To view infrastructure changes returned by Azure Resource Graph and Azure Resource Manager, reader access is required.
+* For web app in-guest file changes and configuration changes, contributor role is required.
+
+## Cannot see in-guest changes for newly enabled Web App.
+
+You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+
+## Diagnose and solve problems tool for virtual machines
+
+To troubleshoot virtual machine issues using the troubleshooting tool in the Azure portal:
+1. Navigate to your virtual machine.
+1. Select **Diagnose and solve problems** from the side menu.
+1. Browse and select the troubleshooting tool that fits your issue.
+
+![Screenshot of the Diagnose and Solve Problems tool for a Virtual Machine with Troubleshooting tools selected.](./media/change-analysis/vm-dnsp-troubleshootingtools.png)
+
+![Screenshot of the tile for the Analyze recent changes troubleshooting tool for a Virtual Machine.](./media/change-analysis/analyze-recent-changes.png)
+++
+## Next steps
+
+Learn more about [Azure Resource Graph](../../governance/resource-graph/overview.md), which helps power Change Analysis.
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
+
+ Title: Visualizations for Application Change Analysis - Azure Monitor
+description: Learn how to use visualizations in Application Change Analysis in Azure Monitor.
+++
+ms.contributor: cawa
Last updated : 03/11/2022++++
+# Visualizations for Application Change Analysis (preview)
+
+## Standalone UI
+
+Change Analysis lives in a standalone pane under Azure Monitor, where you can view all changes and application dependency/resource insights. You can access Change Analysis through a couple of entry points:
+
+In the Azure portal, search for Change Analysis to launch the experience.
+
+
+Select one or more subscriptions to view:
+- All of its resources' changes from the past 24 hours.
+- Old and new values to provide insights at one glance.
+
+
+Click into a change to view the full Resource Manager snippet and other properties.
+
+
+Send any feedback to the [Change Analysis team](mailto:changeanalysisteam@microsoft.com) from the Change Analysis blade:
+++
+### Multiple subscription support
+
+The UI supports selecting multiple subscriptions to view resource changes. Use the subscription filter:
++
+## Diagnose and solve problems tool
+
+Application Change Analysis is:
+- A standalone detector in the Web App **Diagnose and solve problems** tool.
+- Aggregated in **Application Crashes** and **Web App Down detectors**.
+
+From your resource's overview page in the Azure portal, select **Diagnose and solve problems** from the left menu. As you enter the Diagnose and solve problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered.
+
+### Diagnose and solve problems tool for Web App
+
+> [!NOTE]
+> You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+
+1. Select **Availability and Performance**.
+
+ :::image type="content" source="./media/change-analysis/availability-and-performance.png" alt-text="Screenshot of the Availability and Performance troubleshooting options":::
+
+2. Select **Application Changes (Preview)**. The feature is also available in **Application Crashes**.
+
+ :::image type="content" source="./media/change-analysis/application-changes.png" alt-text="Screenshot of the Application Crashes button":::
+
+  The link leads to the Application Change Analysis UI, scoped to the web app.
+
+3. Enable web app in-guest change tracking if you haven't already.
+
+ :::image type="content" source="./media/change-analysis/enable-changeanalysis.png" alt-text="Screenshot of the Application Crashes options":::
+
+4. Toggle on **Change Analysis** status and select **Save**.
+
+ :::image type="content" source="./media/change-analysis/change-analysis-on.png" alt-text="Screenshot of the Enable Change Analysis user interface":::
+
+ - The tool displays all web apps under an App Service plan, which you can toggle on and off individually.
+
+ :::image type="content" source="./media/change-analysis/change-analysis-on-2.png" alt-text="Screenshot of the Enable Change Analysis user interface expanded":::
++
+You can also view change data via the **Web App Down** and **Application Crashes** detectors. The graph summarizes:
+- The change types over time.
+- Details on those changes.
+
+By default, the graph displays changes from the past 24 hours to help with immediate problems.
++
+### Diagnose and solve problems tool for Virtual Machines
+
+Change Analysis displays as an insight card in your virtual machine's **Diagnose and solve problems** tool. The insight card displays the number of changes or issues a resource has experienced within the past 72 hours.
+
+1. Within your virtual machine, select **Diagnose and solve problems** from the left menu.
+1. Go to **Troubleshooting tools**.
+1. Scroll to the end of the troubleshooting options and select **Analyze recent changes** to view changes on the virtual machine.
+
+ :::image type="content" source="./media/change-analysis/vm-dnsp-troubleshootingtools.png" alt-text="Screenshot of the VM Diagnose and Solve Problems":::
+
+ :::image type="content" source="./media/change-analysis/analyze-recent-changes.png" alt-text="Change analyzer in troubleshooting tools":::
+
+### Diagnose and solve problems tool for Azure SQL Database and other resources
+
+You can view Change Analysis data for [multiple Azure resources](./change-analysis.md#supported-resource-types), but we highlight Azure SQL Database below.
+
+1. Within your resource, select **Diagnose and solve problems** from the left menu.
+1. Under **Common problems**, select **View change details** to view the filtered view from Change Analysis standalone UI.
+
+ :::image type="content" source="./media/change-analysis/change-insight-diagnose-and-solve.png" alt-text="Screenshot of viewing common problems in Diagnose and Solve Problems tool.":::
+
+## Activity Log change history
+
+Use the [View change history](../essentials/activity-log.md#view-change-history) feature to call the Application Change Analysis service backend to view changes associated with an operation. Changes returned include:
+- Resource level changes from [Azure Resource Graph](../../governance/resource-graph/overview.md).
+- Resource properties from [Azure Resource Manager](../../azure-resource-manager/management/overview.md).
+- In-guest changes from PaaS services, such as App Services web app.
+
+1. From within your resource, select **Activity Log** from the side menu.
+1. Select a change from the list.
+1. Select the **Change history (Preview)** tab.
+1. For the Application Change Analysis service to scan for changes in your subscription, a resource provider needs to be registered. When you select the **Change history (Preview)** tab, the tool automatically registers the **Microsoft.ChangeAnalysis** resource provider.
+1. Once registered, you can immediately view changes from **Azure Resource Graph** for the past 14 days.
+   - Changes from other sources become available about four hours after the subscription is onboarded.
+
+ :::image type="content" source="./media/change-analysis/activity-log-change-history.png" alt-text="Activity Log change history integration":::
+
+## VM Insights integration
+
+If you've enabled [VM Insights](../vm/vminsights-overview.md), you can view changes in your virtual machines that may have caused any spikes in a metric chart, such as CPU or Memory.
+
+1. Within your virtual machine, select **Insights** from under **Monitoring** in the left menu.
+1. Select the **Performance** tab.
+1. Expand the property panel.
+
+ :::image type="content" source="./media/change-analysis/vm-insights.png" alt-text="Virtual machine insights performance and property panel.":::
+
+1. Select the **Changes** tab.
+1. Select the **Investigate Changes** button to view change details in the Application Change Analysis standalone UI.
+
+ :::image type="content" source="./media/change-analysis/vm-insights-2.png" alt-text="View of the property panel, selecting Investigate Changes button.":::
+
+## Drill to Change Analysis logs
+
+You can also drill to Change Analysis logs via a chart you've created or pinned to your resource's **Monitoring** dashboard.
+
+1. Navigate to the resource for which you'd like to view Change Analysis logs.
+1. On the resource's overview page, select the **Monitoring** tab.
+1. Select a chart from the **Key Metrics** dashboard.
+
+ :::image type="content" source="./media/change-analysis/view-change-analysis-1.png" alt-text="Chart from the Monitoring tab of the resource.":::
+
+1. From the chart, select **Drill into logs** and choose **Change Analysis** to view it.
+
+ :::image type="content" source="./media/change-analysis/view-change-analysis-2.png" alt-text="Drill into logs and select to view Change Analysis.":::
+
+## Next steps
+
+- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
+
+ Title: Use Application Change Analysis in Azure Monitor to find web-app issues | Microsoft Docs
+description: Use Application Change Analysis in Azure Monitor to troubleshoot application issues on live sites on Azure App Service.
+++
+ms.contributor: cawa
Last updated : 03/11/2022 ++++
+# Use Application Change Analysis in Azure Monitor (preview)
+
+While standard monitoring solutions might alert you to a live site issue, outage, or component failure, they often don't explain the cause. For example, your site worked five minutes ago, and now it's broken. What changed in the last five minutes?
+
+We've designed Application Change Analysis to answer that question in Azure Monitor.
+
+Building on the power of [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis:
+- Provides insights into your Azure application changes.
+- Increases observability.
+- Reduces mean time to repair (MTTR).
+
+> [!IMPORTANT]
+> Change Analysis is currently in preview. This version:
+>
+> - Is provided without a service-level agreement.
+> - Is not recommended for production workloads.
+> - Includes unsupported features and might have constrained capabilities.
+>
+> For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Overview
+
+Change Analysis detects various types of changes, from the infrastructure layer through application deployment. Change Analysis is a subscription-level Azure resource provider that:
+- Checks resource changes in the subscription.
+- Provides data for various diagnostic tools to help users understand what changes might have caused issues.
+
+The following diagram illustrates the architecture of Change Analysis:
+
+![Architecture diagram of how Change Analysis gets change data and provides it to client tools](./media/change-analysis/overview.png)
+
+## Supported resource types
+
+The Application Change Analysis service supports resource property-level changes in all Azure resource types, including common resources like:
+- Virtual Machine
+- Virtual machine scale set
+- App Service
+- Azure Kubernetes Service (AKS)
+- Azure Function
+- Networking resources:
+ - Network Security Group
+ - Virtual Network
+ - Application Gateway, etc.
+- Data
+ - Storage
+ - SQL
+ - Redis Cache
+ - Cosmos DB, etc.
+
+## Data sources
+
+Application Change Analysis queries for:
+- Azure Resource Manager tracked properties.
+- Proxied configurations.
+- Web app in-guest changes.
+
+Change Analysis also tracks resource dependency changes to diagnose and monitor an application end-to-end.
+
+### Azure Resource Manager tracked properties changes
+
+Using [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis provides a historical record of how the Azure resources that host your application have changed over time. The following tracked settings can be detected:
+- Managed identities
+- Platform OS upgrade
+- Hostnames
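+
+Because these tracked changes come from Azure Resource Graph, you can also query them directly. The following Kusto query is a minimal sketch (assuming the `resourcechanges` Resource Graph table is available in your tenant; column names may vary) that lists resource changes from the past day:
+
+```kusto
+// List resource changes recorded in the past 24 hours, newest first
+resourcechanges
+| extend changeTime = todatetime(properties.changeAttributes.timestamp)
+| where changeTime > ago(1d)
+| project changeTime,
+          changeType = tostring(properties.changeType),
+          targetResourceId = tostring(properties.targetResourceId)
+| order by changeTime desc
+```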
+
+### Azure Resource Manager proxied setting changes
+
+Unlike Azure Resource Graph, Change Analysis securely queries and computes IP Configuration rules, TLS settings, and extension versions to provide more change details in the app.
+
+### Changes in web app deployment and configuration (in-guest changes)
+
+Every 30 minutes, Change Analysis captures the deployment and configuration state of an application. For example, it can detect changes in the application environment variables. The tool computes the differences and presents the changes.
+
+Unlike Azure Resource Manager changes, code deployment change information might not be available immediately in the Change Analysis tool. To view the latest changes in Change Analysis, select **Refresh**.
++
+If you don't see changes within 30 minutes, refer to [our troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+
+Currently, all text-based files under site root **wwwroot** with the following extensions are supported:
+- *.json
+- *.xml
+- *.ini
+- *.yml
+- *.config
+- *.properties
+- *.html
+- *.cshtml
+- *.js
+- requirements.txt
+- Gemfile
+- Gemfile.lock
+- config.gemspec
+
+### Dependency changes
+
+Changes to resource dependencies can also cause issues in a resource. For example, if a web app calls into a Redis cache, a change to the Redis Cache SKU could affect the web app's performance.
+
+As another example, if port 22 is closed in a virtual machine's Network Security Group, connectivity errors will occur.
+
+#### Web App diagnose and solve problems navigator (Preview)
+
+To detect changes in dependencies, Change Analysis checks the web app's DNS record. In this way, it identifies changes in all app components that could cause issues.
+
+Currently, the following dependencies are supported in **Web App Diagnose and solve problems | Navigator (Preview)**:
+
+- Web Apps
+- Azure Storage
+- Azure SQL
+
+#### Related resources
+
+Change Analysis detects related resources. Common examples related to a Virtual Machine are:
+
+- Network Security Group
+- Virtual Network
+- Application Gateway
+- Load Balancer
+
+Network resources are usually provisioned in the same resource group as the resources that use them. Filter the changes by resource group to show all changes for the virtual machine and its related networking resources.
++
+## Application Change Analysis service enablement
+
+The Application Change Analysis service:
+- Computes and aggregates change data from the data sources mentioned earlier.
+- Provides a set of analytics for users to:
+ - Easily navigate through all resource changes.
+ - Identify relevant changes in the troubleshooting or monitoring context.
+
+You'll need to register the `Microsoft.ChangeAnalysis` resource provider in your Azure subscription to make the tracked properties and proxied settings change data available. The `Microsoft.ChangeAnalysis` resource provider is automatically registered as you either:
+- Enter the Web App **Diagnose and Solve Problems** tool, or
+- Bring up the Change Analysis standalone tab.
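+
+If you'd rather register the provider yourself than rely on the portal, a minimal Azure PowerShell sketch (assuming the Az module is installed and you're signed in to the target subscription) is:
+
+```PowerShell
+# Check the current registration state of the Change Analysis resource provider
+Get-AzResourceProvider -ProviderNamespace Microsoft.ChangeAnalysis |
+    Select-Object ProviderNamespace, RegistrationState
+
+# Register the provider if it isn't registered yet
+Register-AzResourceProvider -ProviderNamespace Microsoft.ChangeAnalysis
+```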
+
+For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool-for-web-app) section.
+
+If you don't see changes within 30 minutes, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
++
+## Cost
+Application Change Analysis is a free service. Once enabled, the Change Analysis **Diagnose and solve problems** tool does not:
+- Incur any billing cost to subscriptions.
+- Have any performance impact from scanning for Azure resource property changes.
+
+## Enable Change Analysis at scale for Web App in-guest file and environment variable changes
+
+If your subscription includes several web apps, enabling the service individually for each web app would be inefficient. Instead, run the following script to enable in-guest change tracking for all web apps in your subscription.
+
+### Prerequisites
+
+The Azure PowerShell Az module. To install it, follow the instructions at [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+
+### Run the following script
+
+```PowerShell
+# Log in to your Azure subscription
+Connect-AzAccount
+
+# Get subscription Id
+$SubscriptionId = Read-Host -Prompt 'Input your subscription Id'
+
+# Make Feature Flag visible to the subscription
+Set-AzContext -SubscriptionId $SubscriptionId
+
+# Register resource provider
+Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
+
+# Enable each web app
+$webapp_list = Get-AzWebApp | Where-Object {$_.kind -eq 'app'}
+foreach ($webapp in $webapp_list)
+{
+ $tags = $webapp.Tags
+ $tags["hidden-related:diagnostics/changeAnalysisScanEnabled"]=$true
+ Set-AzResource -ResourceId $webapp.Id -Tag $tags -Force
+}
+
+```
+
+## Next steps
+
+- Learn about [visualizations in Change Analysis](change-analysis-visualizations.md)
+- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
+- Enable Application Insights for [Azure App Services apps](../../azure-monitor/app/azure-web-apps.md).
+- Enable Application Insights for [Azure VM and Azure virtual machine scale set IIS-hosted apps](../../azure-monitor/app/azure-vm-vmss-apps.md).
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Monitor description: Sample Azure Resource Graph queries for Azure Monitor showing use of resource types and tables to access Azure Monitor related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 03/02/2022 Last updated : 03/11/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you will specify. In some cases, `msDS-SupportedEncryptionTypes` write permission is required to set account attributes within AD. +
+* Group Managed Service Accounts (GMSA) cannot be used with the Active Directory connection user account.
+ * If you change the password of the Active Directory user account that is used in Azure NetApp Files, be sure to update the password configured in the [Active Directory Connections](#create-an-active-directory-connection). Otherwise, you will not be able to create new volumes, and your access to existing volumes might also be affected depending on the setup. * Before you can remove an Active Directory connection from your NetApp account, you need to first remove all volumes associated with it.
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-resource-manager Bicep Functions Numeric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-numeric.md
The output from the preceding example with the default values is:
## max
-`max (arg1)`
+`max(arg1)`
Returns the maximum value from an array of integers or a comma-separated list of integers.
The output from the preceding example with the default values is:
## min
-`min (arg1)`
+`min(arg1)`
Returns the minimum value from an array of integers or a comma-separated list of integers.
azure-resource-manager Bicep Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-string.md
description: Describes the functions to use in a Bicep file to work with strings
Previously updated : 02/07/2022 Last updated : 03/10/2022 # String functions for Bicep
The output from the preceding example with the default values is:
## base64ToJson
-`base64tojson`
+`base64ToJson(base64Value)`
Converts a base64 representation to a JSON object.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
## contains
-`contains (container, itemToFind)`
+`contains(container, itemToFind)`
Checks whether an array contains a value, an object contains a key, or a string contains a substring. The string comparison is case-sensitive. However, when testing if an object contains a key, the comparison is case-insensitive.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
## last
-`last (arg1)`
+`last(arg1)`
Returns last character of the string, or the last element of the array.
An integer that represents the last position of the item to find. The value is z
### Examples
-The following example shows how to use the indexOf and lastIndexOf functions:
+The following example shows how to use the `indexOf` and `lastIndexOf` functions:
```bicep output firstT int = indexOf('test', 't')
The output from the preceding example with the default values is:
## trim
-`trim (stringToTrim)`
+`trim(stringToTrim)`
Removes all leading and trailing white-space characters from the specified string.
The output from the preceding example with the default values is:
## uniqueString
-`uniqueString (baseString, ...)`
+`uniqueString(baseString, ...)`
Creates a deterministic hash string based on the values provided as parameters.
output uniqueDeploy string = uniqueString(resourceGroup().id, deployment().name)
## uri
-`uri (baseUri, relativeUri)`
+`uri(baseUri, relativeUri)`
Creates an absolute URI by combining the baseUri and the relativeUri string.
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/key-vault-parameter.md
The following procedure shows how to create a role with the minimum permission,
az role definition create --role-definition "<path-to-role-file>" az role assignment create \ --role "Key Vault resource manager template deployment operator" \
+ --scope /subscriptions/<Subscription-id>/resourceGroups/<resource-group-name> \
--assignee <user-principal-name> \ --resource-group ExampleGroup ```
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Resource Manager description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-resource-manager Child Resource Name Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/child-resource-name-type.md
Each parent resource accepts only certain resource types as child resources. The
In an Azure Resource Manager template (ARM template), you can specify the child resource either within the parent resource or outside of the parent resource. The values you provide for the resource name and resource type vary based on whether the child resource is defined inside or outside of the parent resource. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [child resources](../bicep/child-resource-name-type.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [child resources](../bicep/child-resource-name-type.md).
## Within parent resource
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/conditional-resource-deployment.md
Sometimes you need to optionally deploy a resource in an Azure Resource Manager
> Conditional deployment doesn't cascade to [child resources](child-resource-name-type.md). If you want to conditionally deploy a resource and its child resources, you must apply the same condition to each resource type. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [conditional deployments](../bicep/conditional-resource-deployment.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [conditional deployments](../bicep/conditional-resource-deployment.md).
## Deploy condition
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-management-group.md
As your organization matures, you can deploy an Azure Resource Manager template (ARM template) to create resources at the management group level. For example, you may need to define and assign [policies](../../governance/policy/overview.md) or [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for a management group. With management group level templates, you can declaratively apply policies and assign roles at the management group level. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [management group deployments](../bicep/deploy-to-management-group.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [management group deployments](../bicep/deploy-to-management-group.md).
## Supported resources
azure-resource-manager Deploy To Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-resource-group.md
This article describes how to scope your deployment to a resource group. You use an Azure Resource Manager template (ARM template) for the deployment. The article also shows how to expand the scope beyond the resource group in the deployment operation. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [resource group deployments](../bicep/deploy-to-resource-group.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [resource group deployments](../bicep/deploy-to-resource-group.md).
## Supported resources
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-subscription.md
To simplify the management of resources, you can use an Azure Resource Manager t
To deploy templates at the subscription level, use Azure CLI, PowerShell, REST API, or the portal. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [subscription deployments](../bicep/deploy-to-subscription.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [subscription deployments](../bicep/deploy-to-subscription.md).
## Supported resources
azure-resource-manager Deploy To Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-tenant.md
As your organization matures, you may need to define and assign [policies](../../governance/policy/overview.md) or [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) across your Azure AD tenant. With tenant level templates, you can declaratively apply policies and assign roles at a global level. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [tenant deployments](../bicep/deploy-to-tenant.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [tenant deployments](../bicep/deploy-to-tenant.md).
## Supported resources
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/key-vault-parameter.md
For other users, grant the `Microsoft.KeyVault/vaults/deploy/action` permission.
az role definition create --role-definition "<path-to-role-file>" az role assignment create \ --role "Key Vault resource manager template deployment operator" \
+ --scope /subscriptions/<Subscription-id>/resourceGroups/<resource-group-name> \
--assignee <user-principal-name> \ --resource-group ExampleGroup ```
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/outputs.md
This article describes how to define output values in your Azure Resource Manage
The format of each output value must resolve to one of the [data types](data-types.md). > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [outputs](../bicep/outputs.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [outputs](../bicep/outputs.md).
## Define output values
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/parameters.md
Resource Manager resolves parameter values before starting the deployment operat
Each parameter must be set to one of the [data types](data-types.md). > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [parameters](../bicep/parameters.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [parameters](../bicep/parameters.md).
## Minimal declaration
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-declaration.md
Last updated 01/19/2022
To deploy a resource through an Azure Resource Manager template (ARM template), you add a resource declaration. Use the `resources` array in a JSON template. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [resource declaration](../bicep/resource-declaration.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [resource declaration](../bicep/resource-declaration.md).
## Set resource type and version
azure-resource-manager Template Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-array.md
Title: Template functions - arrays description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with arrays. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Array functions for ARM templates
To get an array of string values delimited by a value, see [split](template-func
Converts the value to an array.
+In Bicep, use the [array](../bicep/bicep-functions-array.md#array) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Combines multiple arrays and returns the concatenated array, or combines multiple string values and returns the concatenated string.
+In Bicep, use the [concat](../bicep/bicep-functions-array.md#concat) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Checks whether an array contains a value, an object contains a key, or a string contains a substring. The string comparison is case-sensitive. However, when testing if an object contains a key, the comparison is case-insensitive.
+In Bicep, use the [contains](../bicep/bicep-functions-array.md#contains) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## createArray
-`createArray (arg1, arg2, arg3, ...)`
+`createArray(arg1, arg2, arg3, ...)`
Creates an array from the parameters.
+In Bicep, the `createArray` function isn't supported. To construct an array, see the Bicep [array](../bicep/data-types.md#arrays) data type.
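For comparison, an illustrative JSON fragment and its Bicep counterpart:

```json
{
  "variables": {
    "stringArray": "[createArray('a', 'b', 'c')]",
    "intArray": "[createArray(1, 2, 3)]"
  }
}
```

In Bicep, use an array literal instead, such as `var stringArray = ['a', 'b', 'c']` (newline-separated in older Bicep versions).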
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Determines if an array, object, or string is empty.
+In Bicep, use the [empty](../bicep/bicep-functions-array.md#empty) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the first element of the array, or first character of the string.
+In Bicep, use the [first](../bicep/bicep-functions-array.md#first) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a single array or object with the common elements from the parameters.
+In Bicep, use the [intersection](../bicep/bicep-functions-array.md#intersection) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## last
-`last (arg1)`
+`last(arg1)`
Returns the last element of the array, or last character of the string.
+In Bicep, use the [last](../bicep/bicep-functions-array.md#last) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the number of elements in an array, characters in a string, or root-level properties in an object.
+In Bicep, use the [length](../bicep/bicep-functions-array.md#length) function.
+ ### Parameters | Parameter | Required | Type | Description |
For more information about using this function with an array, see [Resource iter
Returns the maximum value from an array of integers or a comma-separated list of integers.
+In Bicep, use the [max](../bicep/bicep-functions-array.md#max) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the minimum value from an array of integers or a comma-separated list of integers.
+In Bicep, use the [min](../bicep/bicep-functions-array.md#min) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Creates an array of integers from a starting integer and containing a number of items.
+In Bicep, use the [range](../bicep/bicep-functions-array.md#range) function.
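A minimal JSON sketch (the variable name is illustrative):

```json
{
  "variables": {
    "oneToFive": "[range(1, 5)]"
  }
}
```

Here `oneToFive` resolves to `[1, 2, 3, 4, 5]`: five elements, starting at 1.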
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns an array with all the elements after the specified number in the array, or returns a string with all the characters after the specified number in the string.
+In Bicep, use the [skip](../bicep/bicep-functions-array.md#skip) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns an array or string. An array has the specified number of elements from the start of the array. A string has the specified number of characters from the start of the string.
+In Bicep, use the [take](../bicep/bicep-functions-array.md#take) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a single array or object with all elements from the parameters. For arrays, duplicate values are included once. For objects, duplicate property names are only included once.
+In Bicep, use the [union](../bicep/bicep-functions-array.md#union) function.
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Template Functions Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-date.md
Title: Template functions - date description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with dates. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Date functions for ARM templates
Resource Manager provides the following functions for working with dates in your
Adds a time duration to a base value. ISO 8601 format is expected.
+In Bicep, use the [dateTimeAdd](../bicep/bicep-functions-date.md#datetimeadd) function.
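A minimal sketch showing ISO 8601 durations added to a base time (parameter and variable names are illustrative):

```json
{
  "parameters": {
    "baseTime": {
      "type": "string",
      "defaultValue": "[utcNow('u')]"
    }
  },
  "variables": {
    "add3Years": "[dateTimeAdd(parameters('baseTime'), 'P3Y')]",
    "subtract9Hours": "[dateTimeAdd(parameters('baseTime'), '-PT9H')]"
  }
}
```

A leading `-` on the duration subtracts it from the base value.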
+ ### Parameters | Parameter | Required | Type | Description |
The next example template shows how to set the start time for an Automation sche
Returns the current (UTC) datetime value in the specified format. If no format is provided, the ISO 8601 (`yyyyMMddTHHmmssZ`) format is used. **This function can only be used in the default value for a parameter.**
+In Bicep, use the [utcNow](../bicep/bicep-functions-date.md#utcnow) function.
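A minimal sketch; note the function appears only in a parameter's default value, per the restriction above (the parameter name is illustrative):

```json
{
  "parameters": {
    "deployTimestamp": {
      "type": "string",
      "defaultValue": "[utcNow('yyyyMMddHHmm')]"
    }
  }
}
```

Using `utcNow` anywhere else in the template produces an error.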
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Template Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-deployment.md
Title: Template functions - deployment description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve deployment information. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Deployment functions for ARM templates
To get values from resources, resource groups, or subscriptions, see [Resource f
Returns information about the current deployment operation.
+In Bicep, use the [deployment](../bicep/bicep-functions-deployment.md#deployment) function.
+ ### Return value This function returns the object that is passed during deployment. The properties in the returned object differ based on whether you are:
For a subscription deployment, the following example returns a deployment object
Returns information about the Azure environment used for deployment.
+In Bicep, use the [environment](../bicep/bicep-functions-deployment.md#environment) function.
+ ### Return value This function returns properties for the current Azure environment. The following example shows the properties for global Azure. Sovereign clouds may return slightly different properties.
The preceding example returns the following object when deployed to global Azure
Returns a parameter value. The specified parameter name must be defined in the parameters section of the template.
-In Bicep, directly reference parameters by using their symbolic names.
+In Bicep, directly reference [parameters](../bicep/parameters.md) by using their symbolic names.
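A minimal JSON sketch (the parameter and variable names are illustrative):

```json
{
  "parameters": {
    "storagePrefix": {
      "type": "string",
      "maxLength": 11
    }
  },
  "variables": {
    "storageName": "[concat(parameters('storagePrefix'), 'data')]"
  }
}
```

The Bicep equivalent drops the function entirely: `var storageName = '${storagePrefix}data'`.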
### Parameters
For more information about using parameters, see [Parameters in ARM templates](.
Returns the value of variable. The specified variable name must be defined in the variables section of the template.
-In Bicep, directly reference variables by using their symbolic names.
+In Bicep, directly reference [variables](../bicep/variables.md) by using their symbolic names.
### Parameters
azure-resource-manager Template Functions Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-logical.md
Resource Manager provides several functions for making comparisons in your Azure
Checks whether all parameter values are true.
-The `and` function isn't supported in Bicep, use the [&& operator](../bicep/operators-logical.md#and-) instead.
+The `and` function isn't supported in Bicep. Use the [&& operator](../bicep/operators-logical.md#and-) instead.
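An illustrative JSON fragment combining two boolean expressions:

```json
{
  "outputs": {
    "andResult": {
      "type": "bool",
      "value": "[and(true(), less(1, 2))]"
    }
  }
}
```

In Bicep, the same check is written with the operator: `true && (1 < 2)`.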
### Parameters
The output from the preceding example is:
Converts the parameter to a boolean.
+In Bicep, use the [bool](../bicep/bicep-functions-logical.md#bool) logical function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns false.
-The `false` function isn't available in Bicep. Use the `false` keyword instead.
+The `false` function isn't available in Bicep. Use the `false` keyword instead.
### Parameters
The following [example template](https://github.com/krnese/AzureDeploy/blob/mast
Converts boolean value to its opposite value.
-The `not` function isn't supported in Bicep, use the [! operator](../bicep/operators-logical.md#not-) instead.
+The `not` function isn't supported in Bicep. Use the [! operator](../bicep/operators-logical.md#not-) instead.
### Parameters
The output from the preceding example is:
Checks whether any parameter value is true.
-The `or` function isn't supported in Bicep, use the [|| operator](../bicep/operators-logical.md#or-) instead.
+The `or` function isn't supported in Bicep. Use the [|| operator](../bicep/operators-logical.md#or-) instead.
### Parameters
The output from the preceding example is:
Returns true.
-The `true` function isn't available in Bicep. Use the `true` keyword instead.
+The `true` function isn't available in Bicep. Use the `true` keyword instead.
### Parameters
azure-resource-manager Template Functions Numeric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-numeric.md
Title: Template functions - numeric description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with numbers. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Numeric functions for ARM templates
Resource Manager provides the following functions for working with integers in y
Returns the sum of the two provided integers.
-The `add` function in not supported in Bicep. Use the [`+` operator](../bicep/operators-numeric.md#add-) instead.
+The `add` function isn't supported in Bicep. Use the [`+` operator](../bicep/operators-numeric.md#add-) instead.
### Parameters
The output from the preceding example with the default values is:
Returns the index of an iteration loop.
+In Bicep, use [iterative loops](../bicep/loops.md).
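As a sketch, a copy loop that uses `copyIndex()` to generate distinct resource names (names and API version are illustrative; real storage account names must be globally unique):

```json
{
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-09-01",
      "name": "[format('store{0}', copyIndex())]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2",
      "copy": {
        "name": "storagecopy",
        "count": 3
      }
    }
  ]
}
```

Because the index is zero-based, the names resolve to `store0`, `store1`, and `store2`. The Bicep counterpart is a `for` expression such as `[for i in range(0, 3): ...]`.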
+ ### Parameters | Parameter | Required | Type | Description |
An integer representing the current index of the iteration.
Returns the integer division of the two provided integers.
-The `div` function in not supported in Bicep. Use the [`/` operator](../bicep/operators-numeric.md#divide-) instead.
+The `div` function isn't supported in Bicep. Use the [`/` operator](../bicep/operators-numeric.md#divide-) instead.
### Parameters
The following example shows how to use float to pass parameters to a Logic App:
Converts the specified value to an integer.
+In Bicep, use the [int](../bicep/bicep-functions-numeric.md#int) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## max
-`max (arg1)`
+`max(arg1)`
Returns the maximum value from an array of integers or a comma-separated list of integers.
+In Bicep, use the [max](../bicep/bicep-functions-numeric.md#max) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## min
-`min (arg1)`
+`min(arg1)`
Returns the minimum value from an array of integers or a comma-separated list of integers.
+In Bicep, use the [min](../bicep/bicep-functions-numeric.md#min) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the subtraction of the two provided integers.
+The `sub` function isn't supported in Bicep. Use the [- operator](../bicep/operators-numeric.md#subtract--) instead.
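A minimal JSON sketch:

```json
{
  "outputs": {
    "difference": {
      "type": "int",
      "value": "[sub(10, 3)]"
    }
  }
}
```

The output resolves to `7`; in Bicep, write `10 - 3`.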
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-object.md
Title: Template functions - objects description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with objects. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Object functions for ARM templates
Resource Manager provides several functions for working with objects in your Azu
Checks whether an array contains a value, an object contains a key, or a string contains a substring. The string comparison is case-sensitive. However, when testing if an object contains a key, the comparison is case-insensitive.
+In Bicep, use the [contains](../bicep/bicep-functions-object.md#contains) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Creates an object from the keys and values.
-The `createObject` function isn't supported by Bicep. Construct an object by using `{}`. See [Objects](../bicep/data-types.md#objects).
+The `createObject` function isn't supported by Bicep. Construct an object by using `{}`. See [Objects](../bicep/data-types.md#objects).
### Parameters
The output from the preceding example with the default values is an object named
Determines if an array, object, or string is empty.
+In Bicep, use the [empty](../bicep/bicep-functions-object.md#empty) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a single array or object with the common elements from the parameters.
+In Bicep, use the [intersection](../bicep/bicep-functions-object.md#intersection) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts a valid JSON string into a JSON data type.
+In Bicep, use the [json](../bicep/bicep-functions-object.md#json) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the number of elements in an array, characters in a string, or root-level properties in an object.
+In Bicep, use the [length](../bicep/bicep-functions-object.md#length) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example is:
Returns a single array or object with all elements from the parameters. For arrays, duplicate values are included once. For objects, duplicate property names are only included once.
+In Bicep, use the [union](../bicep/bicep-functions-object.md#union) function.
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 02/11/2022 Last updated : 03/10/2022
To get deployment scope values, see [Scope functions](template-functions-scope.m
Returns the resource ID for an [extension resource](../management/extension-resource-types.md). An extension resource is a resource type that's applied to another resource to add to its capabilities.
+In Bicep, use the [extensionResourceId](../bicep/bicep-functions-resource.md#extensionresourceid) function.
+ ### Parameters | Parameter | Required | Type | Description |
Built-in policy definitions are tenant level resources. For an example of deploy
The syntax for this function varies by the name of the list operation. Each implementation returns values for the resource type that supports a list operation. The operation name must start with `list` and may have a suffix. Some common usages are `list`, `listKeys`, `listKeyValue`, and `listSecrets`.
+In Bicep, use the [list*](../bicep/bicep-functions-resource.md#list) function.
+ ### Parameters | Parameter | Required | Type | Description |
The next example shows a `list` function that takes a parameter. In this case, t
Determines whether a resource type supports zones for the specified location or region. This function **only supports zonal resources**. Zone redundant services return an empty array. For more information, see [Azure Services that support Availability Zones](../../availability-zones/az-region.md).
+In Bicep, use the [pickZones](../bicep/bicep-functions-resource.md#pickzones) function.
+ ### Parameters | Parameter | Required | Type | Description |
The following example shows how to use the `pickZones` function to enable zone r
**The providers function has been deprecated.** We no longer recommend using it. If you used this function to get an API version for the resource provider, we recommend that you provide a specific API version in your template. Using a dynamically returned API version can break your template if the properties change between versions.
+In Bicep, the [providers](../bicep/bicep-functions-resource.md#providers) function is deprecated.
+ ## reference `reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])` Returns an object representing a resource's runtime state.
+In Bicep, use the [reference](../bicep/bicep-functions-resource.md#reference) function.
+ ### Parameters | Parameter | Required | Type | Description |
The following example template references a storage account that isn't deployed
See the [resourceGroup scope function](template-functions-scope.md#resourcegroup).
+In Bicep, use the [resourcegroup](../bicep/bicep-functions-scope.md#resourcegroup) scope function.
+ ## resourceId `resourceId([subscriptionId], [resourceGroupName], resourceType, resourceName1, [resourceName2], ...)` Returns the unique identifier of a resource. You use this function when the resource name is ambiguous or not provisioned within the same template. The format of the returned identifier varies based on whether the deployment happens at the scope of a resource group, subscription, management group, or tenant.
+In Bicep, use the [resourceId](../bicep/bicep-functions-resource.md#resourceid) function.
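A minimal sketch returning the ID of a storage account (the parameter name is illustrative):

```json
{
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "outputs": {
    "storageId": {
      "type": "string",
      "value": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
    }
  }
}
```

At resource-group scope, the returned value has the form `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{name}`.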
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
See the [subscription scope function](template-functions-scope.md#subscription).
+In Bicep, use the [subscription](../bicep/bicep-functions-scope.md#subscription) scope function.
+ ## subscriptionResourceId `subscriptionResourceId([subscriptionId], resourceType, resourceName1, [resourceName2], ...)` Returns the unique identifier for a resource deployed at the subscription level.
+In Bicep, use the [subscriptionResourceId](../bicep/bicep-functions-resource.md#subscriptionresourceid) function.
+ ### Parameters | Parameter | Required | Type | Description |
The following template assigns a built-in role. You can deploy it to either a re
Returns the unique identifier for a resource deployed at the tenant level.
+In Bicep, use the [tenantResourceId](../bicep/bicep-functions-resource.md#tenantresourceid) function.
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Template Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-scope.md
Title: Template functions - scope description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about deployment scope. Previously updated : 02/11/2022 Last updated : 03/10/2022 # Scope functions for ARM templates
To get values from parameters, variables, or the current deployment, see [Deploy
Returns an object with properties from the management group in the current deployment.
+In Bicep, use the [managementGroup](../bicep/bicep-functions-scope.md#managementgroup) scope function.
+ ### Remarks `managementGroup()` can only be used in [management group deployments](deploy-to-management-group.md). It returns the current management group for the deployment operation. Use it to get properties for the current management group.
The next example creates a new management group and uses this function to set th
Returns an object that represents the current resource group.
+In Bicep, use the [resourceGroup](../bicep/bicep-functions-scope.md#resourcegroup) scope function.
+ ### Return value The returned object is in the following format:
The preceding example returns an object in the following format:
Returns details about the subscription for the current deployment.
+In Bicep, use the [subscription](../bicep/bicep-functions-scope.md#subscription) scope function.
+ ### Return value The function returns the following format:
The following example shows the subscription function called in the outputs sect
Returns properties about the tenant for the current deployment.
+In Bicep, use the [tenant](../bicep/bicep-functions-scope.md#tenant) scope function.
+ ### Remarks `tenant()` can be used with any deployment scope. It always returns the current tenant. Use this function to get properties for the current tenant.
azure-resource-manager Template Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-string.md
Title: Template functions - string description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with strings. Previously updated : 02/11/2022 Last updated : 03/10/2022 # String functions for ARM templates
Resource Manager provides the following functions for working with strings in yo
Returns the base64 representation of the input string.
+In Bicep, use the [base64](../bicep/bicep-functions-string.md#base64) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## base64ToJson
-`base64tojson`
+`base64ToJson(base64Value)`
Converts a base64 representation to a JSON object.
+In Bicep, use the [base64ToJson](../bicep/bicep-functions-string.md#base64tojson) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts a base64 representation to a string.
+In Bicep, use the [base64ToString](../bicep/bicep-functions-string.md#base64tostring) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## concat
-`concat (arg1, arg2, arg3, ...)`
+`concat(arg1, arg2, arg3, ...)`
Combines multiple string values and returns the concatenated string, or combines multiple arrays and returns the concatenated array.
+In Bicep, use [string interpolation](../bicep/bicep-functions-string.md#concat) instead of the `concat` function.
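A minimal JSON sketch (the parameter and variable names are illustrative):

```json
{
  "parameters": {
    "userName": { "type": "string" }
  },
  "variables": {
    "greeting": "[concat('Hello, ', parameters('userName'), '!')]"
  }
}
```

The Bicep interpolation form is `var greeting = 'Hello, ${userName}!'`.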
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## contains
-`contains (container, itemToFind)`
+`contains(container, itemToFind)`
Checks whether an array contains a value, an object contains a key, or a string contains a substring. The string comparison is case-sensitive. However, when testing if an object contains a key, the comparison is case-insensitive.
+In Bicep, use the [contains](../bicep/bicep-functions-string.md#contains) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts a value to a data URI.
+In Bicep, use the [dataUri](../bicep/bicep-functions-string.md#datauri) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts a data URI formatted value to a string.
+In Bicep, use the [dataUriToString](../bicep/bicep-functions-string.md#datauritostring) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Determines if an array, object, or string is empty.
+In Bicep, use the [empty](../bicep/bicep-functions-string.md#empty) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Determines whether a string ends with a value. The comparison is case-insensitive.
+In Bicep, use the [endsWith](../bicep/bicep-functions-string.md#endswith) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the first character of the string, or first element of the array.
+In Bicep, use the [first](../bicep/bicep-functions-string.md#first) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Creates a formatted string from input values.
+In Bicep, use the [format](../bicep/bicep-functions-string.md#format) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Creates a value in the format of a globally unique identifier based on the values provided as parameters.
+In Bicep, use the [guid](../bicep/bicep-functions-string.md#guid) function.
+ ### Parameters | Parameter | Required | Type | Description |
The following example returns results from `guid`:
Returns the first position of a value within a string. The comparison is case-insensitive.
+In Bicep, use the [indexOf](../bicep/bicep-functions-string.md#indexof) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts a valid JSON string into a JSON data type. For more information, see [json function](template-functions-object.md#json).
+In Bicep, use the [json](../bicep/bicep-functions-string.md#json) function.
+ ## last
-`last (arg1)`
+`last(arg1)`
Returns the last character of the string, or the last element of the array.
+In Bicep, use the [last](../bicep/bicep-functions-string.md#last) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the last position of a value within a string. The comparison is case-insensitive.
+In Bicep, use the [lastIndexOf](../bicep/bicep-functions-string.md#lastindexof) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns the number of characters in a string, elements in an array, or root-level properties in an object.
+In Bicep, use the [length](../bicep/bicep-functions-string.md#length) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a value in the format of a globally unique identifier. **This function can only be used in the default value for a parameter.**
+In Bicep, use the [newGuid](../bicep/bicep-functions-string.md#newguid) function.
+ ### Remarks You can only use this function within an expression for the default value of a parameter. Using this function anywhere else in a template returns an error. The function isn't allowed in other parts of the template because it returns a different value each time it's called. Deploying the same template with the same parameters wouldn't reliably produce the same results.
The output from the preceding example varies for each deployment but will be sim
Returns a right-aligned string by adding characters to the left until reaching the total specified length.
+In Bicep, use the [padLeft](../bicep/bicep-functions-string.md#padleft) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a new string with all instances of one string replaced by another string.
+In Bicep, use the [replace](../bicep/bicep-functions-string.md#replace) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a string with all the characters after the specified number of characters, or an array with all the elements after the specified number of elements.
+In Bicep, use the [skip](../bicep/bicep-functions-string.md#skip) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns an array of strings that contains the substrings of the input string that are delimited by the specified delimiters.
+In Bicep, use the [split](../bicep/bicep-functions-string.md#split) function.
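A minimal JSON sketch:

```json
{
  "variables": {
    "parts": "[split('one,two,three', ',')]"
  }
}
```

Here `parts` resolves to `["one", "two", "three"]`.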
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Determines whether a string starts with a value. The comparison is case-insensitive.
+In Bicep, use the [startsWith](../bicep/bicep-functions-string.md#startswith) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts the specified value to a string.
+In Bicep, use the [string](../bicep/bicep-functions-string.md#string) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a substring that starts at the specified character position and contains the specified number of characters.
+In Bicep, use the [substring](../bicep/bicep-functions-string.md#substring) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns an array or string. An array has the specified number of elements from the start of the array. A string has the specified number of characters from the start of the string.
+In Bicep, use the [take](../bicep/bicep-functions-string.md#take) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts the specified string to lower case.
+In Bicep, use the [toLower](../bicep/bicep-functions-string.md#tolower) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Converts the specified string to upper case.
+In Bicep, use the [toUpper](../bicep/bicep-functions-string.md#toupper) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## trim
-`trim (stringToTrim)`
+`trim(stringToTrim)`
Removes all leading and trailing white-space characters from the specified string.
+In Bicep, use the [trim](../bicep/bicep-functions-string.md#trim) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
## uniqueString
-`uniqueString (baseString, ...)`
+`uniqueString(baseString, ...)`
Creates a deterministic hash string based on the values provided as parameters.
+In Bicep, use the [uniqueString](../bicep/bicep-functions-string.md#uniquestring) function.
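A common pattern, sketched in JSON, is seeding the hash with the resource group ID so the generated name is stable across redeployments to the same group (the variable name is illustrative):

```json
{
  "variables": {
    "storageName": "[format('store{0}', uniqueString(resourceGroup().id))]"
  }
}
```

The hash is deterministic and 13 characters long: the same input values always produce the same result, so redeploying doesn't rename the resource.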
+ ### Parameters | Parameter | Required | Type | Description |
The following example returns results from `uniqueString`:
## uri
-`uri (baseUri, relativeUri)`
+`uri(baseUri, relativeUri)`
Creates an absolute URI by combining the baseUri and the relativeUri string.
+In Bicep, use the [uri](../bicep/bicep-functions-string.md#uri) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Encodes a URI.
+In Bicep, use the [uriComponent](../bicep/bicep-functions-string.md#uricomponent) function.
+ ### Parameters | Parameter | Required | Type | Description |
The output from the preceding example with the default values is:
Returns a string of a URI encoded value.
+In Bicep, use the [uriComponentToString](../bicep/bicep-functions-string.md#uricomponenttostring) function.
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/variables.md
This article describes how to define and use variables in your Azure Resource Ma
Resource Manager resolves variables before starting the deployment operations. Wherever the variable is used in the template, Resource Manager replaces it with the resolved value. > [!TIP]
-> For an improved authoring experience, you can use Bicep rather than JSON to develop templates. For more information about Bicep syntax, see [variables](../bicep/variables.md).
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [variables](../bicep/variables.md).
## Define variable
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-signalr Server Graceful Shutdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/server-graceful-shutdown.md
# Server graceful shutdown
-Microsoft Azure SignalR Service provides two modes for gracefully shutdown a server.
+Microsoft Azure SignalR Service provides two modes for gracefully shutting down a SignalR hub server when the service is configured in **Default mode**, in which Azure SignalR Service acts as a proxy between SignalR clients and the SignalR hub server.
The key advantage of this feature is that it prevents your customers from experiencing unexpected connection drops.
azure-sql Application Authentication Get Client Id Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/application-authentication-get-client-id-keys.md
$svcprincipal = az ad sp create --id $azureAdApplication.ApplicationId
Start-Sleep -s 15 # to avoid a PrincipalNotFound error, pause for 15 seconds # if you still get a PrincipalNotFound error, then rerun the following until successful.
-$roleassignment = az role assignment create --role "Contributor" --assignee $azureAdApplication.ApplicationId.Guid
+$roleassignment = az role assignment create --role "Contributor" --scope /subscriptions/{Subscription-id}/resourceGroups/{resource-group-name} --assignee $azureAdApplication.ApplicationId.Guid
# output the values we need for our C# application to successfully authenticate Write-Output "Copy these values into the C# sample app"
azure-sql Automation Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automation-manage.md
Azure Automation also has the ability to communicate with SQL servers directly,
The runbook and module galleries for [Azure Automation](../../automation/automation-runbook-gallery.md) offer a variety of runbooks from Microsoft and the community. To use one, you can download a runbook from the gallery, or import runbooks directly from the gallery or from your Automation account in the Azure portal.
+>[!NOTE]
+> The Automation runbook may run from a range of IP addresses at any datacenter in an Azure region. To learn more, see [Automation region DNS records](/azure/automation/how-to/automation-region-dns-records).
+ ## Next steps Now that you've learned the basics of Azure Automation and how it can be used to manage Azure SQL Database, follow these links to learn more about Azure Automation.
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
azure-sql Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure SQL Database description: Sample Azure Resource Graph queries for Azure SQL Database showing use of resource types and tables to access Azure SQL Database related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-hyperscale.md
The vCore-based service tiers are differentiated based on database availability
|| **General Purpose** | **Hyperscale** | **Business Critical** | |::|::|::|::| | **Best for** | Offers budget-oriented balanced compute and storage options.|Most business workloads. Autoscaling storage size up to 100 TB, fast vertical and horizontal compute scaling, fast database restore.| OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas.|
-| **Resource type** | SQL Database / SQL Managed Instance | Single database | SQL Database / SQL Managed Instance |
| **Compute size** | 1 to 80 vCores | 1 to 80 vCores<sup>1</sup> | 1 to 80 vCores | | **Storage type** | Premium remote storage (per instance) | De-coupled storage with local SSD cache (per instance) | Super-fast local SSD storage (per instance)| | **Storage size**<sup>1</sup> | 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB |
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 03/07/2022 Last updated : 03/10/2022 # What's new in Azure SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqlmi.md)]
The following table lists the features of Azure SQL Managed Instance that are cu
| [Transactional Replication](replication-transactional-overview.md) | Replicate the changes from your tables into other databases in SQL Managed Instance, SQL Database, or SQL Server. Or update your tables when some rows are changed in other instances of SQL Managed Instance or SQL Server. For information, see [Configure replication in Azure SQL Managed Instance](replication-between-two-instances-configure-tutorial.md). | | [Threat detection](threat-detection-configure.md) | Threat detection notifies you of security threats detected to your database. | | [Windows Auth for Azure Active Directory principals](winauth-azuread-overview.md) | Kerberos authentication for Azure Active Directory (Azure AD) enables Windows Authentication access to Azure SQL Managed Instance. |
-|||
## General availability (GA)
The following table lists the features of Azure SQL Managed Instance that have t
|[Audit management operations](../database/auditing-overview.md#auditing-of-microsoft-support-operations) | March 2021 | Azure SQL audit capabilities enable you to audit operations done by Microsoft support engineers when they need to access your SQL assets during a support request, enabling more transparency in your workforce. | |[Granular permissions for dynamic data masking](../database/dynamic-data-masking-overview.md)| March 2021 | Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal with minimal impact on the application layer. It's a policy-based security feature that hides the sensitive data in the result set of a query over designated database fields, while the data in the database is not changed. It's now possible to assign granular permissions for data that's been dynamically masked. To learn more, see [Dynamic data masking](../database/dynamic-data-masking-overview.md#permissions). | |[Machine Learning Service](machine-learning-services-overview.md) | March 2021 | Machine Learning Services is a feature of Azure SQL Managed Instance that provides in-database machine learning, supporting both Python and R scripts. The feature includes Microsoft Python and R packages for high-performance predictive analytics and machine learning. |
-|||
+ ## Documentation changes
Learn about significant changes to the Azure SQL Managed Instance documentation.
| Changes | Details | | | |
-| **GA for maintenance window, preview for advance notifications** | The [maintenance window](../database/maintenance-window.md) feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance and receive advance notifications of maintenance windows. [Maintenance window advance notifications](../database/advance-notifications.md) (preview) are available for databases configured to use a non-default [maintenance window](../database/maintenance-window.md). |
-|**Windows Auth for Azure Active Directory principals preview** | Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience, and provides the basis for infrastructure modernization. Learn more in [Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance](winauth-azuread-overview.md). |
-| **Data virtualization preview** | It's now possible to query data in external sources such as Azure Data Lake Storage Gen2 or Azure Blob Storage, joining it with locally stored relational data. This feature is currently in preview. To learn more, see [Data virtualization](data-virtualization-overview.md). |
-|||
+| **Data virtualization preview** | It's now possible to query data in external sources such as Azure Data Lake Storage Gen2 or Azure Blob Storage, joining it with locally stored relational data. This feature is currently in preview. To learn more, see [Data virtualization](data-virtualization-overview.md). |
+| **Link feature guidance** | We've published a number of guides for using the [link feature](link-feature.md) with SQL Managed Instance, including how to [prepare your environment](managed-instance-link-preparation.md), [configure replication](managed-instance-link-use-ssms-to-replicate-database.md), [failover your database](managed-instance-link-use-ssms-to-failover-database.md), and some [best practices](link-feature-best-practices.md) when using the link feature. |
+| **Maintenance window GA, advance notifications preview** | The [maintenance window](../database/maintenance-window.md) feature is now generally available, allowing you to configure a maintenance schedule for your Azure SQL Managed Instance. It's also possible to receive advance notifications for planned maintenance events, which is currently in preview. Review [Maintenance window advance notifications (preview)](../database/advance-notifications.md) to learn more. |
+| **Windows Auth for Azure Active Directory principals preview** | Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience, and provides the basis for infrastructure modernization. Learn more in [Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance](winauth-azuread-overview.md). |
++ ### 2021
Learn about significant changes to the Azure SQL Managed Instance documentation.
| **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance. To learn more, see [maintenance window](../database/maintenance-window.md).| | **Service Broker message exchange** | The Service Broker component of Azure SQL Managed Instance allows you to compose your applications from independent, self-contained services, by providing native support for reliable and secure message exchange between the databases attached to the service. Currently in preview. To learn more, see [Service Broker](/sql/database-engine/configure-windows/sql-server-service-broker). | **SQL insights** | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [SQL insights](../../azure-monitor/insights/sql-insights-overview.md). |
-|||
+ ### 2020
The following changes were added to SQL Managed Instance and the documentation i
| **Enhanced management experience** | Using the new [OPERATIONS API](/rest/api/sql/2021-02-01-preview/managed-instance-operations), it's now possible to check the progress of long-running instance operations. To learn more, see [Management operations](management-operations-overview.md?tabs=azure-portal). | **Machine learning support** | Machine Learning Services with support for R and Python languages now include preview support on Azure SQL Managed Instance (Preview). To learn more, see [Machine learning with SQL Managed Instance](machine-learning-services-overview.md). | | **User-initiated failover** | User-initiated failover is now generally available, providing you with the capability to manually initiate an automatic failover using PowerShell, CLI commands, and API calls, improving application resiliency. To learn more, see, [testing resiliency](../database/high-availability-sla.md#testing-application-fault-resiliency).
-| | |
+
azure-sql Link Feature Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature-best-practices.md
Previously updated : 03/10/2022 Last updated : 03/11/2022 # Best practices with link feature for Azure SQL Managed Instance (preview) [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
This article outlines best practices when using the link feature for Azure SQL M
## Take log backups regularly
-The link feature replicates data using the [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups) concept based the Always On availability groups technology stack. Data replication with distributed availability groups is based on replicating transaction log records. No transaction log records can be truncated from the database on the primary instance until they're replicated to the database on the secondary instance. If transaction log record replication is slow or blocked due to network connection issues, the log file keeps growing on the primary instance. Growth speed depends on the intensity of workload and the network speed. If there's a prolonged network connection outage and heavy workload on primary instance, the log file may take all available storage space.
+The link feature replicates data using the [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups) concept based on the Always On availability groups technology stack. Data replication with distributed availability groups is based on replicating transaction log records. No transaction log records can be truncated from the database on the primary instance until they're replicated to the database on the secondary instance. If transaction log record replication is slow or blocked due to network connection issues, the log file keeps growing on the primary instance. Growth speed depends on the intensity of workload and the network speed. If there's a prolonged network connection outage and heavy workload on the primary instance, the log file may take all available storage space.
-To minimize the risk of running out of space on your primary instance due to log file growth, make sure to take database log backups regularly. By taking log backups regularly, you make your database more resilient to unplanned log growth events. Consider scheduling daily log backup tasks using SQL Server Agent job.
+To minimize the risk of running out of space on your primary instance due to log file growth, make sure to **take database log backups regularly**. By taking log backups regularly, you make your database more resilient to unplanned log growth events. Consider scheduling daily log backup tasks using a SQL Server Agent job.
You can use a Transact-SQL (T-SQL) script to back up the log file, such as the sample provided in this section. Replace the placeholders in the sample script with name of your database, name and path of the backup file, and the description.
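A minimal sketch of such a log backup (the bracketed placeholders are stand-ins you replace, as described above):

```sql
-- Back up the transaction log so that replicated log records can be truncated
BACKUP LOG [<DatabaseName>]
TO DISK = N'<DiskPath>\<DatabaseName>_log.trn'
WITH DESCRIPTION = N'<Description>';
```

A statement like this can be wrapped in a SQL Server Agent job step to run on a daily schedule.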
The query output looks like the following example below for sample database **tp
:::image type="content" source="./media/link-feature-best-practices/database-log-file-size.png" alt-text="Screenshot with results of the command showing log file size and space used":::
-In this example, the database has used 76% of the available log, with an absolute log file size of approximately 27 GB (27,971 MB). The thresholds for action may vary based on your workload, but it's typically an indication that you should take a log backup to truncate log file and free up some space.
+In this example, the database has used 76% of the available log, with an absolute log file size of approximately 27 GB (27,971 MB). The thresholds for action may vary based on your workload, but it's typically an indication that you should take a log backup to truncate the log file and free up some space.
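One way to check log usage for the current database (a sketch; not necessarily the exact query used above) is the `sys.dm_db_log_space_usage` DMV:

```sql
-- Log size and percentage used for the current database
SELECT total_log_size_in_bytes / 1048576 AS total_log_size_mb,
       used_log_space_in_bytes / 1048576 AS used_log_space_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```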
## Add startup trace flags
To get started with the link feature, [prepare your environment for replication]
For more information on the link feature, see the following articles: - [Managed Instance link – overview](link-feature.md)-- [Managed Instance link – connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog)
+- [Managed Instance link – connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog)
azure-sql Link Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature.md
To use the link feature, you'll need:
The underlying technology of near real-time data replication between SQL Server and SQL Managed Instance is based on distributed availability groups, part of the well-known and proven Always On availability group technology stack. Extend your SQL Server on-premises availability group to SQL Managed Instance in Azure in a safe and secure manner.
-There's no need to have an existing availability group or multiple nodes. The link supports single node SQL Server instances without existing availability groups, and also multiple-node SQL Server instances with existing availability groups. Through the link, you can leverage the modern benefits of Azure without migrating your entire SQL Server data estate to the cloud.
+There's no need to have an existing availability group or multiple nodes. The link supports single node SQL Server instances without existing availability groups, and also multiple-node SQL Server instances with existing availability groups. Through the link, you can use the modern benefits of Azure without migrating your entire SQL Server data estate to the cloud.
You can keep running the link for as long as you need it, for months and even years at a time. And for your modernization journey, if or when you're ready to migrate to Azure, the link enables a considerably improved migration experience with the minimum possible downtime compared to all other options available today, providing a true online migration to SQL Managed Instance. ## Supported scenarios
-Data replicated through the link feature from SQL Server to Azure SQL Managed Instance can be used with a number of scenarios, such as:
+Data replicated through the link feature from SQL Server to Azure SQL Managed Instance can be used with several scenarios, such as:
- **Use Azure services without migrating to the cloud** - **Offload read-only workloads to Azure**
Use the link feature to leverage Azure services using SQL Server data without mi
### Offload workloads to Azure
-You can also use the link feature to offload workloads to Azure. For example, an application could use SQL Server for read / write workloads, while offloading read-only workloads to SQL Managed Instance in any of Azure's 60+ regions worldwide. Once the link is established, the primary database on SQL Server is read/write accessible, while replicated data to SQL Managed Instance in Azure is read-only accessible. This allows for various scenarios where replicated databases on SQL Managed Instance can be used for read scale-out and offloading read-only workloads to Azure. SQL Managed Instance, in parallel, can also host independent read/write databases. This allows for copying the replicated database to another read/write database on the same managed instance for further data processing.
+You can also use the link feature to offload workloads to Azure. For example, an application could use SQL Server for read-write workloads, while offloading read-only workloads to SQL Managed Instance in any of Azure's 60+ regions worldwide. Once the link is established, the primary database on SQL Server is read/write accessible, while replicated data to SQL Managed Instance in Azure is read-only accessible. This allows for various scenarios where replicated databases on SQL Managed Instance can be used for read scale-out and offloading read-only workloads to Azure. SQL Managed Instance, in parallel, can also host independent read/write databases. This allows for copying the replicated database to another read/write database on the same managed instance for further data processing.
The link is database scoped (one link per one database), allowing for consolidation and deconsolidation of workloads in Azure. For example, you can replicate databases from multiple SQL Servers to a single SQL Managed Instance in Azure (consolidation), or replicate databases from a single SQL Server to multiple managed instances via a 1 to 1 relationship between a database and a managed instance - to any of Azure's regions worldwide (deconsolidation). The latter provides you with an efficient way to quickly bring your workloads closer to your customers in any region worldwide, which you can use as read-only replicas.
Managed Instance link has a set of general limitations, and those are listed in
- Replicating Databases using Hekaton (In-Memory OLTP) isn't supported on Managed Instance General Purpose service tier. Hekaton is only supported on Managed Instance Business Critical service tier. - For the full list of differences between SQL Server and Managed Instance, see [this article](./transact-sql-tsql-differences-sql-server.md). - In case Change data capture (CDC), log shipping, or service broker are used with database replicated on the SQL Server, and in case of database migration to Managed Instance, on the failover to the Azure, clients will need to connect using instance name of the current global primary replica. you'll need to manually re-configure these settings.-- In case Transactional Replication is used with database replicated on the SQL Server, and in case of migration scenario, on failover to Azure, transactional replication on Azure SQL Managed instance will not continue. you'll need to manually re-configure Transactional Replication.-- In case distributed transactions are used with database replicated from the SQL Server, and in case of migration scenario, on the cutover to the cloud, the DTC capabilities will not be transferred. There will be no possibility for migrated database to get involved in distributed transactions with SQL Server, as Managed Instance doesn't support distributed transactions with SQL Server at this time. For reference, Managed Instance today supports distributed transactions only between other Managed Instances, see [this article](../database/elastic-transactions-overview.md#transactions-for-sql-managed-instance).
+- In case Transactional Replication is used with a database replicated on the SQL Server, and in case of a migration scenario, on failover to Azure, transactional replication on Azure SQL Managed Instance won't continue. You'll need to manually reconfigure Transactional Replication.
+- In case distributed transactions are used with a database replicated from the SQL Server, and in case of a migration scenario, on the cutover to the cloud, the DTC capabilities won't be transferred. There will be no possibility for the migrated database to get involved in distributed transactions with SQL Server, as Managed Instance doesn't support distributed transactions with SQL Server at this time. For reference, Managed Instance today supports distributed transactions only between other Managed Instances; see [this article](../database/elastic-transactions-overview.md#transactions-for-sql-managed-instance).
- Managed Instance link can replicate a database of any size if it fits into the chosen storage size of the target Managed Instance. ### Additional limitations
Some Managed Instance link features and capabilities are limited **at this time*
- Managed Instance Link authentication between SQL Server instance and Managed Instance is certificate-based, available only through exchange of certificates. Windows authentication between instances isn't supported. - Replication of user databases from SQL Server to Managed Instance is one-way. User databases from Managed Instance can't be replicated back to SQL Server. - Auto failover groups replication to secondary Managed Instance can't be used in parallel while operating the Managed Instance Link with SQL Server.
+- Replicated databases aren't part of the auto-backup process on SQL Managed Instance.
## Next steps
-If you are interested in using Link feature for Azure SQL Managed Instance with versions and editions that are currently not supported, sign-up [here](https://aka.ms/mi-link-signup).
+If you're interested in using the link feature for Azure SQL Managed Instance with versions and editions that are currently not supported, sign up [here](https://aka.ms/mi-link-signup).
For more information on the link feature, see the following:
azure-sql Managed Instance Link Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-preparation.md
Previously updated : 03/07/2022 Last updated : 03/10/2022 # Prepare environment for link feature - Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you to prepare your environment for the [Managed Instance link feature](link-feature.md) so that you can replicate your databases from your instance of SQL Server to your instance of Azure SQL Managed Instance.
+This article teaches you to prepare your environment for the [Managed Instance link feature](link-feature.md) so that you can replicate databases from a SQL Server instance to Azure SQL Managed Instance.
> [!NOTE] > The link feature for Azure SQL Managed Instance is currently in preview.
To use the Managed Instance link feature, you need the following prerequisites:
## Prepare your SQL Server instance
-To prepare your SQL Server instance, you need to validate you're on the minimum supported version, you've enabled the availability group feature, and you've added the proper trace flags at startup. You will need to restart SQL Server for these changes to take effect.
+To prepare your SQL Server instance, you need to validate:
+- you're on the minimum supported version;
+- you've enabled the availability group feature;
+- you've added the proper trace flags at startup;
+- your databases are in full recovery mode and backed up.
+
+You'll need to restart SQL Server for these changes to take effect.
### Install CU15 (or higher)
To check your SQL Server version, run the following Transact-SQL (T-SQL) script:
SELECT @@VERSION ```
-If your SQL Server version is lower than CU15 (15.0.4198.2), either install the minimally supported [CU15](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6), or the current latest cumulative update. Your SQL Server instance will be restarted during the update.
+If your SQL Server version is lower than CU15 (15.0.4198.2), either install [CU15](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) or the latest cumulative update. Your SQL Server instance will be restarted during the update.
+
+### Create database master key in the master database
+
+Create a database master key in the `master` database by running the following T-SQL script.
+
+```sql
+-- Create MASTER KEY
+USE MASTER
+CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>'
+```
+
+To check whether you already have a database master key, use the following T-SQL script.
+```sql
+SELECT * FROM sys.symmetric_keys WHERE name LIKE '%DatabaseMasterKey%'
+```
### Enable availability groups feature
-The link feature for SQL Managed Instance relies on the Always On availability groups feature, which is not enabled by default. To learn more, review [enabling the Always On availability groups feature](/sql/database-engine/availability-groups/windows/enable-and-disable-always-on-availability-groups-sql-server).
+The link feature for SQL Managed Instance relies on the Always On availability groups feature, which isn't enabled by default. To learn more, review [enabling the Always On availability groups feature](/sql/database-engine/availability-groups/windows/enable-and-disable-always-on-availability-groups-sql-server).
To confirm the Always On availability groups feature is enabled, run the following Transact-SQL (T-SQL) script:
select
end as 'HadrStatus' ```
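A shorter alternative check (a sketch using `SERVERPROPERTY`, not the script above):

```sql
SELECT CASE SERVERPROPERTY('IsHadrEnabled')
         WHEN 1 THEN 'Availability groups ENABLED.'
         ELSE 'Availability groups DISABLED.'
       END AS HadrStatus;
```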
-If the availability groups feature is not enabled, follow these steps to enable it:
+If the availability groups feature isn't enabled, follow these steps to enable it:
1. Open the **SQL Server Configuration Manager**. 1. Choose the SQL Server service from the navigation pane.
If the availability groups feature is not enabled, follow these steps to enable
To optimize Managed Instance link performance, enabling trace flags `-T1800` and `-T9567` at startup is highly recommended: -- **-T1800**: This trace flag optimizes SQL Server performance when the disks hosting the log files for the primary and secondary replica in an availability group have different sector sizes, such as 512 bytes and 4k. If both primary and secondary replicas have a disk sector size of 4k, this trace flag isn't required. To learn more, review [KB3009974](https://support.microsoft.com/topic/kb3009974-fix-slow-synchronization-when-disks-have-different-sector-sizes-for-primary-and-secondary-replica-log-files-in-sql-server-ag-and-logshipping-environments-ed181bf3-ce80-b6d0-f268-34135711043c). -- **-T9567**: This trace flag enables compression of the data stream for availability groups during automatic seeding, which increases the load on the processor but can significantly reduce transfer time during seeding.
+- **-T1800**: This trace flag optimizes performance when the log files for the primary and secondary replica in an availability group are hosted on disks with different sector sizes, such as 512 bytes and 4k. If both primary and secondary replicas have a disk sector size of 4k, this trace flag isn't required. To learn more, review [KB3009974](https://support.microsoft.com/topic/kb3009974-fix-slow-synchronization-when-disks-have-different-sector-sizes-for-primary-and-secondary-replica-log-files-in-sql-server-ag-and-logshipping-environments-ed181bf3-ce80-b6d0-f268-34135711043c).
+- **-T9567**: This trace flag enables compression of the data stream for availability groups during automatic seeding. The compression increases the load on the processor but can significantly reduce transfer time during seeding.
To enable these trace flags at startup, follow these steps:
To learn more, review [enabling trace flags](/sql/t-sql/database-console-commands/dbcc-traceon-transact-sql). - ### Restart SQL Server and validate configuration - After you've validated you're on a supported version of SQL Server, enabled the Always On availability groups feature, and added your startup trace flags, restart your SQL Server instance to apply all of these changes. To restart your SQL Server instance, follow these steps:
To restart your SQL Server instance, follow these steps:
:::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-restart.png" alt-text="Screenshot showing S Q L Server restart command call.":::
-After the restart, use Transact-SQL to validate the configuration of your SQL Server. Your SQL Server version should be 15.0.4198.2 or greater, the Always On availability groups feature should be enabled, and you should have the Trace flags -T1800 and -T9567 enabled.
+After the restart, use Transact-SQL to validate the configuration of your SQL Server. Your SQL Server version should be 15.0.4198.2 or greater, the Always On availability groups feature should be enabled, and you should have the Trace flags -T1800 and -T9567 enabled.
To validate your configuration, run the following Transact-SQL (T-SQL) script:
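A minimal sketch of such a validation script, checking the three items called out above (version, Always On availability groups, trace flags), might look like this; it isn't necessarily the article's exact script:

```sql
-- Check the SQL Server version (should be 15.0.4198.2 or greater).
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion;

-- Check that the Always On availability groups feature is enabled (1 = enabled).
SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled;

-- List globally enabled trace flags; expect 1800 and 9567 in the output.
DBCC TRACESTATUS(-1);
```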
The following screenshot is an example of the expected outcome for a SQL Server
:::image type="content" source="./media/managed-instance-link-preparation/ssms-results-expected-outcome.png" alt-text="Screenshot showing expected outcome in S S M S.":::
+### User database recovery mode and backup
+
+All databases that are to be replicated via SQL Managed Instance link must be in full recovery mode and have at least one backup.
+
+```sql
+-- Set full recovery mode for all databases you want to replicate.
+ALTER DATABASE [<DatabaseName>] SET RECOVERY FULL
+GO
+
+-- Execute backup for all databases you want to replicate.
+BACKUP DATABASE [<DatabaseName>] TO DISK = N'<DiskPath>'
+GO
+```
+ ## Configure network connectivity

For the Managed Instance link to work, there must be network connectivity between SQL Server and SQL Managed Instance. The network option that you choose depends on where your SQL Server resides - whether it's on-premises or on a virtual machine (VM).
If your SQL Server is hosted outside of Azure, establish a VPN connection betwee
### Open network ports between the environments
-Port 5022 needs to allow inbound and outbound traffic between SQL Server and SQL Managed Instance. Port 5022 is the standard port used for availability groups, and cannot be changed or customized.
+Port 5022 needs to allow inbound and outbound traffic between SQL Server and SQL Managed Instance. Port 5022 is the standard port used for availability groups, and can't be changed or customized.
The following table describes port actions for each environment:
Bidirectional network connectivity between SQL Server and SQL Managed Instance i
### Test connection from SQL Server to SQL Managed Instance
-To check if SQL Server can reach your SQL Managed Instance use the `tnc` command in PowerShell from the SQL Server host machine. Replace `<ManagedInstanceFQDN>` with the fully qualified domain name of the Azure SQL Managed Instance.
+To check if SQL Server can reach your SQL Managed Instance, use the `tnc` command in PowerShell from the SQL Server host machine. Replace `<ManagedInstanceFQDN>` with the fully qualified domain name of the Azure SQL Managed Instance.
```powershell
tnc <ManagedInstanceFQDN> -port 5022
```
A successful test shows `TcpTestSucceeded : True`:
:::image type="content" source="./media/managed-instance-link-preparation/powershell-output-tnc-command.png" alt-text="Screenshot showing output of T N C command in PowerShell.":::
-If the response is unsuccessful, verify the following:
+If the response is unsuccessful, verify the following network settings:
- There are rules in both the network firewall *and* the Windows Firewall that allow traffic to the *subnet* of the SQL Managed Instance. -- There is an NSG rule allowing communication on port 5022 for the virtual network hosting the SQL Managed Instance.
+- There's an NSG rule allowing communication on port 5022 for the virtual network hosting the SQL Managed Instance.
#### Test connection from SQL Managed Instance to SQL Server
DROP CERTIFICATE TEST_CERT
GO
```
-If the connection is unsuccessful, verify the following:
+If the connection is unsuccessful, verify the following items:
- The firewall on the host SQL Server allows inbound and outbound communication on port 5022. -- There is an NSG rule for the virtual network hosting the SQL Managed instance that allows communication on port 5022. -- If your SQL Server is on an Azure VM, there is an NSG rule allowing communication on port 5022 on the virtual network hosting the VM.
+- There's an NSG rule for the virtual network hosting the SQL Managed instance that allows communication on port 5022.
+- If your SQL Server is on an Azure VM, there's an NSG rule allowing communication on port 5022 on the virtual network hosting the VM.
- SQL Server is running.

> [!CAUTION]
azure-sql Managed Instance Link Use Ssms To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-failover-database.md
Previously updated : 03/07/2022 Last updated : 03/10/2022 # Failover database with link feature in SSMS - Azure SQL Managed Instance
To failover your database, follow these steps:
:::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-introduction.png" alt-text="Screenshot showing Introduction page.":::
-3. On the **Log in to Azure** page, select **Sign-in** to provide your credentials and sign into your Azure account. Select the subscription that is hosting the your SQL Managed Instance from the drop-down and then select **Next**:
+3. On the **Log in to Azure** page, select **Sign-in** to provide your credentials and sign into your Azure account. Select the subscription that is hosting your SQL Managed Instance from the drop-down and then select **Next**:
:::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-login-to-azure.png" alt-text="Screenshot showing Log in to Azure page.":::
azure-sql Managed Instance Link Use Ssms To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-replicate-database.md
Previously updated : 03/07/2022 Last updated : 03/10/2022 # Replicate database with link feature in SSMS - Azure SQL Managed Instance
Use the **New Managed Instance link** wizard in SQL Server Management Studio (SS
To set up the Managed Instance link, follow these steps: 1. Open SQL Server Management Studio (SSMS) and connect to your instance of SQL Server.
-1. In **Object Explorer**, right-click your database, hover over **Azure SQL Managed Instance link** and select **Replicate database** to open the **New Managed Instance link** wizard:
+1. In **Object Explorer**, right-click your database, hover over **Azure SQL Managed Instance link** and select **Replicate database** to open the **New Managed Instance link** wizard. If the SQL Server version isn't supported, this option won't be available in the context menu.
:::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-ssms-database-context-replicate-database.png" alt-text="Screenshot showing database's context menu option to replicate database after hovering over Azure SQL Managed Instance link.":::
azure-sql Availability Group Azure Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/availability-group-azure-portal-configure.md
If you do not already have an existing cluster, create it by using the Azure por
:::image type="content" source="media/availability-group-az-portal-configure/configure-new-cluster-1.png" alt-text="Provide name, storage account, and credentials for the cluster":::
-1. Expand **Windows Server Failover Cluster credentials** to provide [credentials](/rest/api/sqlvm/sqlvirtualmachinegroups/createorupdate#wsfcdomainprofile) for the SQL Server service account, as well as the cluster operator and bootstrap accounts if they're different than the account used for the SQL Server service.
+1. Expand **Windows Server Failover Cluster credentials** to provide [credentials](/rest/api/sqlvm/2021-11-01-preview/sql-virtual-machine-groups/create-or-update#wsfcdomainprofile) for the SQL Server service account, as well as the cluster operator and bootstrap accounts if they're different than the account used for the SQL Server service.
:::image type="content" source="media/availability-group-az-portal-configure/configure-new-cluster-2.png" alt-text="Provide credentials for the SQL Service account, cluster operator account and cluster bootstrap account":::
azure-sql Sql Assessment For Sql Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-assessment-for-sql-vm.md
The SQL best practices assessment feature of the Azure portal identifies possible performance issues and evaluates that your SQL Server on Azure Virtual Machines (VMs) is configured to follow best practices using the [rich ruleset](https://github.com/microsoft/sql-server-samples/blob/master/samples/manage/sql-assessment-api/DefaultRuleset.csv) provided by the [SQL Assessment API](/sql/sql-assessment-api/sql-assessment-api-overview).
-To learn more, watch this video on [SQL best practices assessment](/shows/Data-Exposed/?WT.mc_id=dataexposed-c9-niner):
+To learn more, watch this video on [SQL best practices assessment](/shows/Data-Exposed/optimally-configure-sql-server-on-azure-virtual-machines-with-sql-assessment?WT.mc_id=dataexposed-c9-niner):
<iframe src="https://aka.ms/docs/player?id=13b2bf63-485c-4ec2-ab14-a1217734ad9f" width="640" height="370"></iframe>
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
If you run the script on a computer with restricted access, ensure there's acces
> [!NOTE] > > In case, the backed up VM is Windows, then the geo-name will be mentioned in the password generated.<br><br>
-> For eg, if the generated password is *ContosoVM_wcus_GUID*, then then geo-name is wcus and the URL would be: <https://pod01-rec2.wcus.backup.windowsazure.com><br><br>
+> For example, if the generated password is *ContosoVM_wcus_GUID*, then the geo-name is wcus and the URL would be: <`https://pod01-rec2.wcus.backup.windowsazure.com`><br><br>
> > > If the backed up VM is Linux, then the script file you downloaded in step 1 [above](#step-1-generate-and-download-script-to-browse-and-recover-files) will have the **geo-name** in the name of the file. Use that **geo-name** to fill in the URL. The downloaded script name will begin with: \'VMname\'\_\'geoname\'_\'GUID\'.<br><br>
-> So for example, if the script filename is *ContosoVM_wcus_12345678*, the **geo-name** is *wcus* and the URL would be: <https://pod01-rec2.wcus.backup.windowsazure.com><br><br>
+> So for example, if the script filename is *ContosoVM_wcus_12345678*, the **geo-name** is *wcus* and the URL would be: <`https://pod01-rec2.wcus.backup.windowsazure.com`><br><br>
>
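The geo-name-to-URL mapping described in the note above is mechanical enough to sketch in code. `build_service_url` below is a hypothetical helper for illustration, not part of any Azure tooling, and it assumes the filename format `VMname_geoname_GUID` stated in the note:

```python
def build_service_url(script_filename: str) -> str:
    """Extract the geo-name from a downloaded script filename
    (format: VMname_geoname_GUID) and build the service URL."""
    geo_name = script_filename.split("_")[1]
    return f"https://pod01-rec2.{geo_name}.backup.windowsazure.com"

# For the example filename from the note above:
print(build_service_url("ContosoVM_wcus_12345678"))
```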
backup Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-overview.md
Title: What is Azure Backup? description: Provides an overview of the Azure Backup service, and how it contributes to your business continuity and disaster recovery (BCDR) strategy. Previously updated : 01/04/2022 Last updated : 03/11/2022 # What is the Azure Backup service?
The Azure Backup service provides simple, secure, and cost-effective solutions t
- **Azure Files shares** - [Back up Azure File shares to a storage account](backup-afs.md) - **SQL Server in Azure VMs** - [Back up SQL Server databases running on Azure VMs](backup-azure-sql-database.md) - **SAP HANA databases in Azure VMs** - [Backup SAP HANA databases running on Azure VMs](backup-azure-sap-hana-database.md)-- **Azure Database for PostgreSQL servers (preview)** - [Back up Azure PostgreSQL databases and retain the backups for up to 10 years](backup-azure-database-postgresql.md)
+- **Azure Database for PostgreSQL servers** - [Back up Azure PostgreSQL databases and retain the backups for up to 10 years](backup-azure-database-postgresql.md)
- **Azure Blobs** - [Overview of operational backup for Azure Blobs](blob-backup-overview.md) ![Azure Backup Overview](./media/backup-overview/azure-backup-overview.png)
backup Blob Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-restore.md
Title: Restore Azure Blobs description: Learn how to restore Azure Blobs. Previously updated : 05/05/2021 Last updated : 03/11/2022
Block blobs in storage accounts with operational backup configured can be restor
- Blobs will be restored to the same storage account. So blobs that have undergone changes since the time to which you're restoring will be overwritten. - Only block blobs in a standard general-purpose v2 storage account can be restored as part of a restore operation. Append blobs, page blobs, and premium block blobs aren't restored.-- While a restore job is in progress, blobs in the storage cannot be read or written to.
+- When you perform a restore operation, Azure Storage blocks data operations on the blobs in the ranges being restored for the duration of the operation.
- A blob with an active lease cannot be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail automatically. Break any active leases before starting the restore operation. - Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state. - If you delete a container from the storage account by calling the **Delete Container** operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers in addition to operational backup to protect against accidental deletion of containers.
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
cloudfoundry Create Cloud Foundry On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/create-cloud-foundry-on-azure.md
For more information, see [Use SSH keys with Windows on Azure](../virtual-machin
5. Set the permission role of your service principal as a Contributor. ```azurecli
- az role assignment create --assignee "{enter-your-homepage}" --role "Contributor"
+ az role assignment create --assignee "{enter-your-homepage}" --role "Contributor" --scope /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}
``` Or you also can use ```azurecli
- az role assignment create --assignee {service-principal-name} --role "Contributor"
+ az role assignment create --assignee {service-principal-name} --role "Contributor" --scope /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}
``` ![Service principal role assignment](media/deploy/svc-princ.png )
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
container-apps Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress.md
With ingress enabled, your container app features the following characteristics:
- Supports TLS termination - Supports HTTP/1.1 and HTTP/2
+- Supports WebSocket and gRPC
- Endpoints always use TLS 1.2, terminated at the ingress point - Endpoints always expose ports 80 (for HTTP) and 443 (for HTTPS). - By default, HTTP requests to port 80 are automatically redirected to HTTPS on 443.
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
container-registry Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Container Registry description: Sample Azure Resource Graph queries for Azure Container Registry showing use of resource types and tables to access Azure Container Registry related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
cosmos-db Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Cosmos DB description: Sample Azure Resource Graph queries for Azure Cosmos DB showing use of resource types and tables to access Azure Cosmos DB related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-processor.md
ms.devlang: csharp Previously updated : 11/16/2021 Last updated : 03/10/2022
There are four main components of implementing the change feed processor:
1. **The lease container:** The lease container acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
-1. **The host:** A host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different **instance name**.
+1. **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it could be a VM, a Kubernetes pod, an Azure App Service instance, or a physical machine. It has a unique identifier referenced as the *instance name* throughout this article.
1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads. To further understand how these four elements of change feed processor work together, let's look at an example in the following diagram. The monitored container stores documents and uses 'City' as the partition key. We see that the partition key values are distributed in ranges that contain items.
-There are two host instances and the change feed processor is assigning different ranges of partition key values to each instance to maximize compute distribution.
+There are two compute instances, and the change feed processor is assigning different ranges of partition key values to each instance to maximize compute distribution. Each instance has a unique name.
Each range is being read in parallel and its progress is maintained separately from other ranges in the lease container. :::image type="content" source="./media/change-feed-processor/changefeedprocessor.png" alt-text="Change feed processor example" border="false":::
An example of a delegate would be:
[!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=Delegate)]
-Finally you define a name for this processor instance with `WithInstanceName` and which is the container to maintain the lease state with `WithLeaseContainer`.
+Afterwards, you define the compute instance name or unique identifier with `WithInstanceName`, which should be unique and different for each compute instance you deploy, and finally the container to maintain the lease state with `WithLeaseContainer`.
Calling `Build` will give you the processor instance that you can start by calling `StartAsync`.
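Putting those pieces together, a builder call might look like the following sketch. The names `monitoredContainer`, `leaseContainer`, and `HandleChangesAsync` are illustrative placeholders, not identifiers from the sample:

```csharp
// "monitoredContainer", "leaseContainer", and HandleChangesAsync are
// placeholders for your own Container references and delegate.
ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<dynamic>(
        processorName: "changeFeedSample",
        onChangesDelegate: HandleChangesAsync)
    .WithInstanceName("instance-01")   // unique per compute instance
    .WithLeaseContainer(leaseContainer)
    .Build();

await processor.StartAsync();
```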
The change feed processor lets you hook to relevant events in its [life cycle](#
## Deployment unit
-A single change feed processor deployment unit consists of one or more instances with the same `processorName` and lease container configuration. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances.
+A single change feed processor deployment unit consists of one or more compute instances with the same `processorName` and lease container configuration, but a different instance name for each. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consists of one or more instances.
For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified. ## Dynamic scaling
-As mentioned before, within a deployment unit you can have one or more instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
+As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
1. All instances should have the same lease container configuration. 1. All instances should have the same `processorName`.
cosmos-db Sql Api Dotnet V3sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v3sdk-samples.md
Title: 'Azure Cosmos DB: .NET (Microsoft.Azure.Cosmos) examples for the SQL API'
-description: Find the C# .NET V3 SDK examples on GitHub for common tasks using the Azure Cosmos DB SQL API.
+description: Find the C# .NET v3 SDK examples on GitHub for common tasks by using the Azure Cosmos DB SQL API.
Last updated 02/23/2022
-# Azure Cosmos DB.NET V3 SDK (Microsoft.Azure.Cosmos) examples for the SQL API
+
+# Azure Cosmos DB .NET v3 SDK (Microsoft.Azure.Cosmos) examples for the SQL API
+ [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
-> * [.NET V3 SDK Examples](sql-api-dotnet-v3sdk-samples.md)
-> * [Java V4 SDK Examples](sql-api-java-sdk-samples.md)
-> * [Spring Data V3 SDK Examples](sql-api-spring-data-sdk-samples.md)
+> * [.NET v3 SDK Examples](sql-api-dotnet-v3sdk-samples.md)
+> * [Java v4 SDK Examples](sql-api-java-sdk-samples.md)
+> * [Spring Data v3 SDK Examples](sql-api-spring-data-sdk-samples.md)
> * [Node.js Examples](sql-api-nodejs-samples.md) > * [Python Examples](sql-api-python-samples.md)
-> * [.NET V2 SDK Examples (Legacy)](sql-api-dotnet-v2sdk-samples.md)
+> * [.NET v2 SDK Examples (Legacy)](sql-api-dotnet-v2sdk-samples.md)
> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db) > >
-The [azure-cosmos-dotnet-v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage) GitHub repository includes the latest .NET sample solutions to perform CRUD and other common operations on Azure Cosmos DB resources. If you're familiar with the previous version of the .NET SDK, you may be used to the terms collection and document. Because Azure Cosmos DB supports multiple API models, version 3.0 of the .NET SDK uses the generic terms "container" and "item". A container can be a collection, graph, or table. An item can be a document, edge/vertex, or row, and is the content inside a container. This article provides:
+The [azure-cosmos-dotnet-v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage) GitHub repository includes the latest .NET sample solutions. You use these solutions to perform CRUD (create, read, update, and delete) and other common operations on Azure Cosmos DB resources.
+
+If you're familiar with the previous version of the .NET SDK, you might be used to the terms collection and document. Because Azure Cosmos DB supports multiple API models, version 3.0 of the .NET SDK uses the generic terms *container* and *item*. A container can be a collection, graph, or table. An item can be a document, edge/vertex, or row, and is the content inside a container. This article provides:
* Links to the tasks in each of the example C# project files. * Links to the related API reference content. ## Prerequisites
-Visual Studio 2019 with the Azure development workflow installed
--- You can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
+- Visual Studio 2019 with the Azure development workflow installed. You can download and use the free [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
- The [Microsoft.Azure.cosmos NuGet package](https://www.nuget.org/packages/Microsoft.Azure.cosmos/)
+- The [Microsoft.Azure.cosmos NuGet package](https://www.nuget.org/packages/Microsoft.Azure.cosmos/).
-An Azure subscription or free Cosmos DB trial account
+- An Azure subscription or free Azure Cosmos DB trial account.
-- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+ - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month, which you can use for paid Azure services.
+- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Your Visual Studio subscription gives you credits every month, which you can use for paid Azure services.
+ - [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)] > [!NOTE]
An Azure subscription or free Cosmos DB trial account
## Database examples
-The [RunDatabaseDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/DatabaseManagement/Program.cs#L65-L91) method of the sample *DatabaseManagement* project shows how to do the following tasks. To learn about Azure Cosmos databases before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
+The [RunDatabaseDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/DatabaseManagement/Program.cs#L65-L91) method of the sample *DatabaseManagement* project shows how to do the following tasks. To learn about Azure Cosmos DB databases before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
| Task | API reference | | | |
The [RunDatabaseDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/maste
## Container examples
-The [RunContainerDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L69-L89) method of the sample *ContainerManagement* project shows how to do the following tasks. To learn about Azure Cosmos containers before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
+The [RunContainerDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L69-L89) method of the sample *ContainerManagement* project shows how to do the following tasks. To learn about Azure Cosmos DB containers before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
| Task | API reference | | | |
The [RunContainerDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/mast
## Item examples
-The [RunItemsDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L119-L130) method of the sample *ItemManagement* project shows how to do the following tasks. To learn about Azure Cosmos items before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
+The [RunItemsDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L119-L130) method of the sample *ItemManagement* project shows how to do the following tasks. To learn about Azure Cosmos DB items before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
| Task | API reference | | | |
The [RunIndexDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/M
## Query examples
-The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L76-L96) method of the sample *Queries* project shows how to do the following tasks using the SQL query grammar, the LINQ provider with query, and Lambda. To learn about the SQL query reference in Azure Cosmos DB before you run the following samples, see [SQL query examples for Azure Cosmos DB](./sql-query-getting-started.md).
+The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L76-L96) method of the sample *Queries* project shows how to do the following tasks, by using the SQL query grammar, the LINQ provider with query, and Lambda. To learn about the SQL query reference in Azure Cosmos DB before you run the following samples, see [SQL query examples for Azure Cosmos DB](./sql-query-getting-started.md).
| Task | API reference |
| --- | --- |
The [RunBasicChangeFeed](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/ma
| [Basic change feed functionality](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L91-L119) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
| [Read change feed from a specific time](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L127-L162) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
| [Read change feed from the beginning](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L170-L198) |[ChangeFeedProcessorBuilder.WithStartTime(DateTime)](/dotnet/api/microsoft.azure.cosmos.changefeedprocessorbuilder.withstarttime) |
-| [Migrate from change feed processor to change feed in V3 SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L256-L333) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
+| [Migrate from change feed processor to change feed in v3 SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L256-L333) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
## Server-side programming examples
The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/M
| [Execute a stored procedure](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ServerSideScripts/Program.cs#L135) |[Scripts.ExecuteStoredProcedureAsync](/dotnet/api/microsoft.azure.cosmos.scripts.scripts.executestoredprocedureasync) |
| [Delete a stored procedure](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ServerSideScripts/Program.cs#L351) |[Scripts.DeleteStoredProcedureAsync](/dotnet/api/microsoft.azure.cosmos.scripts.scripts.deletestoredprocedureasync) |
-## Custom Serialization
+## Custom serialization
-The [SystemTextJson](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/Program.cs) sample project shows how to use a custom serializer when initializing a new `CosmosClient` object. The sample also includes [a custom `CosmosSerializer` class](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/CosmosSystemTextJsonSerializer.cs) which leverages `System.Text.Json` for serialization and deserialization.
+The [SystemTextJson](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/Program.cs) sample project shows how to use a custom serializer when you're initializing a new `CosmosClient` object. The sample also includes [a custom `CosmosSerializer` class](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/CosmosSystemTextJsonSerializer.cs), which uses `System.Text.Json` for serialization and deserialization.
## Next steps

Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units by using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+
+* If you know typical request rates for your current database workload, read about [estimating request units by using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
Title: Troubleshoot Azure Cosmos DB slow requests with the .NET SDK
-description: Learn how to diagnose and fix slow requests when using Azure Cosmos DB .NET SDK.
+ Title: Troubleshoot slow requests in Azure Cosmos DB .NET SDK
+description: Learn how to diagnose and fix slow requests when you use Azure Cosmos DB .NET SDK.
-# Diagnose and troubleshoot Azure Cosmos DB .NET SDK slow requests
+# Diagnose and troubleshoot slow requests in Azure Cosmos DB .NET SDK
+ [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-Azure Cosmos DB slow requests can happen for multiple reasons such as request throttling or the way your application is designed. This article explains the different root causes for this issue.
+In Azure Cosmos DB, you might notice slow requests. Delays can happen for multiple reasons, such as request throttling or the way your application is designed. This article explains the different root causes for this problem.
-## Request rate too large (429 throttles)
+## Request rate too large
-Request throttling is the most common reason for slow requests. Azure Cosmos DB will throttle requests if they exceed the allocated RUs for the database or container. The SDK has built-in logic to retry these requests. The [request rate too large](troubleshoot-request-rate-too-large.md#how-to-investigate) troubleshooting article explains how to check if the requests are being throttled and how to scale your account to avoid these issues in the future.
+Request throttling is the most common reason for slow requests. Azure Cosmos DB throttles requests if they exceed the allocated request units for the database or container. The SDK has built-in logic to retry these requests. The [request rate too large](troubleshoot-request-rate-too-large.md#how-to-investigate) troubleshooting article explains how to check if the requests are being throttled. The article also discusses how to scale your account to avoid these problems in the future.
## Application design
-If your application doesn't follow the SDK best practices, it can result in different issues that will cause slow or failed requests. Follow the [.NET SDK best practices](performance-tips-dotnet-sdk-v3-sql.md) for the best performance.
+When you design your application, [follow the .NET SDK best practices](performance-tips-dotnet-sdk-v3-sql.md) for the best performance. If your application doesn't follow the SDK best practices, you might get slow or failed requests.
Consider the following when developing your application:
-* Application should be in the same region as your Azure Cosmos DB account.
-* Singleton instance of the SDK instance. The SDK has several caches that have to be initialized which may slow down the first few requests.
-* Use Direct + TCP connectivity mode
-* Avoid High CPU. Make sure to look at Max CPU and not average, which is the default for most logging systems. Anything above roughly 40% can increase the latency.
+
+* The application should be in the same region as your Azure Cosmos DB account.
+* The SDK has several caches that have to be initialized, which might slow down the first few requests.
+* The connectivity mode should be direct and TCP.
+* Avoid high CPU. Make sure to look at the maximum CPU and not the average, which is the default for most logging systems. Anything above roughly 40 percent can increase the latency.
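The application-design guidance in the hunk above can be sketched in C#. This is a minimal illustration, not the article's own sample: the endpoint and key are placeholders, and the region constant is an assumed choice.

```csharp
using Microsoft.Azure.Cosmos;

public static class CosmosClientFactory
{
    // Keep a single CosmosClient for the lifetime of the application, so
    // the SDK caches (routing, metadata) are initialized only once and the
    // first-request warmup cost is paid only once.
    private static readonly CosmosClient client = new CosmosClient(
        "<account-endpoint>",   // placeholder: your account endpoint
        "<account-key>",        // placeholder: your account key
        new CosmosClientOptions
        {
            // Direct connectivity mode over TCP, as recommended above.
            ConnectionMode = ConnectionMode.Direct,
            // Prefer the region where the application runs (assumed here).
            ApplicationRegion = Regions.WestUS2
        });

    public static CosmosClient Instance => client;
}
```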
## Metadata operations
-Do not verify a Database and/or Container exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. The validation should only be done on application startup when it is necessary, if you expect them to be deleted (otherwise it's not needed). These metadata operations will generate extra end-to-end latency, have no SLA, and their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429) that do not scale like data operations.
+If you need to verify that a database or container exists, don't do so by calling `Create...IfNotExistsAsync` or `Read...Async` before doing an item operation. The validation should only be done on application startup when it's necessary, if you expect them to be deleted. These metadata operations generate extra latency, have no service-level agreement (SLA), and have their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429). They don't scale like data operations.
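A hedged sketch of the startup-only validation pattern described above, assuming the v3 .NET SDK; the database, container, and partition key names are hypothetical.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class StartupInitializer
{
    // Run Create...IfNotExistsAsync once at application startup, then cache
    // and reuse the Container reference in the hot path. Don't repeat these
    // metadata calls per item operation.
    public static async Task<Container> InitializeAsync(CosmosClient client)
    {
        Database database = await client.CreateDatabaseIfNotExistsAsync("appdb");
        Container container = await database.CreateContainerIfNotExistsAsync(
            id: "items",
            partitionKeyPath: "/pk");
        return container; // cache this; no re-validation on each request
    }
}
```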
## Slow requests on bulk mode
Do not verify a Database and/or Container exists by calling `Create...IfNotExist
## <a name="capture-diagnostics"></a>Capture the diagnostics
-All the responses in the SDK including `CosmosException` have a Diagnostics property. This property records all the information related to the single request including if there were retries or any transient failures.
+All the responses in the SDK, including `CosmosException`, have a `Diagnostics` property. This property records all the information related to the single request, including if there were retries or any transient failures.
-The Diagnostics are returned as a string. The string changes with each version as it is improved to better troubleshooting different scenarios. With each version of the SDK, the string will have breaking changes to the formatting. Do not parse the string to avoid breaking changes. The following code sample shows how to read diagnostic logs using the .NET SDK:
+The diagnostics are returned as a string. The string changes with each version, as it's improved for troubleshooting different scenarios. With each version of the SDK, the string will have breaking changes to the formatting. Don't parse the string to avoid breaking changes. The following code sample shows how to read diagnostic logs by using the .NET SDK:
```c#
try
if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpa
}
```
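The truncated sample above follows a pattern that can be sketched as follows; `ConfigurableSlowRequestTimeSpan` is an assumed application setting, and the threshold value is illustrative.

```csharp
using System;
using Microsoft.Azure.Cosmos;

public static class SlowRequestLogger
{
    // Assumed threshold: treat anything slower than this as worth logging.
    private static readonly TimeSpan ConfigurableSlowRequestTimeSpan =
        TimeSpan.FromMilliseconds(500);

    public static void LogIfSlow<T>(ItemResponse<T> response)
    {
        // Capture the diagnostics string only when the client-side elapsed
        // time exceeds the threshold.
        if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan)
        {
            // The diagnostics string format can change between SDK versions;
            // log it as-is, don't parse it.
            Console.WriteLine(response.Diagnostics.ToString());
        }
    }
}
```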
+## Diagnostics in version 3.19 and later
-## Diagnostics in version 3.19 and higher
-The JSON structure has breaking changes with each version of the SDK. This makes it unsafe to be parsed. The JSON represents a tree structure of the request going through the SDK. This covers a few key things to look at:
+The JSON structure has breaking changes with each version of the SDK. This makes it unsafe to be parsed. The JSON represents a tree structure of the request going through the SDK. The following sections cover a few key things to look at.
### <a name="cpu-history"></a>CPU history
-High CPU utilization is the most common cause for slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries where the requests might do multiple connections for a single query.
-# [3.21 or greater SDK](#tab/cpu-new)
+High CPU utilization is the most common cause for slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where the requests might do multiple connections for a single query.
+
+# [3.21 or later SDK](#tab/cpu-new)
-The timeouts will contain *Diagnostics*, which contain:
+The timeouts include diagnostics, which contain the following, for example:
```json
"systemHistory": [
The timeouts will contain *Diagnostics*, which contain:
]
```
-* If the `cpu` values are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
-* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case the solution is to investigate the source/s of the thread starvation (potentially locked threads), or scale the machine/s to a larger resource size.
-* If the `dateUtc` time in-between measurements is not approximately 10 seconds, it also would indicate contention on the thread pool. CPU is measured as an independent Task that is enqueued in the thread pool every 10 seconds, if the time in-between measurement is longer, it would indicate that the async Tasks are not able to be processed in a timely fashion. Most common scenarios are when doing [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait) in the application code.
+* If the `cpu` values are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
+* If the `dateUtc` time between measurements is not approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent task that is enqueued in the thread pool every 10 seconds. If the time between measurements is longer, it indicates that the async tasks aren't able to be processed in a timely fashion. The most common scenario is when your application code is [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
# [Older SDK](#tab/cpu-old)
-If the error contains `TransportException` information, it might contain also `CPU History`:
+If the error contains `TransportException` information, it might also contain `CPU history`:
```
CPU history:
CPU history:
CPU count: 8)
```
-* If the CPU measurements are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
-* If the CPU measurements are not happening every 10 seconds (e.g., gaps or measurement times indicate larger times in between measurements), the cause is thread starvation. In this case the solution is to investigate the source/s of the thread starvation (potentially locked threads), or scale the machine/s to a larger resource size.
+* If the CPU measurements are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the CPU measurements are not happening every 10 seconds (for example, there are gaps, or measurement times indicate longer times in between measurements), the cause is thread starvation. In this case the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
+
-#### Solution:
-The client application that uses the SDK should be scaled up or out.
+#### Solution
+The client application that uses the SDK should be scaled up or out.
### <a name="httpResponseStats"></a>HttpResponseStats
-HttpResponseStats are request going to [gateway](sql-sdk-connection-modes.md). Even in Direct mode the SDK gets all the meta data information from the gateway.
-
-If the request is slow, first verify all the suggestions above don't yield results.
-If it is still slow different patterns point to different issues:
+`HttpResponseStats` are requests that go to the [gateway](sql-sdk-connection-modes.md). Even in direct mode, the SDK gets all the metadata information from the gateway.
-Single store result for a single request
+If the request is slow, first verify that none of the previous suggestions yield the desired results. If it's still slow, different patterns point to different problems. The following table provides more details.
| Number of requests | Scenario | Description |
|-|-|-|
-| Single to all | Request Timeout or HttpRequestExceptions | Points to [SNAT Port exhaustion](troubleshoot-dot-net-sdk.md#snat) or lack of resources on the machine to process request in time. |
-| Single or small percentage (SLA is not violated) | All | A single or small percentage of slow requests can be caused by several different transient issues and should be expected. |
-| All | All | Points to an issue with the infrastructure or networking. |
-| SLA Violated | No changes to application and SLA dropped | Points to an issue with the Azure Cosmos DB service. |
+| Single to all | Request timeout or `HttpRequestExceptions` | Points to [SNAT port exhaustion](troubleshoot-dot-net-sdk.md#snat), or a lack of resources on the machine to process the request in time. |
+| Single or small percentage (SLA isn't violated) | All | A single or small percentage of slow requests can be caused by several different transient problems, and should be expected. |
+| All | All | Points to a problem with the infrastructure or networking. |
+| SLA violated | No changes to application, and SLA dropped. | Points to a problem with the Azure Cosmos DB service. |
```json
"HttpResponseStats": [
Single store result for a single request
```

### <a name="storeResult"></a>StoreResult
-StoreResult represents a single request to Azure Cosmos DB using Direct mode with TCP protocol.
-If it is still slow different patterns point to different issues:
+`StoreResult` represents a single request to Azure Cosmos DB, by using direct mode with the TCP protocol.
-Single store result for a single request
+If it's still slow, different patterns point to different problems. The following table provides more details.
| Number of requests | Scenario | Description |
|-|-|-|
-| Single to all | StoreResult contains TransportException | Points to [SNAT Port exhaustion](troubleshoot-dot-net-sdk.md#snat) or lack of resources on the machine to process request in time. |
-| Single or small percentage (SLA is not violated) | All | A single or small percentage of slow requests can be caused by several different transient issues and should be expected. |
-| All | All | An issue with the infrastructure or networking. |
-| SLA Violated | Requests contain multiple failure error codes like 410 and IsValid is true | Points to an issue with the Cosmos DB service |
-| SLA Violated | Requests contain multiple failure error codes like 410 and IsValid is false | Points to an issue with the machine |
-| SLA Violated | StorePhysicalAddress is the same with no failure status code | Likely an issue with Cosmos DB service |
-| SLA Violated | StorePhysicalAddress have the same partition ID but different replica IDs with no failure status code | Likely an issue with the Cosmos DB service |
-| SLA Violated | StorePhysicalAddress are random with no failure status code | Points to an issue with the machine |
+| Single to all | `StoreResult` contains `TransportException` | Points to [SNAT port exhaustion](troubleshoot-dot-net-sdk.md#snat), or a lack of resources on the machine to process the request in time. |
+| Single or small percentage (SLA isn't violated) | All | A single or small percentage of slow requests can be caused by several different transient problems, and should be expected. |
+| All | All | A problem with the infrastructure or networking. |
+| SLA violated | Requests contain multiple failure error codes, like `410` and `IsValid is true`. | Points to a problem with the Azure Cosmos DB service. |
+| SLA violated | Requests contain multiple failure error codes, like `410` and `IsValid is false`. | Points to a problem with the machine. |
+| SLA violated | `StorePhysicalAddress` are the same, with no failure status code. | Likely a problem with Azure Cosmos DB. |
+| SLA violated | `StorePhysicalAddress` have the same partition ID, but different replica IDs, with no failure status code. | Likely a problem with Azure Cosmos DB. |
+| SLA violated | `StorePhysicalAddress` is random, with no failure status code. | Points to a problem with the machine. |
-Multiple StoreResults for single request:
+For multiple store results for a single request, be aware of the following:
-* Strong and bounded staleness consistency will always have at least two store results
-* Check the status code of each StoreResult. The SDK retries automatically on multiple different [transient failures](troubleshoot-dot-net-sdk-request-timeout.md). The SDK is constantly being improved to cover more scenarios.
+* Strong consistency and bounded staleness consistency always have at least two store results.
+* Check the status code of each `StoreResult`. The SDK retries automatically on multiple different [transient failures](troubleshoot-dot-net-sdk-request-timeout.md). The SDK is constantly improved to cover more scenarios.
### <a name="rntbdRequestStats"></a>RntbdRequestStats

+ Show the time for the different stages of sending and receiving a request in the transport layer.
-* ChannelAcquisitionStarted: The time to get or create a new connection. New connections can be created for numerous different regions. For example, a connection was unexpectedly closed or too many requests were getting sent through the existing connections so a new connection is being created.
-* Pipelined time is large points to possibly a large request.
-* Transit time is large, which leads to a networking issue. Compare this number to the `BELatencyInMs`. If the BELatencyInMs is small, then the time was spent on the network and not on the Azure Cosmos DB service.
-* Received time is large this points to a thread starvation issue. This the time between having the response and returning the result.
+* `ChannelAcquisitionStarted`: The time to get or create a new connection. You can create new connections for numerous different regions. For example, let's say that a connection was unexpectedly closed, or too many requests were getting sent through the existing connections. You create a new connection.
+* *Pipelined time is large* might be caused by a large request.
+* *Transit time is large*, which leads to a networking problem. Compare this number to the `BELatencyInMs`. If `BELatencyInMs` is small, then the time was spent on the network, and not on the Azure Cosmos DB service.
+* *Received time is large* might be caused by a thread starvation problem. This is the time between having the response and returning the result.
```json
"StoreResult": {
Show the time for the different stages of sending and receiving a request in the
```

### Failure rate violates the Azure Cosmos DB SLA
-Contact [Azure Support](https://aka.ms/azure-support).
+
+Contact [Azure support](https://aka.ms/azure-support).
## Next steps
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+
+* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) problems when you use the Azure Cosmos DB .NET SDK.
* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 02/15/2022 Last updated : 03/08/2022

# Azure Policy built-in definitions for Data Factory (Preview)
data-lake-analytics Data Lake Analytics U Sql Develop With Python R Csharp In Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-develop-with-python-r-csharp-in-vscode.md
Register Python and R extension assemblies for your ADL account.
3. Select **Install U-SQL Extensions**.
4. A confirmation message is displayed after the U-SQL extensions are installed.
- ![Set up the environment for python and R](./media/data-lake-analytics-data-lake-tools-for-vscode/setup-the-enrionment-for-python-and-r.png)
+ ![Set up the environment for Python and R](./media/data-lake-analytics-data-lake-tools-for-vscode/setup-the-enrionment-for-python-and-r.png)
> [!NOTE]
> For the best experience with the Python and R language services, install the VSCode Python and R extensions.
data-lake-analytics Data Lake Analytics U Sql Python Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-python-extensions.md
All the standard Python modules are included.
### Additional Python modules
-Besides the standard Python libraries, several commonly used python libraries are included:
+Besides the standard Python libraries, several commonly used Python libraries are included:
* pandas
* numpy
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
databox-online Azure Stack Edge Pro 2 Deploy Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-activate.md
Before you configure and set up your Azure Stack Edge Pro 2, make sure that:
![Screenshot of local web UI with "Activate" highlighted in the Activation tile.](./media/azure-stack-edge-pro-2-deploy-activate/activate-1.png)
-3. In the **Activate** pane, enter the **Activation key** that you got in [Get the activation key for Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-prep.md#get-the-activation-key).
+3. In the **Activate** pane, enter the **Activation key** from [Get the activation key for Azure Stack Edge](azure-stack-edge-gpu-deploy-prep.md#get-the-activation-key).
4. Select **Activate**.
In this tutorial, you learned about:
To learn how to deploy workloads on your Azure Stack Edge device, see:

> [!div class="nextstepaction"]
-> [Configure compute to deploy IoT Edge and Kubernetes workloads on Azure Stack Edge](./azure-stack-edge-pro-2-deploy-configure-compute.md)
+> [Configure compute to deploy IoT Edge and Kubernetes workloads on Azure Stack Edge Pro 2](./azure-stack-edge-pro-2-deploy-configure-compute.md)
databox-online Azure Stack Edge Pro 2 Deploy Configure Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates.md
Use these steps to regenerate and download the Azure Stack Edge Pro 2 device cer
- Make sure that the status of all the certificates is shown as **Valid**.
- ![Screenshot of newly generated certificates on the Certificates page of an Azure Stack Edge device. Certificates with Valid state are highlighted.](./media/azure-stack-edge-gpu-deploy-configure-certificates/generate-certificate-6.png)
+ ![Screenshot of newly generated certificates on the Certificates page of an Azure Stack Edge device. Certificates with Valid state are highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/generate-certificate-6.png)
- You can select a specific certificate name, and view the certificate details.
Follow these steps to upload your own certificates including the signing chain.
![Screenshot of the Add Certificate pane for the Local Web UI certificate for an Azure Stack Edge device. The certificate type and certificate entries highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/add-certificate-5.png)
- At any time, you can select a certificate and view the details to ensure that these match with the certificate that you uploaded.
+ The certificate page should update to reflect the newly added certificates. At any time, you can select a certificate and view the details to ensure that these match with the certificate that you uploaded.
- ![Screenshot of the Add Certificate pane for a node certificate for an Azure Stack Edge device. The certificate type and certificate entries highlighted.](./media/azure-stack-edge-gpu-deploy-configure-certificates/add-certificate-6.png)
+ ![Screenshot of the Add Certificate pane for a node certificate for an Azure Stack Edge device. The certificate type and certificate entries highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/add-certificate-6.png)
- The certificate page should update to reflect the newly added certificates.
-
- ![Screenshot of the Certificates page in the local web UI for an Azure Stack Edge device. A newly added set of certificates is highlighted.](./media/azure-stack-edge-gpu-deploy-configure-certificates/add-certificate-7.png)
> [!NOTE]
> Except for the Azure public cloud, signing chain certificates must be brought in before activation for all cloud configurations (Azure Government or Azure Stack).
In this tutorial, you learn about:
> * Configure certificates for the physical device
> * Configure encryption-at-rest
-To learn how to activate your Azure Stack Edge Pro GPU device, see:
+To learn how to activate your Azure Stack Edge Pro 2 device, see:
> [!div class="nextstepaction"]
-> [Activate Azure Stack Edge Pro GPU device](./azure-stack-edge-gpu-deploy-activate.md)
+> [Activate Azure Stack Edge Pro 2 device](./azure-stack-edge-pro-2-deploy-activate.md)
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
na Previously updated : 02/15/2022 Last updated : 03/08/2022
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022

# Azure Policy built-in definitions for Microsoft Defender for Cloud
defender-for-cloud Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Microsoft Defender for Cloud description: Sample Azure Resource Graph queries for Microsoft Defender for Cloud showing use of resource types and tables to access Microsoft Defender for Cloud related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
and [Resource Graph samples by Table](../governance/resource-graph/samples/sampl
## Sample queries ## Next steps
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/automate-add-lab-user.md
The object that is being granted access can be specified by the `objectId`, `sig
The following Azure CLI example shows you how to add a person to the DevTest Labs User role for the specified Lab. ```azurecli
-az role assignment create --roleName "DevTest Labs User" --signInName <email@company.com> --resource-name "<Lab Name>" --resource-type "Microsoft.DevTestLab/labs" --resource-group "<Resource Group Name>"
+az role assignment create --assignee "<email@company.com>" --role "DevTest Labs User" --scope "/subscriptions/<SubscriptionID>/resourceGroups/<Resource Group Name>/providers/Microsoft.DevTestLab/labs/<Lab Name>"
``` ## Next steps
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-security.md
# Secure Azure Digital Twins
-This article explains Azure Digital Twins security best practices. It covers roles and permissions, managed identity, private network access with Azure Private Link (preview), service tags, encryption of data at rest, and Cross-Origin Resource Sharing (CORS).
+This article explains Azure Digital Twins security best practices. It covers roles and permissions, managed identity, private network access with Azure Private Link, service tags, encryption of data at rest, and Cross-Origin Resource Sharing (CORS).
For security, Azure Digital Twins enables precise access control over specific data, resources, and actions in your deployment. It does so through a granular role and permission management strategy called [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md).
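For instance, a role can be granted at the scope of a single instance with the Azure CLI. This is a sketch only; the subscription, resource group, instance name, and user are placeholders, while **Azure Digital Twins Data Owner** is one of the service's built-in data-plane roles:

```azurecli
az role assignment create --assignee "<user@company.com>" --role "Azure Digital Twins Data Owner" --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<instance-name>"
```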
You can use a system-assigned managed identity for your Azure Digital Instance t
For instructions on how to enable a system-managed identity for Azure Digital Twins and use it to route events, see [Route events with a managed identity](how-to-route-with-managed-identity.md).
-## Private network access with Azure Private Link (preview)
+## Private network access with Azure Private Link
[Azure Private Link](../private-link/private-link-overview.md) is a service that enables you to access Azure resources (like [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Storage](../storage/common/storage-introduction.md), and [Azure Cosmos DB](../cosmos-db/introduction.md)) and Azure-hosted customer and partner services over a private endpoint in your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
The private endpoint uses an IP address from your Azure VNet address space. Netw
Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure, as well as avoid data exfiltration from your VNet.
-For instructions on how to set up Private Link for Azure Digital Twins, see [Enable private access with Private Link (preview)](./how-to-enable-private-link.md).
+For instructions on how to set up Private Link for Azure Digital Twins, see [Enable private access with Private Link](./how-to-enable-private-link.md).
### Design considerations
digital-twins How To Enable Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-enable-private-link.md
# Mandatory fields. Title: Enable private access with Private Link (preview)
+ Title: Enable private access with Private Link
description: Learn how to enable private access for Azure Digital Twins solutions with Private Link.
ms.devlang: azurecli
#
-# Enable private access with Private Link (preview)
+# Enable private access with Private Link
-This article describes the different ways to [enable Private Link with a private endpoint for an Azure Digital Twins instance](concepts-security.md#private-network-access-with-azure-private-link-preview) (currently in preview). Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
+This article describes the different ways to [enable Private Link with a private endpoint for an Azure Digital Twins instance](concepts-security.md#private-network-access-with-azure-private-link). Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
Here are the steps that are covered in this article: 1. Turn on Private Link and configure a private endpoint for an Azure Digital Twins instance.
In this section, you'll enable Private Link with a private endpoint for an Azure
1. First, navigate to the [Azure portal](https://portal.azure.com) in a browser. Bring up your Azure Digital Twins instance by searching for its name in the portal search bar.
-1. Select **Networking (preview)** in the left-hand menu.
+1. Select **Networking** in the left-hand menu.
1. Switch to the **Private endpoint connections** tab.
In this section, you'll see how to view, edit, and delete a private endpoint aft
# [Portal](#tab/portal)
-Once a private endpoint has been created for your Azure Digital Twins instance, you can view it in the **Networking (preview)** tab for your Azure Digital Twins instance. This page will show all the private endpoint connections associated with the instance.
+Once a private endpoint has been created for your Azure Digital Twins instance, you can view it in the **Networking** tab for your Azure Digital Twins instance. This page will show all the private endpoint connections associated with the instance.
:::image type="content" source="media/how-to-enable-private-link/view-endpoint-digital-twins.png" alt-text="Screenshot of the Azure portal showing the Networking page for an existing Azure Digital Twins instance with one private endpoint." lightbox="media/how-to-enable-private-link/view-endpoint-digital-twins.png":::
You can update the value of the network flag using the [Azure portal](https://po
To disable or enable public network access in the [Azure portal](https://portal.azure.com), open the portal and navigate to your Azure Digital Twins instance.
-1. Select **Networking (preview)** in the left-hand menu.
+1. Select **Networking** in the left-hand menu.
1. In the **Public access** tab, set **Allow public network access to** either **Disabled** or **All networks**.
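If you manage the instance from the Azure CLI rather than the portal, the `az dt create` command in the `azure-iot` extension exposes an equivalent setting. This is a sketch; the instance and resource group names are placeholders:

```azurecli
# Requires the azure-iot extension: az extension add --name azure-iot
az dt create --dt-name <instance-name> --resource-group <resource-group> --public-network-access Disabled
```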
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
You can use the **Query Explorer** panel to run [queries](concepts-query-languag
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel.png" alt-text="Screenshot of Azure Digital Twins Explorer. The Query Explorer panel is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/query-explorer-panel.png":::
-Enter the query you want to run and select the **Run Query** button. Doing so will load the query results in the **Twin Graph** panel.
+Enter the query you want to run. To enter a query across multiple lines, use SHIFT + ENTER to add a new line in the query box.
+
+Select the **Run Query** button to display query results in the **Twin Graph** panel.
>[!NOTE] > Query results containing relationships can only be rendered in the **Twin Graph** panel if the results include at least one twin as well. While queries that return only relationships are possible in Azure Digital Twins, you can only view them in Azure Digital Twins Explorer by using the [Output panel](#accessibility-and-advanced-settings).
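For example, a multi-line query that returns a twin together with its outgoing relationships (and therefore renders in the **Twin Graph** panel) might look like the following. The `contains` relationship name and the `FactoryA` twin ID are placeholders from a hypothetical model:

```sql
SELECT T, R
FROM DIGITALTWINS T
JOIN CT RELATED T.contains R
WHERE T.$dtId = 'FactoryA'
```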
Once the two twins are simultaneously selected, right-click the target twin to b
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-add-relationship.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Graph panel. The FactoryA and Consumer twins are selected, and a menu shows the option to Add relationships." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-add-relationship.png":::
-Doing so will bring up the **Create Relationship** dialog, which shows the source twin and target twin of the relationship, followed by a **Relationship** dropdown menu that contains the types of relationship that the source twin can have (defined in its DTDL model). Select an option for the relationship type, and **Save** the new relationship.
+Doing so will bring up the **Create Relationship** dialog, populated with the source twin and target twin of the relationship (you can also use the **Swap Relationship** icon to switch them). There is a **Relationship** dropdown menu that contains the types of relationship that the source twin can have, according to its DTDL model. Select an option for the relationship type, and **Save** the new relationship.
### Edit twin and relationship properties
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Azure Database Migration Service prerequisites that are common across all suppor
* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from an SMB network share).
- - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription. > [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from an SMB network share).
- - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription (required if creating a new DMS service). > [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
To complete this tutorial, you need to:
> [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to set up a self-hosted integration runtime in the next step of the wizard. A self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to the Azure storage account.<br/> If your database backups are already in an Azure storage blob container, you do not need to set up a self-hosted integration runtime.
-1. If you picked the first option for network share, provide details of your source SQL Server, source backup location, target database name and Azure storage account for the backup files to be uploaded to.
+* For backups located on a network share, provide the following details about your source SQL Server, the source backup location, the target database name, and the Azure storage account to which the backup files will be uploaded.
|Field |Description | ||-|
To complete this tutorial, you need to:
|**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. | |**Storage account details** |The resource group and storage account where backup files will be uploaded to. You do not need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process. |
-1. If you picked the second option for backups stored in an Azure Blob Container specify the **Target database name**, **Resource group**, **Azure storage account**, **Blob container** and **Last backup file** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You do not need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+* For backups stored in an Azure storage blob container, specify the **Target database name**, **Resource group**, **Azure storage account**, **Blob container**, and **Last backup file** from the corresponding drop-down lists.
+
+ |Field |Description |
+ ||-|
+ |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+ |**Storage account details** |The resource group, storage account and container where backup files are located. |
+ |**Last Backup File** |The file name of the last backup of the database that you are migrating. |
+ > [!IMPORTANT] > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then the source won't be able to access the file share using FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
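The loopback-check workaround referenced in the note above is commonly applied as a registry change. The PowerShell sketch below is a paraphrase of the linked KB article, not part of this tutorial; review the KB first, since disabling the loopback check relaxes an NTLM authentication safeguard (the KB's narrower `BackConnectionHostNames` method is usually preferable):

```powershell
# Sketch: disable the loopback check per the linked KB article, then restart the machine.
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
  -Name "DisableLoopbackCheck" -Value 1 -PropertyType DWord -Force
```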
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
To complete this tutorial, you need to:
* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from an SMB network share).
- - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription. > [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
To complete this tutorial, you need to:
3. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. > [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to set up a self-hosted integration runtime in the next step of the wizard. A self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to the Azure storage account.<br/> If your database backups are already in an Azure storage blob container, you do not need to set up a self-hosted integration runtime.
-4. After selecting the backup location, provide details of your source SQL Server and source backup location.
+* For backups located on a network share, provide the following details about your source SQL Server, the source backup location, the target database name, and the Azure storage account to which the backup files will be uploaded.
|Field |Description | ||-|
To complete this tutorial, you need to:
|**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. | |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
-5. Specify the **Azure storage account** by selecting the **Subscription**, **Location**, and **Resource Group** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You don't need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+* For backups stored in an Azure storage blob container, specify the **Target database name**, **Resource group**, **Azure storage account**, **Blob container**, and **Last backup file** from the corresponding drop-down lists.
+
+ |Field |Description |
+ ||-|
+ |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+ |**Storage account details** |The resource group, storage account and container where backup files are located. |
+ |**Last Backup File** |The file name of the last backup of the database that you are migrating. |
+
> [!IMPORTANT] > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then the source won't be able to access the file share using FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
To complete this tutorial, you need to:
* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from an SMB network share).
- - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription. > [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
To complete this tutorial, you need to:
3. In step 5, select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. > [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to set up a self-hosted integration runtime in the next step of the wizard. A self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to the Azure storage account.<br/> If your database backups are already in an Azure storage blob container, you do not need to set up a self-hosted integration runtime.
-4. After selecting the backup location, provide details of your source SQL Server and source backup location.
+
+* For backups located on a network share, provide the following details about your source SQL Server, the source backup location, the target database name, and the Azure storage account to which the backup files will be uploaded.
|Field |Description | ||-|
To complete this tutorial, you need to:
|**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. | |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
-5. Specify the **Azure storage account** by selecting the **Subscription**, **Location**, and **Resource Group** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You don't need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+* For backups stored in an Azure storage blob container, specify the **Target database name**, **Resource group**, **Azure storage account**, and **Blob container** from the corresponding drop-down lists.
+
+ |Field |Description |
+ ||-|
+ |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+ |**Storage account details** |The resource group, storage account and container where backup files are located. |
6. Select **Next** to continue. > [!IMPORTANT] > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then the source won't be able to access the file share using FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md
In comparison to the dedicated offering, the premium tier provides the following
- Scale far more elastically and quickly - PUs can be dynamically adjusted
-Therefore, the premium tier is often a more cost effective option for mid-range (<120MB/sec) throughput requirements, especially with changing loads throughout the day or week, when compared to the dedicated tier.
+Therefore, the premium tier is often a more cost-effective option for event streaming workloads up to 160 MB/sec (per namespace), especially with changing loads throughout the day or week, when compared to the dedicated tier.
For the extra robustness gained by availability-zone support, the minimal deployment scale for the dedicated tier is 8 capacity units (CU), but you'll have availability zone support in the premium tier from the first PU in all availability zone regions.
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-scalability.md
For more information about the auto-inflate feature, see [Automatically scale th
[Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit* (PU). You can purchase 1, 2, 4, 8, or 16 processing units for each Event Hubs Premium namespace.
-How much you can ingest and stream with a processing unit depends on various factors such as your producers, consumers, the rate at which you're ingesting and processing, and much more. One processing unit can approximately offer core capacity of ~5-10 MB/s ingress and 10-20 MB/s egress, given that we have sufficient partitions so that storage is not a throttling factor.
+How much you can ingest and stream with a processing unit depends on various factors such as your producers, consumers, the rate at which you're ingesting and processing, and much more.
+
+For example, an Event Hubs Premium namespace with 1 PU and 1 event hub (100 partitions) can offer approximate core capacity of ~5-10 MB/s ingress and 10-20 MB/s egress for both AMQP and Kafka workloads.
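The figures above can be turned into a rough capacity sketch. This is an illustration only, built on the lower-bound ~5 MB/s-per-PU ingress figure quoted above; the helper name and the single-factor model are our own simplification, not an official sizing formula:

```python
import math

# Purchasable PU sizes for an Event Hubs Premium namespace.
PU_SIZES = (1, 2, 4, 8, 16)

def estimate_processing_units(target_ingress_mb_s, per_pu_ingress_mb_s=5):
    """Conservative PU estimate using the lower-bound ~5 MB/s ingress per PU."""
    needed = math.ceil(target_ingress_mb_s / per_pu_ingress_mb_s)
    # Round up to the next purchasable namespace size.
    for size in PU_SIZES:
        if size >= needed:
            return size
    raise ValueError("workload exceeds the largest premium namespace (16 PUs)")

print(estimate_processing_units(12))  # 12 MB/s at ~5 MB/s per PU -> 3 PUs -> next size up is 4
```

Real sizing should still be validated against your actual producers, consumers, and partition layout, since those factors dominate in practice.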
To learn about configuring PUs for a premium tier namespace, see [Configure processing units](configure-processing-units-premium-namespace.md).
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
genomics Quickstart Input Bam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-input-bam.md
output_storage_account_container: outputs
Submit the `config.txt` file with this invocation: `msgen submit -f config.txt` ## Next steps
-In this article, you uploaded a BAM file into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. For additional information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
+In this article, you uploaded a BAM file into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` Python client. For additional information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
genomics Quickstart Input Multiple https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-input-multiple.md
output_storage_account_container: outputs
Submit the `config.txt` file with this invocation: `msgen submit -f config.txt` ## Next steps
-In this article, you uploaded multiple BAM files or paired FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. For more information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see the [FAQ](frequently-asked-questions-genomics.yml).
+In this article, you uploaded multiple BAM files or paired FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` Python client. For more information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see the [FAQ](frequently-asked-questions-genomics.yml).
genomics Quickstart Input Pair Fastq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-input-pair-fastq.md
output_storage_account_container: outputs
Submit the `config.txt` file with this invocation: `msgen submit -f config.txt` ## Next steps
-In this article, you uploaded a pair of FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. To learn more about workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
+In this article, you uploaded a pair of FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` Python client. To learn more about workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/cis-azure-1-1-0.md
- Title: CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample
-description: Overview of the CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/08/2021--
-# CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample
-
-The CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample provides governance guardrails
-using [Azure Policy](../../policy/overview.md) that help you assess specific CIS Microsoft Azure
-Foundations Benchmark recommendations. This blueprint helps customers deploy a core set of policies
-for any Azure-deployed architecture that must implement CIS Microsoft Azure Foundations Benchmark
-v1.1.0 recommendations.
-
-## Recommendation mapping
-
-The [Azure Policy recommendation mapping](../../policy/samples/cis-azure-1-1-0.md) provides details
-on policy definitions included within this blueprint and how these policy definitions map to the
-**recommendations** in CIS Microsoft Azure Foundations Benchmark v1.1.0. When assigned to an
-architecture, resources are evaluated by Azure Policy for non-compliance with assigned policy
-definitions. For more information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample,
-the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **CIS Microsoft Azure Foundations Benchmark v1.1.0** blueprint sample under _Other
- Samples_ and select **Use this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the CIS Microsoft Azure Foundations
- Benchmark blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
-   **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the CIS
- Microsoft Azure Foundations Benchmark blueprint sample." Then select **Publish** at the bottom of
- the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
-     The parameters defined in this section apply to the artifact under which they're defined. These
-     parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since
-     they're defined during the assignment of the blueprint. For a full list of artifact parameters
-     and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|Audit CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations and deploy specific supporting VM Extensions|Policy assignment|List of regions where Network Watcher should be enabled|A semicolon-separated list of regions. To see a complete list of regions, use the Azure PowerShell command Get-AzLocation. Ex: eastus; eastus2|
-|Audit CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations and deploy specific supporting VM Extensions|Policy assignment|List of virtual machine extensions that are approved for use|A semicolon-separated list of extensions. To see a complete list of virtual machine extensions, use the Azure PowerShell command Get-AzVMExtensionImage. Ex: AzureDiskEncryption; IaaSAntimalware|
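
Both parameters above expect a single semicolon-separated string rather than an array. As a minimal sketch (the `to_param_list` helper is hypothetical, not part of the blueprint or any Azure SDK), this is how a list of region or extension names can be joined into the expected format before pasting it into the assignment form:

```python
def to_param_list(values):
    """Join names into the semicolon-separated string the blueprint
    artifact parameters expect, e.g. 'eastus; eastus2'."""
    return "; ".join(values)

# Example values taken from the parameter descriptions above.
regions = ["eastus", "eastus2"]
extensions = ["AzureDiskEncryption", "IaaSAntimalware"]

print(to_param_list(regions))      # eastus; eastus2
print(to_param_list(extensions))   # AzureDiskEncryption; IaaSAntimalware
```

The separator shown in the table examples is a semicolon followed by a space; if the portal rejects the value, try a bare semicolon instead.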
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/cis-azure-1-3-0.md
- Title: CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample
-description: Overview of the CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/08/2021
-# CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample
-
-The CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample provides governance guardrails
-using [Azure Policy](../../policy/overview.md) that help you assess specific CIS Microsoft Azure
-Foundations Benchmark v1.3.0 recommendations. This blueprint helps customers deploy a core set of
-policies for any Azure-deployed architecture that must implement CIS Microsoft Azure Foundations
-Benchmark v1.3.0 recommendations.
-
-## Recommendation mapping
-
-The [Azure Policy recommendation mapping](../../policy/samples/cis-azure-1-3-0.md) provides details
-on policy definitions included within this blueprint and how these policy definitions map to the
-**recommendations** in CIS Microsoft Azure Foundations Benchmark v1.3.0. When assigned to an
-architecture, resources are evaluated by Azure Policy for non-compliance with assigned policy
-definitions. For more information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample,
-the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **CIS Microsoft Azure Foundations Benchmark v1.3.0** blueprint sample under _Other
- Samples_ and select **Use this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the CIS Microsoft Azure Foundations
- Benchmark blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with CIS Microsoft Azure Foundations Benchmark v1.3.0 recommendations.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
-   **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the CIS
- Microsoft Azure Foundations Benchmark blueprint sample." Then select **Publish** at the bottom of
- the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
-     The parameters defined in this section apply to the artifact under which they're defined. These
-     parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since
-     they're defined during the assignment of the blueprint. For a full list of artifact parameters
-     and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|List of virtual machine extensions that are approved for use|A semicolon-separated list of virtual machine extensions; to see a complete list of extensions, use the Azure PowerShell command Get-AzVMExtensionImage|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SQL managed instances should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Azure Data Lake Store should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Disk encryption should be applied on virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Key vault should have purge protection enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SQL servers should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Managed identity should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Key Vault should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Custom subscription owner roles should not exist|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Keys should have expiration dates set|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Transparent Data Encryption on SQL databases should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An Azure Active Directory administrator should be provisioned for SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for App Service should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage accounts should restrict network access using virtual network rules|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Managed identity should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SSH access from the Internet should be blocked|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Unattached disks should be encrypted|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Storage should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage accounts should restrict network access|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Logic Apps should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in IoT Hub should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: FTPS only should be required in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Security operations (Microsoft.Security/securitySolutions/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Security operations (Microsoft.Security/securitySolutions/write)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Secure transfer to storage accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Batch accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Auto provisioning of the Log Analytics agent should be enabled on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: FTPS should be required in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for servers should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Subscriptions should have a contact email address for security issues|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage account public access should be disallowed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Kubernetes should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Connection throttling should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure WEB app has 'Client Certificates (Incoming client certificates)' set to 'On'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: External accounts with write permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: External accounts with read permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for SQL servers on machines should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Email notification for high severity alerts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage account should use customer-managed key for encryption|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the WEB app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Virtual Machine Scale Sets should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Azure SQL Database servers should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Event Hub should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: System updates should be installed on your machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SQL servers should be configured with 90 days auditing retention or higher.|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Latest TLS version should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: MFA should be enabled accounts with write permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Secrets should have expiration dates set|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: FTPS only should be required in your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Web Application should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Auditing on SQL server should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: MFA should be enabled on accounts with owner permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Advanced data security should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Advanced data security should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Role-Based Access Control (RBAC) should be used on Kubernetes Services|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Monitor missing Endpoint Protection in Azure Security Center|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Search services should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in App Services should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Only approved VM extensions should be installed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for container registries should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Managed identity should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Data Lake Analytics should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage accounts should allow access from trusted Microsoft services|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Key Vault should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Enforce SSL connection should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: MFA should be enabled on accounts with read permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: RDP access from the Internet should be blocked|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Enforce SSL connection should be enabled for MySQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure Function app has 'Client Certificates (Incoming client certificates)' set to 'On'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Log checkpoints should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Log connections should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Disconnections should be logged for PostgreSQL database servers.|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Latest TLS version should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: External accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Service Bus should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Azure Stream Analytics should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Latest TLS version should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage account containing the container with activity logs must be encrypted with BYOK|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Include AKS clusters when auditing if virtual machine scale set diagnostic logs are enabled||
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Latest Java version for App Services|Latest supported Java version for App Services|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Latest Python version for Linux for App Services|Latest supported Python version for App Services|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|List of regions where Network Watcher should be enabled|To see a complete list of regions, run the PowerShell command Get-AzLocation|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Latest PHP version for App Services|Latest supported PHP version for App Services|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Required retention period (days) for resource logs|For more information about resource logs, visit [https://aka.ms/resourcelogs](../../../azure-monitor/essentials/resource-logs.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Name of the resource group for Network Watcher|Name of the resource group where Network Watchers are located|
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Required auditing setting for SQL servers||
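The table above notes that the region list for the Network Watcher parameter can be produced with the PowerShell command `Get-AzLocation`. As a sketch, the same list can be produced with either tool (assumes an installed, authenticated Azure CLI or Az PowerShell session):

```shell
# Sketch only: lists region names usable for the "List of regions where
# Network Watcher should be enabled" parameter. Requires the Azure CLI (az)
# and an authenticated session.
az account list-locations --query "[].name" --output tsv

# Equivalent Azure PowerShell, as referenced in the table above:
# Get-AzLocation | Select-Object -ExpandProperty Location
```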
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/cmmc-l3.md
- Title: CMMC Level 3 blueprint sample
-description: Overview of the CMMC Level 3 blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/08/2021--
-# CMMC Level 3 blueprint sample
-
-The CMMC Level 3 blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific
-[Cybersecurity Maturity Model Certification (CMMC) framework](https://www.acq.osd.mil/cmmc/index.html)
-controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed
-architecture that must implement controls for CMMC Level 3.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/cmmc-l3.md) provides details on policy
-definitions included within this blueprint and how these policy definitions map to the **controls**
-in the CMMC framework. When assigned to an architecture, resources are evaluated by Azure Policy for
-non-compliance with assigned policy definitions. For more information, see
-[Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints CMMC Level 3 blueprint sample,
-the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
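The portal steps that follow can also be scripted. A minimal sketch using the Azure CLI `blueprint` extension (the blueprint name, version, and change notes below are placeholders, and the commands assume an authenticated session):

```shell
# Sketch only: assumes the Azure CLI with the blueprint extension installed
# and an authenticated session. The blueprint name and version are placeholders
# for a copy of the sample already saved as a draft definition.
az extension add --name blueprint

# Publish the draft blueprint definition so it can be assigned
az blueprint publish \
  --blueprint-name "cmmc-l3-copy" \
  --version "1.0" \
  --change-notes "First version published from the CMMC Level 3 blueprint sample."
```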
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **CMMC Level 3** blueprint sample under _Other
- Samples_ and select **Use this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the CMMC Level 3 blueprint
- sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with CMMC Level 3 controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
-   **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the CMMC Level
- 3 blueprint sample." Then select **Publish** at the bottom of the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since
-   they're defined during the assignment of the blueprint. For a full list of artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
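As described above, selecting more than one subscription creates one assignment per subscription. Scripted, that loop might look like the following sketch (subscription IDs, names, and the management group path are placeholders; the `--blueprint-version` value is the resource ID of the published version, and the commands assume the Azure CLI blueprint extension):

```shell
# Sketch only: one blueprint assignment per target subscription, mirroring the
# portal behavior described above. All IDs and names are placeholders.
for sub in "11111111-1111-1111-1111-111111111111" "22222222-2222-2222-2222-222222222222"; do
  az blueprint assignment create \
    --subscription "$sub" \
    --name "assign-cmmc-l3" \
    --location "eastus" \
    --identity-type SystemAssigned \
    --blueprint-version "/providers/Microsoft.Management/managementGroups/myMG/providers/Microsoft.Blueprint/blueprints/cmmc-l3-copy/versions/1.0"
done
```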
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|CMMC Level 3|Policy Assignment|Include Arc-connected servers when evaluating guest configuration policies|By selecting 'true', you agree to be charged monthly per Arc connected machine; for more information, visit https://aka.ms/policy-pricing|
-|CMMC Level 3|Policy Assignment|List of users that must be excluded from Windows VM Administrators group|A semicolon-separated list of users that should be excluded from the Administrators local group; Ex: Administrator; myUser1; myUser2|
-|CMMC Level 3|Policy Assignment|List of users that must be included in Windows VM Administrators group|A semicolon-separated list of users that should be included in the Administrators local group; Ex: Administrator; myUser1; myUser2|
-|CMMC Level 3|Policy Assignment|Log Analytics workspace ID for VM agent reporting|ID (GUID) of the Log Analytics workspace where VM agents should report|
-|CMMC Level 3|Policy Assignment|Allowed elliptic curve names|The list of allowed curve names for elliptic curve cryptography certificates.|
-|CMMC Level 3|Policy Assignment|Allowed key types|The list of allowed key types|
-|CMMC Level 3|Policy Assignment|Allow host network usage for Kubernetes cluster pods|Set this value to true if the pod is allowed to use the host network; otherwise, false.|
-|CMMC Level 3|Policy Assignment|Audit Authentication Policy Change|Specifies whether audit events are generated when changes are made to authentication policy. This setting is useful for tracking changes in domain-level and forest-level trust and privileges that are granted to user accounts or groups.|
-|CMMC Level 3|Policy Assignment|Audit Authorization Policy Change|Specifies whether audit events are generated for assignment and removal of user rights in user right policies, changes in security token object permission, resource attributes changes and Central Access Policy changes for file system objects.|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Backup should be enabled for Virtual Machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Cognitive Services accounts should restrict network access|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: SQL managed instances should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure API for FHIR should use a customer-managed key (CMK) to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should be enabled for Azure Front Door Service|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for Cognitive Services accounts|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: CORS should not allow every resource to access your Function Apps|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Adaptive network hardening recommendations should be applied on internet facing virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: There should be more than one owner assigned to your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Disk encryption should be applied on virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Email notification to subscription owner for high severity alerts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Key vault should have purge protection enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: SQL servers should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Remote debugging should be turned off for Function Apps|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for Key Vault should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Geo-redundant backup should be enabled for Azure Database for MariaDB|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: CORS should not allow every domain to access your API for FHIR|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'Security Options - Network Security'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Allowlist rules in your adaptive application control policy should be updated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should use the specified mode for Application Gateway|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Keys should have expiration dates set|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Transparent Data Encryption on SQL databases should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Key vault should have soft delete enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An Azure Active Directory administrator should be provisioned for SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Only secure connections to your Azure Cache for Redis should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Infrastructure encryption should be enabled for Azure Database for PostgreSQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Endpoint protection solution should be installed on virtual machine scale sets|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for App Service should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'System Audit Policies - Policy Change'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Cognitive Services accounts should enable data encryption|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: SSH access from the Internet should be blocked|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Unattached disks should be encrypted|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for Storage should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Storage accounts should restrict network access|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: CORS should not allow every resource to access your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Deploy Advanced Threat Protection on Storage Accounts|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Automation account variables should be encrypted|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Diagnostic logs in IoT Hub should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Infrastructure encryption should be enabled for Azure Database for MySQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Security operations (Microsoft.Security/securitySolutions/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your virtual machine scale sets should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'Security Options - Network Access'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Secure transfer to storage accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Monitor should collect activity logs from all regions|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should use the specified mode for Azure Front Door Service|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Storage accounts should have infrastructure encryption|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Adaptive application controls for defining safe applications should be enabled on your machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Geo-redundant backup should be enabled for Azure Database for PostgreSQL|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'Security Options - User Account Control'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for servers should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: A maximum of 3 owners should be designated for your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Subscriptions should have a contact email address for security issues|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Storage account public access should be disallowed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: A vulnerability assessment solution should be enabled on your virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for Kubernetes should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Firewall should be enabled on Key Vault|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should be enabled for Application Gateway|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: CORS should not allow every resource to access your Web Applications|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Windows machines that allow re-use of the previous 24 passwords|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Container registries should be encrypted with a customer-managed key (CMK)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: External accounts with write permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for PostgreSQL flexible servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerabilities in Azure Container Registry images should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: External accounts with read permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for SQL servers on machines should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Cognitive Services accounts should enable data encryption with customer-managed key|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Deprecated accounts should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Function App should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Email notification for high severity alerts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Storage account should use customer-managed key for encryption|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the WEB app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Keys should be the specified cryptographic type RSA or EC|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure subscriptions should have a log profile for Activity Log|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for Azure SQL Database servers should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Data Explorer encryption at rest should use a customer-managed key|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Keys using RSA cryptography should have a specified minimum key size|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Geo-redundant backup should be enabled for Azure Database for MySQL|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Kubernetes cluster pods should only use approved host network and port range|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: System updates should be installed on your machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'System Audit Policies - Privilege Use'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Stream Analytics jobs should use customer-managed keys to encrypt data|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Web app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Latest TLS version should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: MFA should be enabled accounts with write permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Microsoft IaaSAntimalware extension should be deployed on Windows servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: All network ports should be restricted on network security groups associated to your virtual machine|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Security Center standard pricing tier should be selected|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Windows machines that do not restrict the minimum password length to 14 characters|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit usage of custom RBAC rules|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Web Application should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Auditing on SQL server should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: The Log Analytics agent should be installed on virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: MFA should be enabled on accounts with owner permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Advanced data security should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Advanced data security should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Role-Based Access Control (RBAC) should be used on Kubernetes Services|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Virtual machines should have the Guest Configuration extension|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Monitor missing Endpoint Protection in Azure Security Center|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Activity log should be retained for at least one year|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Management ports of virtual machines should be protected with just-in-time network access control|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for PostgreSQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Deploy Advanced Threat Protection for Cosmos DB Accounts|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Diagnostic logs in App Services should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: API App should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.ClassicNetwork/networkSecurityGroups/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.ClassicNetwork/networkSecurityGroups/securityRules/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Non-internet-facing virtual machines should be protected with network security groups|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Windows machines that do not have the password complexity setting enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Defender for container registries should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Data Box jobs should enable double encryption for data at rest on the device|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: System updates on virtual machine scale sets should be installed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Microsoft Antimalware for Azure should be configured to automatically update protection signatures|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for MySQL flexible servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Storage accounts should allow access from trusted Microsoft services|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Remote debugging should be turned off for Web Applications|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Certificates using RSA cryptography should have the specified minimum key size|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Container registries should not allow unrestricted network access|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Enforce SSL connection should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Guest Configuration extension should be deployed to Azure virtual machines with system assigned managed identity|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Long-term geo-redundant backup should be enabled for Azure SQL Databases|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for MySQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Windows machines that do not store passwords using reversible encryption|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'User Rights Assignment'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your machines should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: MFA should be enabled on accounts with read permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: RDP access from the Internet should be blocked|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Linux machines that do not have the passwd file permissions set to 0644|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Subnets should be associated with a Network Security Group|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Enforce SSL connection should be enabled for MySQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerabilities in container security configurations should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Remote debugging should be turned off for API Apps|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Linux machines that allow remote connections from accounts without passwords|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Deprecated accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Double encryption should be enabled on Azure Data Explorer|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: The Log Analytics agent should be installed on Virtual Machine Scale Sets|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Latest TLS version should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Disk encryption should be enabled on Azure Data Explorer|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Internet-facing virtual machines should be protected with network security groups|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Audit Linux machines that have accounts without passwords|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Azure Synapse workspaces should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: External accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Latest TLS version should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: All Internet traffic should be routed via your deployed Azure Firewall|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Linux machines should meet requirements for the Azure security baseline|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Public network access should be disabled for MariaDB servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Vulnerabilities on your SQL databases should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Effect for policy: Keys using elliptic curve cryptography should have the specified curve names|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|CMMC Level 3|Policy Assignment|Namespaces excluded from evaluation of policy: Kubernetes cluster pods should only use approved host network and port range|List of Kubernetes namespaces to exclude from policy evaluation.|
-|CMMC Level 3|Policy Assignment|Latest Java version for App Services|Latest supported Java version for App Services|
-|CMMC Level 3|Policy Assignment|Latest Python version for Linux for App Services|Latest supported Python version for App Services|
-|CMMC Level 3|Policy Assignment|Optional: List of VM images that have supported Linux OS to add to scope when auditing Log Analytics agent deployment|Example value: `/subscriptions/<subscriptionId>/resourceGroups/YourResourceGroup/providers/Microsoft.Compute/images/ContosoStdImage`|
-|CMMC Level 3|Policy Assignment|Optional: List of VM images that have supported Windows OS to add to scope when auditing Log Analytics agent deployment|Example value: `/subscriptions/<subscriptionId>/resourceGroups/YourResourceGroup/providers/Microsoft.Compute/images/ContosoStdImage`|
-|CMMC Level 3|Policy Assignment|List of regions where Network Watcher should be enabled|Audit if Network Watcher is not enabled for region(s).|
-|CMMC Level 3|Policy Assignment|List of resource types that should have diagnostic logs enabled||
-|CMMC Level 3|Policy Assignment|Maximum value in the allowable host port range that pods can use in the host network namespace|The maximum value in the allowable host port range that pods can use in the host network namespace.|
-|CMMC Level 3|Policy Assignment|Minimum RSA key size for keys|The minimum key size for RSA keys.|
-|CMMC Level 3|Policy Assignment|Minimum RSA key size for certificates|The minimum key size for RSA certificates.|
-|CMMC Level 3|Policy Assignment|Minimum TLS version for Windows web servers|Windows web servers with lower TLS versions will be assessed as non-compliant|
-|CMMC Level 3|Policy Assignment|Minimum value in the allowable host port range that pods can use in the host network namespace|The minimum value in the allowable host port range that pods can use in the host network namespace.|
-|CMMC Level 3|Policy Assignment|Mode Requirement|Mode required for all WAF policies|
-|CMMC Level 3|Policy Assignment|Mode Requirement|Mode required for all WAF policies|
-|CMMC Level 3|Policy Assignment|Allowed host paths for pod hostPath volumes to use|The host paths allowed for pod hostPath volumes to use. Provide an empty paths list to block all host paths.|
-|CMMC Level 3|Policy Assignment|Network access: Remotely accessible registry paths|Specifies which registry paths will be accessible over the network, regardless of the users or groups listed in the access control list (ACL) of the `winreg` registry key.|
-|CMMC Level 3|Policy Assignment|Network access: Remotely accessible registry paths and sub-paths|Specifies which registry paths and sub-paths will be accessible over the network, regardless of the users or groups listed in the access control list (ACL) of the `winreg` registry key.|
-|CMMC Level 3|Policy Assignment|Network access: Shares that can be accessed anonymously|Specifies which network shares can be accessed by anonymous users. The default configuration for this policy setting has little effect because all users have to be authenticated before they can access shared resources on the server.|
-|CMMC Level 3|Policy Assignment|Network Security: Configure encryption types allowed for Kerberos|Specifies the encryption types that Kerberos is allowed to use.|
-|CMMC Level 3|Policy Assignment|Network security: LAN Manager authentication level|Specify which challenge-response authentication protocol is used for network logons. This choice affects the level of authentication protocol used by clients, the level of session security negotiated, and the level of authentication accepted by servers.|
-|CMMC Level 3|Policy Assignment|Network security: LDAP client signing requirements|Specify the level of data signing that is requested on behalf of clients that issue LDAP BIND requests.|
-|CMMC Level 3|Policy Assignment|Network security: Minimum session security for NTLM SSP based (including secure RPC) clients|Specifies which behaviors are allowed by clients for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. See [https://docs.microsoft.com/windows/security/threat-protection/security-policy-settings/network-security-minimum-session-security-for-ntlm-ssp-based-including-secure-rpc-servers](/windows/security/threat-protection/security-policy-settings/network-security-minimum-session-security-for-ntlm-ssp-based-including-secure-rpc-servers) for more information.|
-|CMMC Level 3|Policy Assignment|Network security: Minimum session security for NTLM SSP based (including secure RPC) servers|Specifies which behaviors are allowed by servers for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services.|
-|CMMC Level 3|Policy Assignment|Latest PHP version for App Services|Latest supported PHP version for App Services|
-|CMMC Level 3|Policy Assignment|Required retention period (days) for IoT Hub diagnostic logs||
-|CMMC Level 3|Policy Assignment|Name of the resource group for Network Watcher|Name of the resource group of NetworkWatcher, such as NetworkWatcherRG. This is the resource group where the Network Watchers are located.|
-|CMMC Level 3|Policy Assignment|Required auditing setting for SQL servers||
-|CMMC Level 3|Policy Assignment|Azure Data Box SKUs that support software-based double encryption|The list of Azure Data Box SKUs that support software-based double encryption|
-|CMMC Level 3|Policy Assignment|UAC: Admin Approval Mode for the Built-in Administrator account|Specifies the behavior of Admin Approval Mode for the built-in Administrator account.|
-|CMMC Level 3|Policy Assignment|UAC: Behavior of the elevation prompt for administrators in Admin Approval Mode|Specifies the behavior of the elevation prompt for administrators.|
-|CMMC Level 3|Policy Assignment|UAC: Detect application installations and prompt for elevation|Specifies the behavior of application installation detection for the computer.|
-|CMMC Level 3|Policy Assignment|UAC: Run all administrators in Admin Approval Mode|Specifies the behavior of all User Account Control (UAC) policy settings for the computer.|
-|CMMC Level 3|Policy Assignment|User and groups that may force shutdown from a remote system|Specifies which users and groups are permitted to shut down the computer from a remote location on the network.|
-|CMMC Level 3|Policy Assignment|Users and groups that are denied access to this computer from the network|Specifies which users or groups are explicitly prohibited from connecting to the computer across the network.|
-|CMMC Level 3|Policy Assignment|Users and groups that are denied local logon|Specifies which users and groups are explicitly not permitted to log on to the computer.|
-|CMMC Level 3|Policy Assignment|Users and groups that are denied logging on as a batch job|Specifies which users and groups are explicitly not permitted to log on to the computer as a batch job (that is, a scheduled task).|
-|CMMC Level 3|Policy Assignment|Users and groups that are denied logging on as a service|Specifies which service accounts are explicitly not permitted to register a process as a service.|
-|CMMC Level 3|Policy Assignment|Users and groups that are denied log on through Remote Desktop Services|Specifies which users and groups are explicitly not permitted to log on to the computer via Terminal Services/Remote Desktop Client.|
-|CMMC Level 3|Policy Assignment|Users and groups that may restore files and directories|Specifies which users and groups are permitted to bypass file, directory, registry, and other persistent object permissions when restoring backed up files and directories.|
-|CMMC Level 3|Policy Assignment|Users and groups that may shut down the system|Specifies which users and groups who are logged on locally to the computers in your environment are permitted to shut down the operating system with the Shut Down command.|
-|CMMC Level 3|Policy Assignment|Users or groups that may access this computer from the network|Specifies which remote users on the network are permitted to connect to the computer. This does not include Remote Desktop Connection.|
-|CMMC Level 3|Policy Assignment|Users or groups that may back up files and directories|Specifies users and groups allowed to circumvent file and directory permissions to back up the system.|
-|CMMC Level 3|Policy Assignment|Users or groups that may change the system time|Specifies which users and groups are permitted to change the time and date on the internal clock of the computer.|
-|CMMC Level 3|Policy Assignment|Users or groups that may change the time zone|Specifies which users and groups are permitted to change the time zone of the computer.|
-|CMMC Level 3|Policy Assignment|Users or groups that may create a token object|Specifies which users and groups are permitted to create an access token, which may provide elevated rights to access sensitive data.|
-|CMMC Level 3|Policy Assignment|Users or groups that may log on locally|Specifies which users or groups can interactively log on to the computer. Users who attempt to log on via Remote Desktop Connection or IIS also require this user right.|
-|CMMC Level 3|Policy Assignment|Users or groups that may log on through Remote Desktop Services|Specifies which users and groups, such as members of the Remote Desktop Users group, are permitted to log on through Remote Desktop Services.|
-|CMMC Level 3|Policy Assignment|Users or groups that may manage auditing and security log|Specifies users and groups permitted to change the auditing options for files and directories and clear the Security log.|
-|CMMC Level 3|Policy Assignment|Users or groups that may take ownership of files or other objects|Specifies which users and groups are permitted to take ownership of files, folders, registry keys, processes, or threads. This user right bypasses any permissions that are in place to protect objects to give ownership to the specified user.|
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/hipaa-hitrust-9-2.md
- Title: HIPAA HITRUST 9.2 blueprint sample overview
-description: Overview of the HIPAA HITRUST 9.2 blueprint sample. This blueprint sample helps customers assess specific HIPAA HITRUST 9.2 controls.
Previously updated : 09/08/2021--
-# HIPAA HITRUST 9.2 blueprint sample
-
-The HIPAA HITRUST 9.2 blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific HIPAA HITRUST 9.2 controls.
-This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture
-that must implement HIPAA HITRUST 9.2 controls.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/hipaa-hitrust-9-2.md) provides details on
-policy definitions included within this blueprint and how these policy definitions map to the
-**compliance domains** and **controls** in HIPAA HITRUST 9.2. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policy definitions. For
-more information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints HIPAA HITRUST 9.2 blueprint sample, the following steps must
-be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **HIPAA HITRUST** blueprint sample under _Other Samples_ and select **Use
- this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the HIPAA HITRUST 9.2 blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with HIPAA HITRUST 9.2 controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
-   **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the HIPAA
- HITRUST 9.2 blueprint sample." Then select **Publish** at the bottom of the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since they're
-   defined during the assignment of the blueprint. For a full list of artifact parameters and
- their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name |Parameter name |Description |
-||||
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Access through Internet facing endpoint should be restricted |Enable or disable overly permissive inbound NSG rules monitoring |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Accounts: Guest account status |Specifies whether the local Guest account is disabled. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Adaptive Application Controls should be enabled on virtual machines |Enable or disable the monitoring of application whitelisting in Azure Security Center |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Allow simultaneous connections to the Internet or a Windows Domain |Specify whether to prevent computers from connecting to both a domain based network and a non-domain based network at the same time. A value of 0 allows simultaneous connections, and a value of 1 blocks them. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |API App should only be accessible over HTTPS V2 |Enable or disable the monitoring of the use of HTTPS in API App V2 |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Application names (supports wildcards) |A semicolon-separated list of the names of the applications that should be installed. e.g. 'Microsoft SQL Server 2014 (64-bit); Microsoft Visual Studio Code' or 'Microsoft SQL Server 2014*' (to match any application starting with 'Microsoft SQL Server 2014') |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Audit Process Termination |Specifies whether audit events are generated when a process has exited. Recommended for monitoring termination of critical processes. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Audit unrestricted network access to storage accounts |Enable or disable the monitoring of network access to storage account |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Audit: Shut down system immediately if unable to log security audits |Audits if the system will shut down when unable to log Security events. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Certificate thumbprints |A semicolon-separated list of certificate thumbprints that should exist under the Trusted Root certificate store (Cert:\LocalMachine\Root). e.g. THUMBPRINT1;THUMBPRINT2;THUMBPRINT3 |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Diagnostic logs in Batch accounts should be enabled |Enable or disable the monitoring of diagnostic logs in Batch accounts |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Diagnostic logs in Event Hub should be enabled |Enable or disable the monitoring of diagnostic logs in Event Hub accounts |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Diagnostic logs in Search services should be enabled |Enable or disable the monitoring of diagnostic logs in Azure Search service |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Diagnostic logs in Virtual Machine Scale Sets should be enabled |Enable or disable the monitoring of diagnostic logs in Virtual Machine Scale Sets |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Disk encryption should be applied on virtual machines |Enable or disable the monitoring for VM disk encryption |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Enable insecure guest logons |Specifies whether the SMB client will allow insecure guest logons to an SMB server. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Just-In-Time network access control should be applied on virtual machines |Enable or disable the monitoring of just-in-time network access |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Management ports should be closed on your virtual machines |Enable or disable the monitoring of open management ports on Virtual Machines |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |MFA should be enabled accounts with write permissions on your subscription |Enable or disable the monitoring of MFA for accounts with write permissions in subscription |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |MFA should be enabled on accounts with owner permissions on your subscription |Enable or disable the monitoring of MFA for accounts with owner permissions in subscription |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Network access: Remotely accessible registry paths |Specifies which registry paths will be accessible over the network, regardless of the users or groups listed in the access control list (ACL) of the `winreg` registry key. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Network access: Remotely accessible registry paths and sub-paths |Specifies which registry paths and sub-paths will be accessible over the network, regardless of the users or groups listed in the access control list (ACL) of the `winreg` registry key. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Network access: Shares that can be accessed anonymously |Specifies which network shares can be accessed by anonymous users. The default configuration for this policy setting has little effect because all users have to be authenticated before they can access shared resources on the server. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Recovery console: Allow floppy copy and access to all drives and all folders |Specifies whether to make the Recovery Console SET command available, which allows setting of recovery console environment variables. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Remote debugging should be turned off for API App |Enable or disable the monitoring of remote debugging for API App |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Remote debugging should be turned off for Web Application |Enable or disable the monitoring of remote debugging for Web App |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Required retention (in days) for logs in Batch accounts |The required diagnostic logs retention period in days |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Required retention (in days) of logs in Azure Search service |The required diagnostic logs retention period in days |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Required retention (in days) of logs in Event Hub accounts |The required diagnostic logs retention period in days |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Resource Group Name for Storage Account (must exist) to deploy diagnostic settings for Network Security Groups |The resource group that the storage account will be created in. This resource group must already exist. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Role-Based Access Control (RBAC) should be used on Kubernetes Services |Enable or disable the monitoring of Kubernetes Services without RBAC enabled |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |SQL managed instance TDE protector should be encrypted with your own key |Enable or disable the monitoring of Transparent Data Encryption (TDE) with your own key support. TDE with your own key support provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |SQL server TDE protector should be encrypted with your own key |Enable or disable the monitoring of Transparent Data Encryption (TDE) with your own key support. TDE with your own key support provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Storage Account Prefix for Regional Storage Account to deploy diagnostic settings for Network Security Groups |This prefix will be combined with the network security group location to form the created storage account name. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |System updates on virtual machine scale sets should be installed |Enable or disable virtual machine scale sets reporting of system updates |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Turn off multicast name resolution |Specifies whether LLMNR, a secondary name resolution protocol that transmits using multicast over a local subnet link on a single subnet, is enabled. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Virtual machines should be migrated to new Azure Resource Manager resources |Enable or disable the monitoring of classic compute VMs |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Vulnerabilities in security configuration on your virtual machine scale sets should be remediated |Enable or disable virtual machine scale sets OS vulnerabilities monitoring |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Vulnerabilities should be remediated by a Vulnerability Assessment solution |Enable or disable the detection of VM vulnerabilities by a vulnerability assessment solution |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Vulnerability assessment should be enabled on your SQL managed instances |Audit SQL managed instances which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Domain): Apply local firewall rules |Specifies whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy for the Domain profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Domain): Behavior for outbound connections |Specifies the behavior for outbound connections for the Domain profile that do not match an outbound firewall rule. The default value of 0 means to allow connections, and a value of 1 means to block connections. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Domain): Display notifications |Specifies whether Windows Firewall with Advanced Security displays notifications to the user when a program is blocked from receiving inbound connections, for the Domain profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Domain): Use profile settings |Specifies whether Windows Firewall with Advanced Security uses the settings for the Domain profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Private): Apply local connection security rules |Specifies whether local administrators are allowed to create connection security rules that apply together with connection security rules configured by Group Policy for the Private profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Private): Apply local firewall rules |Specifies whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy for the Private profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Private): Behavior for outbound connections |Specifies the behavior for outbound connections for the Private profile that do not match an outbound firewall rule. The default value of 0 means to allow connections, and a value of 1 means to block connections. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Private): Display notifications |Specifies whether Windows Firewall with Advanced Security displays notifications to the user when a program is blocked from receiving inbound connections, for the Private profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Private): Use profile settings |Specifies whether Windows Firewall with Advanced Security uses the settings for the Private profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Public): Apply local connection security rules |Specifies whether local administrators are allowed to create connection security rules that apply together with connection security rules configured by Group Policy for the Public profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Public): Apply local firewall rules |Specifies whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy for the Public profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Public): Behavior for outbound connections |Specifies the behavior for outbound connections for the Public profile that do not match an outbound firewall rule. The default value of 0 means to allow connections, and a value of 1 means to block connections. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Public): Display notifications |Specifies whether Windows Firewall with Advanced Security displays notifications to the user when a program is blocked from receiving inbound connections, for the Public profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall (Public): Use profile settings |Specifies whether Windows Firewall with Advanced Security uses the settings for the Public profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall: Domain: Allow unicast response |Specifies whether Windows Firewall with Advanced Security permits the local computer to receive unicast responses to its outgoing multicast or broadcast messages; for the Domain profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall: Private: Allow unicast response |Specifies whether Windows Firewall with Advanced Security permits the local computer to receive unicast responses to its outgoing multicast or broadcast messages; for the Private profile. |
-|Audit HITRUST/HIPAA controls and deploy specific VM Extensions to support audit requirements |Windows Firewall: Public: Allow unicast response |Specifies whether Windows Firewall with Advanced Security permits the local computer to receive unicast responses to its outgoing multicast or broadcast messages; for the Public profile. |
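Several of the guest-configuration parameters above, such as **Application names** and **Certificate thumbprints**, take semicolon-separated lists, and application-name entries may use `*` wildcards. As an illustration only (a minimal Python sketch of the parameter semantics, not the actual guest-configuration implementation — the helper names are hypothetical), such a value could be parsed and matched like this:

```python
from fnmatch import fnmatch


def parse_list(value: str) -> list[str]:
    """Split a semicolon-separated parameter value into trimmed entries."""
    return [item.strip() for item in value.split(";") if item.strip()]


def matches_any(installed_app: str, patterns: list[str]) -> bool:
    """True if the installed application name matches any list entry.

    Entries may use '*' wildcards, e.g. 'Microsoft SQL Server 2014*'
    matches any application name starting with 'Microsoft SQL Server 2014'.
    """
    return any(fnmatch(installed_app, pattern) for pattern in patterns)


patterns = parse_list("Microsoft SQL Server 2014*; Microsoft Visual Studio Code")
print(matches_any("Microsoft SQL Server 2014 (64-bit)", patterns))  # True
print(matches_any("Notepad", patterns))  # False
```

The same semicolon-separated shape applies to the thumbprint list parameter (`THUMBPRINT1;THUMBPRINT2;THUMBPRINT3`), without wildcard matching.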
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/index.md
Title: Index of blueprint samples description: Index of compliance and standard samples for deploying environments, policies, and Cloud Adoptions Framework foundations with Azure Blueprints. Previously updated : 08/17/2021 Last updated : 03/11/2022 # Azure Blueprints samples
quality and ready to deploy today to assist you in meeting your various complian
| [Australian Government ISM PROTECTED](./ism-protected/index.md) | Provides guardrails for compliance to Australian Government ISM PROTECTED. | | [Azure Security Benchmark Foundation](./azure-security-benchmark-foundation/index.md) | Deploys and configures Azure Security Benchmark Foundation. | | [Canada Federal PBMM](./canada-federal-pbmm.md) | Provides guardrails for compliance to Canada Federal Protected B, Medium Integrity, Medium Availability (PBMM). |
-| [CIS Microsoft Azure Foundations Benchmark v1.3.0](./cis-azure-1-3-0.md) | Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.3.0 recommendations. |
-| [CIS Microsoft Azure Foundations Benchmark v1.1.0](./cis-azure-1-1-0.md) | Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations. |
-| [CMMC Level 3](./cmmc-l3.md) | Provides guardrails for compliance with CMMC Level 3. |
-| [HIPAA HITRUST 9.2](./hipaa-hitrust-9-2.md) | Provides a set of policies to help comply with HIPAA HITRUST. |
-| [IRS 1075 September 2016](./irs-1075-sept2016.md) | Provides guardrails for compliance with IRS 1075.|
| [ISO 27001](./iso-27001-2013.md) | Provides guardrails for compliance with ISO 27001. | | [ISO 27001 Shared Services](./iso27001-shared/index.md) | Provides a set of compliant infrastructure patterns and policy guardrails that help toward ISO 27001 attestation. | | [ISO 27001 App Service Environment/SQL Database workload](./iso27001-ase-sql-workload/index.md) | Provides more infrastructure to the [ISO 27001 Shared Services](./iso27001-shared/index.md) blueprint sample. |
-| [Media](./medi) | Provides a set of policies to help comply with Media MPAA. |
| [New Zealand ISM Restricted](./new-zealand-ism.md) | Assigns policies to address specific New Zealand Information Security Manual controls. |
-| [NIST SP 800-171 R2](./nist-sp-800-171-r2.md) | Provides guardrails for compliance with NIST SP 800-171 R2. |
-| [PCI-DSS v3.2.1](./pci-dss-3.2.1/index.md) | Provides a set of policies to aide in PCI-DSS v3.2.1 compliance. |
| [SWIFT CSP-CSCF v2020](./swift-2020/index.md) | Aides in SWIFT CSP-CSCF v2020 compliance. | | [UK OFFICIAL and UK NHS Governance](./ukofficial-uknhs.md) | Provides a set of compliant infrastructure patterns and policy guardrails that help toward UK OFFICIAL and UK NHS attestation. | | [CAF Foundation](./caf-foundation/index.md) | Provides a set of controls to help you manage your cloud estate in alignment with the [Microsoft Cloud Adoption Framework for Azure (CAF)](/azure/architecture/cloud-adoption/governance/journeys/index). |
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/irs-1075-sept2016.md
- Title: IRS 1075 September 2016 blueprint sample
-description: Overview of the IRS 1075 September 2016 blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/08/2021--
-# IRS 1075 September 2016 blueprint sample
-
-The IRS 1075 September 2016 blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific
-IRS 1075 September 2016 controls. This blueprint helps
-customers deploy a core set of policies for any Azure-deployed architecture that must implement
-controls for IRS 1075 September 2016.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/irs-1075-sept2016.md) provides details on
-policy definitions included within this blueprint and how these policy definitions map to the
-**controls** in the IRS 1075 September 2016 framework. When assigned to an architecture, resources
-are evaluated by Azure Policy for non-compliance with assigned policy definitions. For more
-information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints IRS 1075 September 2016 blueprint sample,
-the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
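The checklist above is a strict lifecycle: a copy of the sample starts in **Draft** mode, must be **Published**, and only then can be assigned to a subscription. As a minimal sketch of that gating rule (the enum and function names are illustrative only, not part of any Azure SDK):

```python
from enum import Enum

class BlueprintState(Enum):
    # Illustrative states only; mirrors the Draft -> Published flow described above.
    DRAFT = "Draft"
    PUBLISHED = "Published"

def can_assign(state: BlueprintState) -> bool:
    # Per the steps above, a blueprint copy is created in Draft mode
    # and must be Published before it can be assigned.
    return state is BlueprintState.PUBLISHED

print(can_assign(BlueprintState.DRAFT))
```

Attempting to assign a Draft copy in the portal simply isn't offered; the check above is only a way to make the ordering of the three checklist steps explicit.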
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **IRS 1075 September 2016** blueprint sample under _Other Samples_ and select **Use this
- sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the IRS 1075 September 2016 blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with IRS 1075 September 2016 controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
   **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the IRS 1075
- September 2016 blueprint sample." Then select **Publish** at the bottom of the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since they're
   defined during the assignment of the blueprint. For a full list of artifact parameters and
- their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|Audit IRS 1075 (Rev.11-2016) controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Log Analytics workspace ID that VMs should be configured for|This is the ID (GUID) of the Log Analytics workspace that the VMs should be configured for.|
-|Audit IRS 1075 (Rev.11-2016) controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of resource types that should have diagnostic logs enabled|List of resource types to audit if diagnostic log setting is not enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../azure-monitor/essentials/resource-logs-schema.md#service-specific-schemas).|
-|Audit IRS 1075 (Rev.11-2016) controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be excluded from Windows VM Administrators group|A semicolon-separated list of members that should be excluded in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|Audit IRS 1075 (Rev.11-2016) controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be included in Windows VM Administrators group|A semicolon-separated list of members that should be included in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Linux VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Log Analytics Agent for Linux VMs|Policy assignment|Log Analytics workspace for Linux VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics Agent for Linux VMs|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Windows VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../policy/concepts/effects.md) |
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
-|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
-
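Several parameters in the table above (the Windows VM Administrators group lists) take a single semicolon-separated string such as `Administrator; myUser1; myUser2`. A small, hypothetical helper shows how such a value breaks down into individual members; the function name is illustrative and not part of the blueprint tooling:

```python
def parse_member_list(value: str) -> list[str]:
    """Split a semicolon-separated member list (for example, the Windows VM
    Administrators group parameters above) and trim surrounding whitespace."""
    return [member.strip() for member in value.split(";") if member.strip()]

print(parse_member_list("Administrator; myUser1; myUser2"))
```

When building the assignment programmatically, a check like this can catch empty entries or stray separators before the value reaches the policy assignment.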
-## Next steps
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/media/control-mapping.md
- Title: Media blueprint sample controls
-description: Control mapping of the Media blueprint samples. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 09/08/2021--
-# Control mapping of the Media blueprint sample
-
-The following article details how the Azure Blueprints Media blueprint sample maps to the Media
-controls. For more information about the controls, see
-[Media](https://www.motionpictures.org/best-practices).
-
-The following mappings are to the **Media** controls. Use the navigation on the right to jump
-directly to a specific control mapping. Many of the mapped controls are implemented with an
-[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open
-**Policy** in the Azure portal and select the **Definitions** page. Then, find and select the
-**\[Preview\]: Audit Media controls** built-in policy initiative.
-
-> [!IMPORTANT]
-> Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
-> definitions. These policies may help you
-> [assess compliance](../../../policy/how-to/get-compliance-data.md) with the control; however,
-> there often is not a one-to-one or complete match between a control and one or more policies. As
-> such, **Compliant** in Azure Policy refers only to the policies themselves; this doesn't ensure
-> you're fully compliant with all requirements of a control. In addition, the compliance standard
-> includes controls that aren't addressed by any Azure Policy definitions at this time. Therefore,
-> compliance in Azure Policy is only a partial view of your overall compliance status. The
-> associations between controls and Azure Policy definitions for this compliance blueprint sample
-> may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/governance/blueprints/samples/medi).
-
-## Access Control
-
-### AC-1.1 - Ensure no root access key exists
-
-- \[Preview\]: Deploy prerequisites to audit Windows VMs that do not contain the specified
- certificates in Trusted Root
-
-### AC-1.2 - Passwords, PINs, and Tokens must be protected
-
-- \[Preview\]: Deploy prerequisites to audit Windows VMs that do not restrict the minimum password
- length to 14 characters
-
-### AC-1.8 - Shared account access is prohibited
-
-- All authorization rules except RootManageSharedAccessKey should be removed from Service Bus
- namespace
-
-### AC-1.9 - System must restrict access to authorized users.
-
-- Audit unrestricted network access to storage accounts
-
-### AC-1.14 - System must enforce access rights.
-
-- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'User Rights Assignment'
-
-### AC-1.15 - Prevent unauthorized access to security relevant information or functions.
-
-- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - System
- settings'
-
-### AC-1.21 - Separation of duties must be enforced through appropriate assignment of role.
-
-- \[Preview\]: Role-Based Access Control (RBAC) should be used on Kubernetes Services
-
-### AC-1.40 - Ensure that systems are not connecting trusted network and untrusted networks at the same time.
-
-- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'Security Options -
- Network Access'
-
-### AC-1.42 & AC-1.43 - Remote access for non-employees must be restricted to allow access only to specifically approved information systems
-
-- \[Preview\]: Show audit results from Linux VMs that allow remote connections from accounts without
- passwords
-
-### AC-1.50 - Log security related events for all information system components.
-
-- Diagnostic logs in Logic Apps should be enabled
-
-### AC-1.54 - Ensure multi-factor authentication (MFA) is enabled for all cloud console users.
-
-- MFA should be enabled accounts with write permissions on your subscription
-- Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write
 privileges to prevent a breach of accounts or resources.
-
-## Auditing & Logging
-
-### AL-2.1 - Successful and unsuccessful events must be logged.
-
-- Diagnostic logs in Search services should be enabled
-
-### AL-2.16 - Network devices/instances must log any event classified as a critical security event by that network device/instance (ELBs, web application firewalls, etc.)
-
-- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - Accounts'
-
-### AL-2.17 - Servers/instances must log any event classified as a critical security event by that server/instance
-
-- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - Accounts'
-
-### AL-2.19 - Domain events must log any event classified as a critical or high security event by the domain management software
-
-- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - Accounts'
-- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'Security Options -
- Microsoft Network Client'
-
-### AL-2.20 - Domain events must log any event classified as a critical security event by domain security controls
-
-- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - Accounts'
-
-### AL-2.21 - Domain events must log any access or changes to the domain log
-
-- \[Preview\]: Show audit results from Windows VMs configurations in 'Security Options - Recovery
- console'
-
-## Cryptographic Controls
-
-### CC-4.2 - Applications and systems must use current cryptographic solutions for protecting data.
-
-- Transparent Data Encryption on SQL databases should be enabled
-- Transparent data encryption should be enabled to protect data-at-rest and meet compliance
- requirements
-
-### CC-4.5 - Digital Certificates must be signed by an approved Certificate Authority.
-
-- \[Preview\]: Show audit results from Windows VMs that contain certificates expiring within the
- specified number of days
-
-### CC-4.6 - Digital Certificates must be uniquely assigned to a user or device.
-
-- \[Preview\]: Deploy prerequisites to audit Windows VMs that contain certificates expiring within
- the specified number of days
-
-### CC-4.7 - Cryptographic material must be stored to enable decryption of the records for the length of time the records are retained.
-
-- Disk encryption should be applied on virtual machines
-- VMs without an enabled disk encryption will be monitored by Azure Security Center as
- recommendations
-
-### CC-4.8 - Secret and private keys must be stored securely.
-
-- Transparent Data Encryption on SQL databases should be enabled
-- Transparent data encryption should be enabled to protect data-at-rest and meet compliance
- requirements
-
-## Change & Config Management
-
-### CM-5.2 - Only authorized users may implement approved changes on the system.
-
-- System updates should be installed on your machines
-- Missing security system updates on your servers will be monitored by Azure Security Center as
- recommendations
-
-### CM-5.12 - Maintain an up-to-date, complete, accurate, and readily available baseline configuration of the information system.
-
-- System updates should be installed on your machines
-- Missing security system updates on your servers will be monitored by Azure Security Center as
- recommendations
-
-### CM-5.13 - Employ automated tools to maintain a baseline configuration of the information system.
-
-- System updates should be installed on your machines
-- Missing security system updates on your servers will be monitored by Azure Security Center as
- recommendations
-
-### CM-5.14 - Identify and disable unnecessary and/or non-secure functions, ports, protocols and services.
-
-- Network interfaces should disable IP forwarding
-- \[Preview\]: IP Forwarding on your virtual machine should be disabled
-
-### CM-5.19 - Monitor changes to the security configuration settings.
-
-- Deploy Diagnostic Settings for Network Security Groups
-
-### CM-5.22 - Ensure that only authorized software and updates are installed on Company systems.
-
-- System updates should be installed on your machines
-- Missing security system updates on your servers will be monitored by Azure Security Center as
- recommendations
-
-## Identity & Authentication
-
-### IA-7.1 - User accounts must be uniquely assigned to individuals for access to information that is not classified as Public. Account IDs must be constructed using a standardized logical format.
-
-- External accounts with owner permissions should be removed from your subscription
-- External accounts with owner permissions should be removed from your subscription in order to
- prevent unmonitored access.
-
-## Network Security
-
-### NS-9.2 - Access to network device management functionality is restricted to authorized users.
-
-- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'Security Options -
- Network Access'
-
-### NS-9.3 - All network devices must be configured using their most secure configurations.
-
-- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'Security Options -
- Network Access'
-
-### NS-9.5 - All network connections to a system through a firewall must be approved and audited on a regular basis.
-
-- \[Preview\]: Show audit results from Windows VMs configurations in 'Windows Firewall Properties'
-
-### NS-9.7 - Appropriate controls must be present at any boundary between a trusted network and any untrusted or public network.
-
-- \[Preview\]: Deploy prerequisites to audit Windows VMs configurations in 'Windows Firewall
- Properties'
-
-## Security Planning
-
-### SP-11.3 - Threats must be identified that could negatively impact the confidentiality, integrity, or availability of Company information and content along with the likelihood of their occurrence.
-
-- Advanced Threat Protection types should be set to 'All' in SQL managed instance Advanced Data
- Security settings
-
-## Security Continuity
-
-### SC-12.5 - Data in long-term storage must be accessible throughout the retention period and protected against media degradation and technology changes.
-
-- SQL servers should be configured with auditing retention days greater than 90 days.
-- Audit SQL servers configured with an auditing retention period of less than 90 days.
-
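The two policy lines above state the same check from the enforcement and audit sides, and the artifact parameter tables in this article note that a retention value of 0 indicates unlimited retention. As a hedged sketch (the function name and shape are illustrative, not part of the policy definition), the evaluation amounts to:

```python
def auditing_retention_compliant(retention_days: int) -> bool:
    # Per the artifact parameter tables in this article,
    # 0 indicates unlimited retention, which satisfies the control.
    if retention_days == 0:
        return True
    # Otherwise the retention period must be greater than 90 days.
    return retention_days > 90

print(auditing_retention_compliant(180))
```

This makes the edge case explicit: a server left at the 180-day default passes, while one configured with, say, 30 days is flagged by the audit policy.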
-## System Integrity
-
-### SI-14.3 - Only authorized personnel may monitor network and user activities.
-
-- Vulnerabilities on your SQL databases should be remediated
-- Monitor Vulnerability Assessment scan results and recommendations for how to remediate database
- vulnerabilities.
-
-### SI-14.4 - Internet facing systems must have intrusion detection.
-
-- Deploy Threat Detection on SQL servers
-
-### SI-14.13 - Standardized centrally managed anti-malware software should be implemented across the company.
-
-- Deploy default Microsoft IaaSAntimalware extension for Windows Server
-
-### SI-14.14 - Anti-malware software must scan computers and media weekly at a minimum.
-
-- Deploy default Microsoft IaaSAntimalware extension for Windows Server
-
-## Vulnerability Management
-
-### VM-15.4 - Ensure that applications are scanned for vulnerabilities on a monthly basis.
-
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks.
-
-### VM-15.5 - Ensure that vulnerabilities are identified, paired to threats, and evaluated for risk.
-
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks.
-
-### VM-15.6 - Ensure that identified vulnerabilities have been remediated within a mutually agreed upon timeline.
-
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks.
-
-### VM-15.7 - Access to and use of vulnerability management systems must be restricted to authorized personnel.
-
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks.
-
-> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
-
-## Next steps
-
-You've reviewed the control mapping of the Media blueprint sample. Next, visit the following
-articles to learn about the overview and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [Media blueprint - Overview](./index.md)
-> [Media blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/media/deploy.md
- Title: Deploy Media blueprint sample
-description: Deploy steps for the Media blueprint sample including blueprint artifact parameter details.
Previously updated : 09/08/2021--
-# Deploy the Media blueprint sample
-
-To deploy the Media blueprint sample, the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-## Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** and search for and select **Policy** in the left pane. On the **Policy**
- page, select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **Media** blueprint sample under _Other Samples_ and select **Use this
- sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-## Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move
-it away from the standard.
-
-1. Select **All services** and search for and select **Policy** in the left pane. On the **Policy**
- page, select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
   **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the Media blueprint sample." Then select **Publish** at the bottom of the page.
-
-## Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are
-provided to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** and search for and select **Policy** in the left pane. On the **Policy**
- page, select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see [blueprints resource locking](../../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../../concepts/parameters.md#dynamic-parameters) since
   they're defined during the assignment of the blueprint. For a full list of artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/)
-> to estimate the cost of running resources deployed by this blueprint sample.
-
-## Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VMs |Policy assignment |Log Analytics workspace for Linux VMs |For more information, see [Create a Log Analytics workspace in the Azure portal](../../../../azure-monitor/logs/quick-create-workspace.md). |
-|\[Preview\]: Deploy Log Analytics Agent for Linux VMs |Policy assignment |Optional: List of VM images that have supported Linux OS to add to scope |An empty array may be used to indicate no optional parameters: `[]` |
-|\[Preview\]: Deploy Log Analytics Agent for Windows VMs |Policy assignment |Optional: List of VM images that have supported Windows OS to add to scope |An empty array may be used to indicate no optional parameters: `[]` |
-|\[Preview\]: Deploy Log Analytics Agent for Windows VMs |Policy assignment |Log Analytics workspace for Windows VMs |For more information, see [Create a Log Analytics workspace in the Azure portal](../../../../azure-monitor/logs/quick-create-workspace.md). |
-|\[Preview\]: Audit Media controls and deploy specific VM Extensions to support audit requirements |Policy assignment |Log Analytics workspace ID that VMs should be configured for |This is the ID (GUID) of the Log Analytics workspace that the VMs should be configured for. |
-|\[Preview\]: Audit Media controls and deploy specific VM Extensions to support audit requirements |Policy assignment |List of resource types that should have diagnostic logs enabled |List of resource types to audit if diagnostic log setting isn't enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../../azure-monitor/essentials/resource-logs-schema.md#service-specific-schemas). |
|\[Preview\]: Audit Media controls and deploy specific VM Extensions to support audit requirements |Policy assignment |List of users that should be excluded from Windows VM Administrators group |A semicolon-separated list of members that should be excluded from the Administrators local group. Example: `Administrator; myUser1; myUser2` |
-|\[Preview\]: Audit Media controls and deploy specific VM Extensions to support audit requirements |Policy assignment |List of users that should be included in Windows VM Administrators group |A semicolon-separated list of members that should be included in the Administrators local group. Example: `Administrator; myUser1; myUser2` |
-|Deploy Advanced Threat Protection on Storage Accounts |Policy assignment |Effect |Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md). |
-|Deploy Auditing on SQL servers |Policy assignment |The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, _180_ days if unspecified) |
-|Deploy Auditing on SQL servers |Policy assignment |Resource group name for storage account for SQL server auditing |Auditing writes database events to an audit log in your Azure Storage account (a storage account is created in each region where a SQL Server is created that is shared by all servers in that region). Important - for proper operation of Auditing don't delete or rename the resource group or the storage accounts. |
-|Deploy diagnostic settings for Network Security Groups |Policy assignment |Storage account prefix for network security group diagnostics |This prefix is combined with the network security group location to form the created storage account name. |
-|Deploy diagnostic settings for Network Security Groups |Policy assignment |Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account is created in. This resource group must already exist. |
-
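The NSG diagnostics rows above note that the storage account name is formed by combining a prefix with the network security group location. As a rough illustration (the helper name and concatenation scheme are assumptions, not the policy's actual logic), composing such a name while respecting Azure's storage naming rules — 3 to 24 lowercase letters and digits — can be sketched as:

```python
import re

def nsg_diagnostics_storage_name(prefix: str, location: str) -> str:
    """Compose a candidate storage account name from a prefix and an NSG
    location, normalized to Azure storage naming rules (lowercase letters
    and digits only, at most 24 characters). The concatenation scheme is
    illustrative; the policy's actual naming logic may differ."""
    candidate = f"{prefix}{location}".lower()
    # Drop characters storage account names don't allow (spaces, punctuation).
    candidate = re.sub(r"[^a-z0-9]", "", candidate)
    # Enforce the 24-character maximum.
    return candidate[:24]

print(nsg_diagnostics_storage_name("nsgdiag", "East US 2"))  # nsgdiageastus2
```

Choosing a short prefix leaves room for the location suffix within the 24-character limit.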
-## Next steps
-
-Now that you've reviewed the steps to deploy the Media sample, visit the following
-articles to learn about the overview and control mapping:
-
-> [!div class="nextstepaction"]
-> [Media blueprints - Overview](./index.md)
-> [Media blueprints - Control mapping](./control-mapping.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/media/index.md
- Title: Media blueprint sample overview
-description: Overview of the Media blueprint sample. This blueprint sample helps customers assess specific Media controls.
Previously updated : 09/08/2021--
-# Overview of the Media blueprint sample
-
-The Media blueprint sample provides a set of governance guardrails using
-[Azure Policy](../../../policy/overview.md) that help you work toward
-[Media](https://www.hhs.gov/hipaa/for-professionals/security/laws-regulations/index.html)
-attestation.
-
-## Blueprint sample
-
-The blueprint sample helps customers deploy a core set of policies for any Azure-deployed
-architecture requiring accreditation or compliance with the Media framework. The
-[control mapping](./control-mapping.md) section provides details on policies included within this
-initiative and how these policies help meet various controls defined by the Media framework. When
-assigned to an architecture, resources are evaluated by Azure Policy for compliance with assigned
-policies.
-
-## Next steps
-
-You've reviewed the overview of the Media blueprint sample. Next, visit the following
-articles to learn about the control mapping and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [Media blueprint - Control mapping](./control-mapping.md)
-> [Media blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/nist-sp-800-171-r2.md
- Title: NIST SP 800-171 R2 blueprint sample overview
-description: Overview of the NIST SP 800-171 R2 blueprint sample. This blueprint sample helps customers assess specific NIST SP 800-171 R2 requirements or controls.
Previously updated : 09/08/2021--
-# NIST SP 800-171 R2 blueprint sample
-
-The NIST SP 800-171 R2 blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific NIST SP 800-171 R2
-requirements or controls. This blueprint helps customers deploy a core set of policies for any
-Azure-deployed architecture that must implement NIST SP 800-171 R2 requirements or controls.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/nist-sp-800-171-r2.md) provides details on
-policy definitions included within this blueprint and how these policy definitions map to the
-**compliance domains** and **requirements** in NIST SP 800-171 R2. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policy definitions. For
-more information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints NIST SP 800-171 R2 blueprint sample, the following steps must
-be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **NIST SP 800-171 R2** blueprint sample under _Other Samples_ and select **Use
- this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the NIST SP 800-171 R2 blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with NIST SP 800-171 requirements.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the NIST SP
- 800-171 R2 blueprint sample." Then select **Publish** at the bottom of the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since they're
- defined during the assignment of the blueprint. For a full list of artifact parameters and
- their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|List of users that should be excluded from Windows VM Administrators group|A semicolon-separated list of members that should be excluded from the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|List of users that should be included in Windows VM Administrators group|A semicolon-separated list of members that should be included in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|List of regions where Network Watcher should be enabled|A semicolon-separated list of regions. To see a complete list of regions use Get-AzLocation. Ex: East US; East US2|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Log Analytics workspace ID that VMs should be configured for|This is the ID (GUID) of the Log Analytics workspace that the VMs should be configured for.|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Optional: List of Windows VM images that support Log Analytics agent to add to audit scope|A semicolon-separated list of images|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Optional: List of Linux VM images that support Log Analytics agent to add to audit scope|A semicolon-separated list of images|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Latest PHP version|Latest supported PHP version for App Services|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Latest Java version|Latest supported Java version for App Services|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Latest Windows Python version|Latest supported Python version for App Services|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Latest Linux Python version|Latest supported Python version for App Services|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|List of resource types that should have diagnostic logs enabled|List of resource types to audit if diagnostic log setting is not enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../azure-monitor/essentials/resource-logs-schema.md).|
-|\[Preview\]: NIST SP 800-171 R2|Policy assignment|Minimum TLS version for Windows Web servers|The minimum TLS protocol version that should be enabled on Windows web servers.|
-
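Several parameters in the table above take semicolon-separated lists (for example, `Administrator; myUser1; myUser2` for local group members, or `East US; East US2` for regions). A minimal sketch of splitting such a value into clean entries before use — the function name is illustrative, not part of the blueprint:

```python
def parse_semicolon_list(value: str) -> list[str]:
    """Split a semicolon-separated parameter value (for example
    'Administrator; myUser1; myUser2') into trimmed, non-empty entries."""
    return [item.strip() for item in value.split(";") if item.strip()]

print(parse_semicolon_list("Administrator; myUser1; myUser2"))
# ['Administrator', 'myUser1', 'myUser2']
```

Trailing semicolons and stray whitespace around entries are tolerated, which matches how the examples in the table are written.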
-## Next steps
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md
- Title: PCI-DSS v3.2.1 blueprint sample controls
-description: Control mapping of the Payment Card Industry Data Security Standard v3.2.1 blueprint sample to Azure Policy and Azure RBAC.
Previously updated : 09/08/2021--
-# Control mapping of the PCI-DSS v3.2.1 blueprint sample
-
-The following article details how the Azure Blueprints PCI-DSS v3.2.1 blueprint sample maps to the
-PCI-DSS v3.2.1 controls. For more information about the controls, see
-[PCI-DSS v3.2.1](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf).
-
-The following mappings are to the **PCI-DSS v3.2.1:2018** controls. Use the navigation on the right
-to jump directly to a specific control mapping. Many of the mapped controls are implemented with an
-[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open
-**Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **PCI
-v3.2.1:2018** built-in policy initiative.
-
-> [!IMPORTANT]
-> Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
-> definitions. These policies may help you
-> [assess compliance](../../../policy/how-to/get-compliance-data.md) with the control; however,
-> there often is not a one-to-one or complete match between a control and one or more policies. As
-> such, **Compliant** in Azure Policy refers only to the policies themselves; this doesn't ensure
-> you're fully compliant with all requirements of a control. In addition, the compliance standard
-> includes controls that aren't addressed by any Azure Policy definitions at this time. Therefore,
-> compliance in Azure Policy is only a partial view of your overall compliance status. The
-> associations between controls and Azure Policy definitions for this compliance blueprint sample
-> may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md).
-
-## 1.3.2 and 1.3.4 Boundary Protection
-
-This blueprint helps you manage and control networks by assigning [Azure
-Policy](../../../policy/overview.md) definitions that monitor network security groups with
-permissive rules. Rules that are too permissive may allow unintended network access and should be
-reviewed. This blueprint also assigns Azure Policy definitions that monitor unprotected endpoints,
-applications, and storage accounts. Endpoints and applications that aren't protected by a firewall,
-and storage accounts with unrestricted access can allow unintended access to information contained
-within the information system.
-
-- Audit unrestricted network access to storage accounts
-- Access through Internet facing endpoint should be restricted
-
-## 3.4.a, 4.1, 4.1.g, 4.1.h and 6.5.3 Cryptographic Protection
-
-This blueprint helps you enforce your policy on the use of cryptographic controls by assigning
-[Azure Policy](../../../policy/overview.md) definitions that enforce specific cryptographic controls
-and audit the use of weak cryptographic settings. Understanding where your Azure resources may have
-non-optimal cryptographic configurations can help you take corrective actions to ensure resources
-are configured in accordance with your information security policy. Specifically, the policies
-assigned by this blueprint require transparent data encryption on SQL databases and audit missing
-encryption on storage accounts and automation account variables. There are also policies that audit
-insecure connections to storage accounts, Function Apps, Web Apps, API Apps, and Redis Cache, and
-audit unencrypted Service Fabric communication.
-
-- Function App should only be accessible over HTTPS
-- Web Application should only be accessible over HTTPS
-- API App should only be accessible over HTTPS
-- Transparent Data Encryption on SQL databases should be enabled
-- Disk encryption should be applied on virtual machines
-- Automation account variables should be encrypted
-- Only secure connections to your Redis Cache should be enabled
-- Secure transfer to storage accounts should be enabled
-- Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign
-- Deploy SQL DB transparent data encryption
-
-## 5.1, 6.2, 6.6 and 11.2.1 Vulnerability Scanning and System Updates
-
-This blueprint helps you manage information system vulnerabilities by assigning [Azure
-Policy](../../../policy/overview.md) definitions that monitor missing system updates, operating
-system vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure Security
-Center. Azure Security Center provides reporting capabilities that enable you to have real-time
-insight into the security state of deployed Azure resources.
-
-- Monitor missing Endpoint Protection in Azure Security Center
-- Deploy default Microsoft IaaSAntimalware extension for Windows Server
-- Deploy Threat Detection on SQL Servers
-- System updates should be installed on your machines
-- Vulnerabilities in security configuration on your machines should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-
-## 7.1.1, 7.1.2 and 7.1.3 Separation of Duties
-
-Having only one Azure subscription owner doesn't allow for administrative redundancy. Conversely,
-having too many Azure subscription owners can increase the potential for a breach via a compromised
-owner account. This blueprint helps you maintain an appropriate number of Azure subscription owners
-by assigning [Azure Policy](../../../policy/overview.md) definitions which audit the number of
-owners for Azure subscriptions. Managing subscription owner permissions can help you implement
-appropriate separation of duties.
-
-- There should be more than one owner assigned to your subscription
-- A maximum of 3 owners should be designated for your subscription
-
-## 3.2, 7.2.1, 8.3.1.a and 8.3.1.b Management of Privileged Access Rights
-
-This blueprint helps you restrict and control privileged access rights by assigning [Azure
-Policy](../../../policy/overview.md) definitions to audit external accounts with owner, write and/or
-read permissions and employee accounts with owner and/or write permissions that don't have
-multi-factor authentication enabled. Azure role-based access control (Azure RBAC) helps to manage
-who has access to Azure resources. Understanding where custom Azure RBAC rules are implemented can
-help you verify need and proper implementation, as custom Azure RBAC rules are error prone. This
-blueprint also assigns [Azure Policy](../../../policy/overview.md) definitions to audit use of Azure
-Active Directory authentication for SQL Servers. Using Azure Active Directory authentication
-simplifies permission management and centralizes identity management of database users and other
-Microsoft services.
-
-- External accounts with owner permissions should be removed from your subscription
-- External accounts with write permissions should be removed from your subscription
-- External accounts with read permissions should be removed from your subscription
-- MFA should be enabled on accounts with owner permissions on your subscription
-- MFA should be enabled accounts with write permissions on your subscription
-- MFA should be enabled on accounts with read permissions on your subscription
-- An Azure Active Directory administrator should be provisioned for SQL servers
-- Audit usage of custom RBAC rules
-
-## 8.1.2 and 8.1.5 Least Privilege and Review of User Access Rights
-
-Azure role-based access control (Azure RBAC) helps you manage who has access to resources in
-Azure. Using the Azure portal, you can review who has access to Azure resources and their
-permissions. This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions to audit
-accounts that should be prioritized for review, including deprecated accounts and external accounts
-with elevated permissions.
-
-- Deprecated accounts should be removed from your subscription
-- Deprecated accounts with owner permissions should be removed from your subscription
-- External accounts with owner permissions should be removed from your subscription
-- External accounts with write permissions should be removed from your subscription
-- External accounts with read permissions should be removed from your subscription
-
-## 8.1.3 Removal or Adjustment of Access Rights
-
-Azure role-based access control (Azure RBAC) helps you manage who has access to resources in Azure.
-Using Azure Active Directory and Azure RBAC, you can update user roles to reflect organizational
-changes. When needed, accounts can be blocked from signing in (or removed), which immediately
-removes access rights to Azure resources. This blueprint assigns [Azure
-Policy](../../../policy/overview.md) definitions to audit deprecated accounts that should be
-considered for removal.
-
-- Deprecated accounts should be removed from your subscription
-- Deprecated accounts with owner permissions should be removed from your subscription
-
-## 8.2.3.a,b, 8.2.4.a,b and 8.2.5 Password-based Authentication
-
-This blueprint helps you enforce strong passwords by assigning [Azure
-Policy](../../../policy/overview.md) definitions that audit Windows VMs that don't enforce minimum
-strength and other password requirements. Awareness of VMs in violation of the password strength
-policy helps you take corrective actions to ensure passwords for all VM user accounts are compliant
-with policy.
-
-- \[Preview\]: Audit Windows VMs that do not have a maximum password age of 70 days
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a maximum password age of 70 days
-- \[Preview\]: Audit Windows VMs that do not restrict the minimum password length to 14 characters
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not restrict the minimum password length to 14 characters
-- \[Preview\]: Audit Windows VMs that allow re-use of the previous 24 passwords
-- \[Preview\]: Deploy requirements to audit Windows VMs that allow re-use of the previous 24 passwords
-
-## 10.3 and 10.5.4 Audit Generation
-
-This blueprint helps you ensure system events are logged by assigning [Azure
-Policy](../../../policy/overview.md) definitions that audit log settings on Azure resources.
-Diagnostic logs provide insight into operations that were performed within Azure resources. Azure
-logs rely on synchronized internal clocks to create a time-correlated record of events across
-resources.
-
-- Auditing should be enabled on advanced data security settings on SQL Server
-- Audit diagnostic setting
-- Audit SQL server level Auditing settings
-- Deploy Auditing on SQL servers
-- Storage accounts should be migrated to new Azure Resource Manager resources
-- Virtual machines should be migrated to new Azure Resource Manager resources
-
-## 12.3.6 and 12.3.7 Information Security
-
-This blueprint helps you manage and control your network by assigning [Azure
-Policy](../../../policy/overview.md) definitions that audit the acceptable network locations and the
-approved company products allowed for the environment. These are customizable by each company
-through the policy parameters within each of these policies.
-
-- Allowed locations
-- Allowed locations for resource groups
-
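As a simplified model of what the two location policies check (this sketch is illustrative only, not the Azure Policy engine), a resource or resource group is compliant when its location appears in the customizable allowed list:

```python
def is_location_allowed(resource_location: str, allowed_locations: list[str]) -> bool:
    """Simplified model of the 'Allowed locations' check: compliant only if
    the resource's location is in the allowed list. Display names like
    'East US' and short names like 'eastus' are normalized by dropping
    spaces and lowercasing (an assumption for this illustration)."""
    allowed = {loc.replace(" ", "").lower() for loc in allowed_locations}
    return resource_location.replace(" ", "").lower() in allowed

print(is_location_allowed("eastus", ["East US", "West Europe"]))  # True
```

The real policy parameters accept Azure location values directly; the normalization here just makes the sketch tolerant of both display and short location names.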
-## Next steps
-
-Now that you've reviewed the control mapping of the PCI-DSS v3.2.1 blueprint, visit the following
-articles to learn about the overview and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [PCI-DSS v3.2.1 blueprint - Overview](./index.md)
-> [PCI-DSS v3.2.1 blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/pci-dss-3.2.1/deploy.md
- Title: Deploy PCI-DSS v3.2.1 blueprint sample
-description: Deploy steps for the Payment Card Industry Data Security Standard v3.2.1 blueprint sample including blueprint artifact parameter details.
Previously updated : 09/08/2021--
-# Deploy the PCI-DSS v3.2.1 blueprint sample
-
-To deploy the Azure Blueprints PCI-DSS v3.2.1 blueprint sample, the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-## Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **PCI-DSS v3.2.1** blueprint sample under _Other Samples_ and select **Use
- this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the PCI-DSS v3.2.1 blueprint
- sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-## Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move
-it away from the PCI-DSS v3.2.1 standard.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the PCI-DSS
- v3.2.1 blueprint sample." Then select **Publish** at the bottom of the page.
-
-## Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are
-provided to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../../concepts/parameters.md#dynamic-parameters) since
- they're defined during the assignment of the blueprint. For a full list of artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-## Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|PCI v3.2.1:2018|Policy Assignment|List of Resource Types |Audit diagnostic setting for selected resource types. By default, all resource types are selected.|
-|Allowed locations|Policy Assignment|List Of Allowed Locations|List of data center locations allowed for any resource to be deployed into. This list is customizable to the desired Azure locations globally. Select locations you wish to allow.|
-|Allowed Locations for resource groups|Policy Assignment |Allowed Location |This policy enables you to restrict the locations your organization can create resource groups in. Use to enforce your geo-compliance requirements.|
-|Deploy Auditing on SQL servers|Policy Assignment|Retention days|Data retention in number of days. Default value is 180 but PCI requires 365.|
-|Deploy Auditing on SQL servers|Policy Assignment|Resource group name for storage account|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region).|
-
-## Next steps
-
-Now that you've reviewed the steps to deploy the PCI-DSS v3.2.1 blueprint sample, visit the
-following articles to learn about the overview and control mapping:
-
-> [!div class="nextstepaction"]
-> [PCI-DSS v3.2.1 blueprint - Overview](./index.md)
-> [PCI-DSS v3.2.1 blueprint - Control mapping](./control-mapping.md)
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/pci-dss-3.2.1/index.md
- Title: PCI-DSS v3.2.1 blueprint sample overview
-description: Overview of the Payment Card Industry Data Security Standard v3.2.1 blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/08/2021--
-# Overview of the PCI-DSS v3.2.1 blueprint sample
-
-The PCI-DSS v3.2.1 blueprint sample is a set of policies that aids in achieving PCI-DSS v3.2.1
-compliance. This blueprint helps customers govern cloud-based environments with PCI-DSS workloads.
-The PCI-DSS blueprint deploys a core set of policies for any Azure-deployed architecture requiring
-this accreditation.
-
-## Control mapping
-
-The control mapping section provides details on policies included within this initiative and how
-these policies help meet various controls defined by PCI-DSS v3.2.1. When assigned to an
-architecture, resources are evaluated by Azure Policy for non-compliance with assigned policies.
-
-After assigning this blueprint, view your Azure environment's level of compliance in the Azure Policy
-Compliance Dashboard.
-
-## Next steps
-
-You've reviewed the overview of the PCI-DSS v3.2.1 blueprint sample. Next, visit the following
-articles to learn about the control mapping and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [PCI-DSS v3.2.1 blueprint - Control mapping](./control-mapping.md)
-> [PCI-DSS v3.2.1 blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/resource-graph-samples.md
Title: Azure Resource Graph sample queries for management groups description: Sample Azure Resource Graph queries for management groups showing use of resource types and tables to access management group details. Previously updated : 02/16/2022 Last updated : 03/08/2022
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 02/15/2022 Last updated : 03/08/2022
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 02/15/2022 Last updated : 03/08/2022
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **IRS1075 September 2016** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[IRS 1075 September 2016 blueprint sample](../../blueprints/samples/irs-1075-sept2016.md).
-
> [!IMPORTANT]
> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-linux.md
Title: Reference - Azure Policy guest configuration baseline for Linux description: Details of the Linux baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 02/16/2022 Last updated : 03/08/2022
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-windows.md
Title: Reference - Azure Policy guest configuration baseline for Windows description: Details of the Windows baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 02/16/2022 Last updated : 03/08/2022
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/15/2022 Last updated : 03/11/2022
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **IRS1075 September 2016** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[IRS 1075 September 2016 blueprint sample](../../blueprints/samples/irs-1075-sept2016.md).
-
> [!IMPORTANT]
> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Policy description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
Title: Supported Azure Resource Manager resource types description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History. Previously updated : 02/16/2022 Last updated : 03/08/2022
For sample queries for this table, see [Resource Graph sample queries for adviso
- microsoft.chaos/targets
- microsoft.chaos/targets/capabilities
+## communitygalleryresources
+
+- microsoft.compute/locations/communitygalleries
+- microsoft.compute/locations/communitygalleries/images
+- microsoft.compute/locations/communitygalleries/images/versions
+
## desktopvirtualizationresources

- microsoft.desktopvirtualization/hostpools/sessionhosts
For sample queries for this table, see [Resource Graph sample queries for kubern
- microsoft.maintenance/applyupdates
- microsoft.maintenance/configurationassignments
+- microsoft.maintenance/maintenanceconfigurations/applyupdates
- microsoft.maintenance/updates
-- microsoft.resources/subscriptions (Subscriptions)
- - Sample query: [Count of subscriptions per management group](../samples/samples-by-category.md#count-of-subscriptions-per-management-group)
- - Sample query: [Key vaults with subscription name](../samples/samples-by-category.md#key-vaults-with-subscription-name)
- - Sample query: [List all management group ancestors for a specified subscription](../samples/samples-by-category.md#list-all-management-group-ancestors-for-a-specified-subscription)
- - Sample query: [List all subscriptions under a specified management group](../samples/samples-by-category.md#list-all-subscriptions-under-a-specified-management-group)
- - Sample query: [Remove columns from results](../samples/samples-by-category.md#remove-columns-from-results)
- - Sample query: [Secure score per management group](../samples/samples-by-category.md#secure-score-per-management-group)
## patchassessmentresources
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.ApiManagement/service (API Management services)
- microsoft.app/containerapps
- microsoft.app/managedenvironments
+- microsoft.app/managedenvironments/certificates
- microsoft.appassessment/migrateprojects
- Microsoft.AppConfiguration/configurationStores (App Configuration)
- Microsoft.AppPlatform/Spring (Azure Spring Cloud)
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.Logic/workflows (Logic apps)
- Microsoft.Logz/monitors (Logz main account)
- Microsoft.Logz/monitors/accounts (Logz sub account)
+- Microsoft.Logz/monitors/metricsSource (Logz metrics data source)
- Microsoft.MachineLearning/commitmentPlans (Machine Learning Studio (classic) web service plans)
- Microsoft.MachineLearning/webServices (Machine Learning Studio (classic) web services)
- Microsoft.MachineLearning/workspaces (Machine Learning Studio (classic) workspaces)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.offazure/serversites
- microsoft.offazure/vmwaresites
- Microsoft.OpenEnergyPlatform/energyServices (Project Oak Forest)
+- microsoft.openlogisticsplatform/applicationmanagers
- microsoft.openlogisticsplatform/applicationworkspaces
- Microsoft.OpenLogisticsPlatform/workspaces (Open Supply Chain Platform)
- microsoft.operationalinsights/clusters
For sample queries for this table, see [Resource Graph sample queries for securi
- microsoft.authorization/roleassignments/providers/assessments/governanceassignments
- microsoft.security/assessments
  - Sample query: [Count healthy, unhealthy, and not applicable resources per recommendation](../samples/samples-by-category.md#count-healthy-unhealthy-and-not-applicable-resources-per-recommendation)
- - Sample query: [List Azure Security Center recommendations](../samples/samples-by-category.md#list-azure-security-center-recommendations)
  - Sample query: [List Container Registry vulnerability assessment results](../samples/samples-by-category.md#list-container-registry-vulnerability-assessment-results)
+ - Sample query: [List Microsoft Defender recommendations](../samples/samples-by-category.md)
  - Sample query: [List Qualys vulnerability assessment results](../samples/samples-by-category.md#list-qualys-vulnerability-assessment-results)
- microsoft.security/assessments/governanceassignments
- microsoft.security/assessments/subassessments
For sample queries for this table, see [Resource Graph sample queries for securi
  - Sample query: [Get specific IoT alert](../samples/samples-by-category.md#get-specific-iot-alert)
- microsoft.security/locations/alerts (Security Alerts)
- microsoft.security/pricings
- - Sample query: [Show Azure Defender pricing tier per subscription](../samples/samples-by-category.md#show-azure-defender-pricing-tier-per-subscription)
+ - Sample query: [Show Defender for Cloud plan pricing tier per subscription](../samples/samples-by-category.md)
- microsoft.security/regulatorycompliancestandards
  - Sample query: [Regulatory compliance state per compliance standard](../samples/samples-by-category.md#regulatory-compliance-state-per-compliance-standard)
- microsoft.security/regulatorycompliancestandards/regulatorycompliancecontrols
For sample queries for this table, see [Resource Graph sample queries for servic
  - Sample query: [All active Service Health events](../samples/samples-by-category.md#all-active-service-health-events)
  - Sample query: [All active service issue events](../samples/samples-by-category.md#all-active-service-issue-events)
+## spotresources
+
+- microsoft.compute/skuspotevictionrate/location
+- microsoft.compute/skuspotpricehistory/ostype/location
+
## workloadmonitorresources

- microsoft.workloadmonitor/monitors
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource-Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 02/16/2022 Last updated : 03/08/2022
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [azure-resource-graph-samples-cat-azure-policy-guest-configuration](../../../../includes/resource-graph/samples/bycat/azure-policy-guest-configuration.md)]
-## Azure Security Center
-
## Azure Service Health

[!INCLUDE [azure-resource-graph-samples-cat-azure-service-health](../../../../includes/resource-graph/samples/bycat/azure-service-health.md)]
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [azure-resource-graph-samples-cat-management-groups](../../../../includes/resource-graph/samples/bycat/management-groups.md)]
+## Microsoft Defender
+
+
## Networking

[!INCLUDE [azure-resource-graph-samples-cat-networking](../../../../includes/resource-graph/samples/bycat/networking.md)]
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md
Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource-Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 02/16/2022 Last updated : 03/08/2022
hdinsight Apache Hadoop Hive Java Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-hive-java-udf.md
This problem may be caused by the line endings in the Python file. Many Windows
You can use the following PowerShell statements to remove the CR characters before uploading the file to HDInsight:

```PowerShell
-# Set $original_file to the python file path
+# Set $original_file to the Python file path
$text = [IO.File]::ReadAllText($original_file) -replace "`r`n", "`n"
[IO.File]::WriteAllText($original_file, $text)
```
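If you'd rather do the same normalization in Python, here's a minimal sketch; the helper names and the in-place rewrite approach are illustrative, not part of the article:

```python
import io

def normalize_line_endings(text):
    """Replace Windows-style CRLF line endings with Unix-style LF."""
    return text.replace("\r\n", "\n")

def normalize_file(path):
    """Rewrite a text file in place with LF line endings.

    newline="" disables Python's universal-newline translation so the
    CR characters are visible to the replace above.
    """
    with io.open(path, "r", encoding="utf-8", newline="") as f:
        data = f.read()
    with io.open(path, "w", encoding="utf-8", newline="") as f:
        f.write(normalize_line_endings(data))
```

As with the PowerShell version, run this against the local copy of the file before uploading it to the cluster.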
hdinsight Python Udf Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/python-udf-hdinsight.md
In the commands below, replace `sshuser` with the actual username if different.
ssh sshuser@mycluster-ssh.azurehdinsight.net
```
-3. From the SSH session, add the python files uploaded previously to the storage for the cluster.
+3. From the SSH session, add the Python files uploaded previously to the storage for the cluster.
```bash
hdfs dfs -put hiveudf.py /hiveudf.py
```
In the commands below, replace `sshuser` with the actual username if different.
ssh sshuser@mycluster-ssh.azurehdinsight.net
```
-3. From the SSH session, add the python files uploaded previously to the storage for the cluster.
+3. From the SSH session, add the Python files uploaded previously to the storage for the cluster.
```bash
hdfs dfs -put pigudf.py /pigudf.py
```
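The `hiveudf.py` and `pigudf.py` scripts uploaded above are streaming UDFs: Hive's `TRANSFORM` clause (and Pig's `STREAM`) pipes rows to the script's stdin as delimited text and reads results back from stdout. A minimal sketch of such a script, using a hypothetical two-column layout rather than the article's actual code:

```python
import hashlib
import sys

def transform(line):
    """Map one tab-separated input row to one output row.

    Hypothetical layout: field 1 is an ID passed through unchanged,
    field 2 is a value replaced with its MD5 digest.
    """
    key, value = line.rstrip("\n").split("\t", 1)
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return "{}\t{}".format(key, digest)

def main(stream=sys.stdin, out=sys.stdout):
    # Hive/Pig feed one row per line; skip blank lines defensively.
    for row in stream:
        if row.strip():
            out.write(transform(row) + "\n")

if __name__ == "__main__":
    main()
```

From Hive, a script like this would be invoked with something along the lines of `SELECT TRANSFORM (id, val) USING 'python hiveudf.py' AS (id string, hash string) FROM ...;` (illustrative syntax; check the article's own query for the exact form).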
hdinsight Hdinsight Analyze Twitter Data Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-analyze-twitter-data-linux.md
Twitter allows you to retrieve the data for each tweet as a JavaScript Object No
### Create a Twitter application
-1. From a web browser, sign in to [https://developer.twitter.com/apps/](https://developer.twitter.com/apps/). Select the **Sign-up now** link if you don't have a Twitter account.
+1. From a web browser, sign in to [https://developer.twitter.com](https://developer.twitter.com). Select the **Sign-up now** link if you don't have a Twitter account.
2. Select **Create New App**.
These commands store the data in a location that all nodes in the cluster can ac
You've learned how to transform an unstructured JSON dataset into a structured [Apache Hive](https://hive.apache.org/) table. To learn more about Hive on HDInsight, see the following documents: * [Get started with HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md)
-* [Analyze flight delay data using HDInsight](./interactive-query/interactive-query-tutorial-analyze-flight-data.md)
+* [Analyze flight delay data using HDInsight](./interactive-query/interactive-query-tutorial-analyze-flight-data.md)
hdinsight Hdinsight For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-for-vscode.md
Using the PySpark interactive command to submit the queries, follow these steps:
:::image type="content" source="./media/hdinsight-for-vscode/select-interpreter-to-start-jupyter-server.png" alt-text="select interpreter to start jupyter server":::
-8. Select the python option below.
+8. Select the Python option below.
:::image type="content" source="./media/hdinsight-for-vscode/choose-the-below-option.png" alt-text="choose the below option":::
hdinsight Hdinsight Hadoop Linux Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-linux-information.md
There are various ways to access data from outside the HDInsight cluster. The
If using __Azure Blob storage__, see the following links for ways that you can access your data: * [Azure CLI](/cli/azure/install-az-cli2): Command-Line interface commands for working with Azure. After installing, use the `az storage` command for help on using storage, or `az storage blob` for blob-specific commands.
-* [blobxfer.py](https://github.com/Azure/blobxfer): A python script for working with blobs in Azure Storage.
+* [blobxfer.py](https://github.com/Azure/blobxfer): A Python script for working with blobs in Azure Storage.
* Various SDKs: * [Java](https://github.com/Azure/azure-sdk-for-java)
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 12/27/2021 Last updated : 03/10/2022 # Archived release notes
Last updated 12/27/2021
Azure HDInsight is one of the most popular services among enterprise customers for open-source Apache Hadoop and Apache Spark analytics on Azure.
+## Release date: 12/27/2021
+
+This release applies to HDInsight 4.0. HDInsight releases are made available to all regions over several days. The release date here indicates the first region's release date. If you don't see the changes below, wait for the release to go live in your region over the next several days.
+
+The OS versions for this release are:
+- HDInsight 4.0: Ubuntu 18.04.5 LTS
+
+The HDInsight 4.0 image has been updated to mitigate the Log4j vulnerability as described in [Microsoft's Response to CVE-2021-44228 Apache Log4j 2](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/).
+
+> [!Note]
+> * Any HDInsight 4.0 clusters created after 27 Dec 2021 00:00 UTC are created with an updated version of the image that mitigates the Log4j vulnerabilities, so customers don't need to patch or reboot these clusters.
+> * For new HDInsight 4.0 clusters created between 16 Dec 2021 01:15 UTC and 27 Dec 2021 00:00 UTC, and for HDInsight 3.6 clusters or clusters in pinned subscriptions created after 16 Dec 2021, the patch is applied automatically within the hour in which the cluster is created. However, customers must then reboot their nodes for the patching to complete (except for Kafka management nodes, which are rebooted automatically).
+
## Release date: 07/27/2021

This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days.
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-94901 | [HBASE-19285](https://issues.apache.org/jira/browse/HBASE-19285) | Add per-table latency histograms | | BUG-94908 | [ATLAS-1921](https://issues.apache.org/jira/browse/ATLAS-1921) | UI: Search using entity and trait attributes: UI doesn't perform range check and allows providing out of bounds values for integral and float data types. | | BUG-95086 | [RANGER-1953](https://issues.apache.org/jira/browse/RANGER-1953) | improvement on user-group page listing |
-| BUG-95193 | [SLIDER-1252](https://issues.apache.org/jira/browse/SLIDER-1252) | Slider agent fails with SSL validation errors with python 2.7.5-58 |
+| BUG-95193 | [SLIDER-1252](https://issues.apache.org/jira/browse/SLIDER-1252) | Slider agent fails with SSL validation errors with Python 2.7.5-58 |
| BUG-95314 | [YARN-7699](https://issues.apache.org/jira/browse/YARN-7699) | queueUsagePercentage is coming as INF for getApp REST api call | | BUG-95315 | [HBASE-13947](https://issues.apache.org/jira/browse/HBASE-13947), [HBASE-14517](https://issues.apache.org/jira/browse/HBASE-14517), [HBASE-17931](https://issues.apache.org/jira/browse/HBASE-17931) | Assign system tables to servers with highest version | | BUG-95392 | [ATLAS-2421](https://issues.apache.org/jira/browse/ATLAS-2421) | Notification updates to support V2 data structures |
hdinsight Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/rest-proxy.md
The steps below use the Azure portal. For an example using Azure CLI, see [Creat
## Client application sample
-You can use the python code below to interact with the REST proxy on your Kafka cluster. To use the code sample, follow these steps:
+You can use the Python code below to interact with the REST proxy on your Kafka cluster. To use the code sample, follow these steps:
1. Save the sample code on a machine with Python installed.
-1. Install required python dependencies by executing `pip3 install msal`.
+1. Install required Python dependencies by executing `pip3 install msal`.
1. Modify the code section **Configure these properties** and update the following properties for your environment: |Property |Description |
You can use the python code below to interact with the REST proxy on your Kafka
|Client Secret|The secret for the application that you registered in the security group.|
|Kafkarest_endpoint|Get this value from the **Properties** tab in the cluster overview as described in the [deployment section](#create-a-kafka-cluster-with-rest-proxy-enabled). It should be in the following format – `https://<clustername>-kafkarest.azurehdinsight.net`|
-1. From the command line, execute the python file by executing `sudo python3 <filename.py>`
+1. From the command line, execute the Python file by executing `sudo python3 <filename.py>`
This code does the following action:
This code does the following action:
For more information about getting OAuth tokens in Python, see [Python AuthenticationContext class](/python/api/adal/adal.authentication_context.authenticationcontext). You might see a delay while `topics` that aren't created or deleted through the Kafka REST proxy are reflected there. This delay is because of cache refresh. The **value** field of the Producer API has been enhanced. Now, it accepts JSON objects and any serialized form.

```python
-#Required python packages
+#Required Python packages
#pip3 install msal

import json
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
hdinsight Apache Spark Jupyter Notebook Use External Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-notebook-use-external-packages.md
In this article, you'll learn how to use the [spark-csv](https://search.maven.or
### Tools and extensions
-* [Use external python packages with Jupyter Notebooks in Apache Spark clusters on HDInsight Linux](apache-spark-python-package-installation.md)
+* [Use external Python packages with Jupyter Notebooks in Apache Spark clusters on HDInsight Linux](apache-spark-python-package-installation.md)
* [Use HDInsight Tools Plugin for IntelliJ IDEA to create and submit Spark Scala applications](apache-spark-intellij-tool-plugin.md) * [Use HDInsight Tools Plugin for IntelliJ IDEA to debug Apache Spark applications remotely](apache-spark-intellij-tool-plugin-debug-jobs-remotely.md) * [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)
hdinsight Apache Spark Python Package Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-python-package-installation.md
Title: Script action for Python packages with Jupyter on Azure HDInsight
-description: Step-by-step instructions on how to use script action to configure Jupyter Notebooks available with HDInsight Spark clusters to use external python packages.
+description: Step-by-step instructions on how to use script action to configure Jupyter Notebooks available with HDInsight Spark clusters to use external Python packages.
export PYSPARK3_PYTHON=${PYSPARK_PYTHON:-/usr/bin/miniforge/envs/py38/bin/python
The HDInsight cluster depends on the built-in Python environments, both Python 2.7 and Python 3.5. Directly installing custom packages in those default built-in environments may cause unexpected library version changes and break the cluster further. To safely install custom external Python packages for your Spark applications, follow the steps below.
-1. Create Python virtual environment using conda. A virtual environment provides an isolated space for your projects without breaking others. When creating the Python virtual environment, you can specify python version that you want to use. You still need to create virtual environment even though you would like to use Python 2.7 and 3.5. This requirement is to make sure the cluster's default environment not getting broke. Run script actions on your cluster for all nodes with below script to create a Python virtual environment.
+1. Create a Python virtual environment using conda. A virtual environment provides an isolated space for your projects without breaking others. When you create the Python virtual environment, you can specify the Python version that you want to use. You still need to create a virtual environment even if you want to use Python 2.7 or 3.5. This requirement makes sure that the cluster's default environment doesn't get broken. Run script actions on your cluster for all nodes with the script below to create a Python virtual environment.
- `--prefix` specifies a path where a conda virtual environment lives. There are several configs that need to be changed further based on the path specified here. In this example, we use py35new, as the cluster has an existing virtual environment called py35 already.
- `python=` specifies the Python version for the virtual environment. In this example, we use version 3.5, the same version as the cluster's built-in one. You can also use other Python versions to create the virtual environment.
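After the script action runs, it can help to confirm from a notebook or job which interpreter a session actually picked up. A minimal, HDInsight-agnostic diagnostic sketch:

```python
import platform
import sys

def environment_summary():
    """Return the interpreter path and version for the current session.

    Useful for verifying that Spark picked up the intended conda
    virtual environment (for example, one under .../envs/py35new).
    """
    return {
        "executable": sys.executable,
        "version": platform.python_version(),
    }

summary = environment_summary()
```

If the reported `executable` doesn't point into the virtual environment you created, revisit the `PYSPARK_PYTHON` configuration described in this article.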
hdinsight Apache Spark Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-settings.md
Spark clusters in HDInsight include a number of components by default. Each of t
|Component |Description|
|---|---|
|Spark Core|Spark Core, Spark SQL, Spark streaming APIs, GraphX, and Apache Spark MLlib.|
-|Anaconda|A python package manager.|
+|Anaconda|A Python package manager.|
|Apache Livy|The Apache Spark REST API, used to submit remote jobs to an HDInsight Spark cluster.|
|Jupyter Notebooks and Apache Zeppelin Notebooks|Interactive browser-based UI for interacting with your Spark cluster.|
|ODBC driver|Connects Spark clusters in HDInsight to business intelligence (BI) tools such as Microsoft Power BI and Tableau.|
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
iot-develop Concepts Azure Rtos Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-azure-rtos-security-practices.md
Using hardware-based X.509 certificates with TLS mutual authentication and a PKI
**Hardware**: No specific hardware requirements.
-**Azure RTOS**: Azure RTOS TLS provides support for mutual certificate authentication in both TLS Server and Client applications. For more information, see the [Azure RTOS NetX Secure TLS documentation](/netx-secure-tls/chapter1#netx-secure-unique-features).
+**Azure RTOS**: Azure RTOS TLS provides support for mutual certificate authentication in both TLS Server and Client applications. For more information, see the [Azure RTOS NetX Secure TLS documentation](/azure/rtos/netx-duo/netx-secure-tls/chapter1#netx-secure-unique-features).
**Application**: Applications using TLS should always default to mutual certificate authentication whenever possible. Mutual authentication requires TLS clients to have a device certificate. Mutual authentication is an optional TLS feature but is highly recommended when possible.
iot-dps Concepts Device Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-reprovision.md
Depending on the scenario, as a device moves between IoT hubs, it may also be ne
## Reprovisioning policies
-Depending on the scenario, a device usually sends a request to a provisioning service instance on reboot. It also supports a method to manually trigger provisioning on demand. The reprovisioning policy on an enrollment entry determines how the device provisioning service instance handles these provisioning requests. The policy also determines whether device state data should be migrated during reprovisioning. The same policies are available for individual enrollments and enrollment groups:
+Depending on the scenario, a device could send a request to a provisioning service instance on reboot. It also supports a method to manually trigger provisioning on demand. The reprovisioning policy on an enrollment entry determines how the device provisioning service instance handles these provisioning requests. The policy also determines whether device state data should be migrated during reprovisioning. The same policies are available for individual enrollments and enrollment groups:
* **Re-provision and migrate data**: This policy is the default for new enrollment entries. This policy takes action when devices associated with the enrollment entry submit a new request (1). Depending on the enrollment entry configuration, the device may be reassigned to another IoT hub. If the device is changing IoT hubs, the device registration with the initial IoT hub will be removed. The updated device state information from that initial IoT hub will be migrated over to the new IoT hub (2). During migration, the device's status will be reported as **Assigning**.
Depending on the scenario, a device usually sends a request to a provisioning se
> [!NOTE] > DPS will always call the custom allocation webhook regardless of re-provisioning policy in case there is new [ReturnData](how-to-send-additional-data.md) for the device. If the re-provisioning policy is set to **never re-provision**, the webhook will be called but the device will not change its assigned hub.
+When designing your solution and defining your reprovisioning logic, there are a few things to consider. For example:
+
+* How often you expect your devices to restart
+* The [DPS quotas and limits](about-iot-dps.md#quotas-and-limits)
+* Expected deployment time for your fleet (phased rollout vs. all at once)
+* Retry capability implemented in your client code, as described in the [Retry general guidance](/azure/architecture/best-practices/transient-faults) at the Azure Architecture Center
+
+>[!TIP]
+> We recommend not provisioning on every reboot of the device, as this could cause issues when reprovisioning several thousands or millions of devices at once. Instead, you should attempt to [get the device registration state](/rest/api/iot-dps/service/device-registration-state/get) and try to connect to IoT Hub with that information. If that fails, try to reprovision, as the IoT Hub information might have changed. Keep in mind that querying for the registration state counts as a new device registration, so you should consider the [Device registration limit](about-iot-dps.md#quotas-and-limits). Also consider implementing appropriate retry logic, such as exponential back-off with randomization, as described in the [Retry general guidance](/azure/architecture/best-practices/transient-faults).
+>In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device so that it can connect directly to IoT Hub after the first-time provisioning through DPS has occurred. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from IoT Hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios:
+> * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors.
+> * For 429 errors, only retry after the time indicated in the Retry-After header.
+> * For 5xx errors, use exponential back-off, with the first retry at least 5 seconds after the response.
+> * On errors other than 429 and 5xx, re-register through DPS.
+> * Ideally you should also support a [method](../iot-hub/iot-hub-devguide-direct-methods.md) to manually trigger provisioning on demand.
+>
+> We also recommend taking into account the service limits when planning activities like pushing updates to your fleet. For example, updating the fleet all at once could cause all devices to re-register through DPS, which could easily exceed the registration quota limit. For such scenarios, consider planning for device updates in phases instead of updating your entire fleet at the same time.
+
+>[!Note]
+> The [get device registration state API](/rest/api/iot-dps/service/device-registration-state/get) does not currently work for TPM devices (the API surface does not include enough information to authenticate the request).
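The retry rules in the tip above can be sketched in a few lines. This is a minimal illustration only: the helper names (`should_retry_hub`, `next_action`, `backoff_delay`) are hypothetical, not Azure SDK APIs, and your device client would wire them around its own connect and reprovision calls.

```python
import random

# Sketch of the boot-time decision logic described above: reuse the cached
# IoT Hub assignment first, and fall back to DPS reprovisioning only when
# the error indicates the assignment may have changed.

def should_retry_hub(status):
    """Retry the Hub operation only for 429 (throttled) or 5xx errors."""
    return status == 429 or 500 <= status < 600

def next_action(status):
    """After a failed IoT Hub operation: retry, or re-register through DPS."""
    return "retry-hub" if should_retry_hub(status) else "reprovision-via-dps"

def backoff_delay(attempt, base=5.0, cap=300.0):
    """Exponential back-off with randomization; first retry is at least 5 s."""
    delay = min(cap, base * (2 ** attempt))
    return delay + random.uniform(0, delay)  # jitter spreads out the fleet

print(next_action(503))  # 5xx is retryable: "retry-hub"
print(next_action(404))  # hub assignment may have changed: "reprovision-via-dps"
```

The jitter matters at fleet scale: without it, devices that lost power together all retry at the same instant and re-trigger the throttling they are backing off from.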
++ ### Managing backwards compatibility Before September 2018, device assignments to IoT hubs had a sticky behavior. When a device went back through the provisioning process, it would only be assigned back to the same IoT hub.
iot-dps How To Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-reprovision.md
The following steps configure the allocation policy for a device's enrollment:
In order for devices to be reprovisioned based on the configuration changes made in the preceding sections, these devices must request reprovisioning.
-How often a device submits a provisioning request depends on the scenario. However, it is advised to program your devices to send a provisioning request to a provisioning service instance on reboot, and support a [method](../iot-hub/iot-hub-devguide-direct-methods.md) to manually trigger provisioning on demand. Provisioning could also be triggered by setting a [desired property](../iot-hub/iot-hub-devguide-device-twins.md#desired-property-example).
-
-The reprovisioning policy on an enrollment entry determines how the device provisioning service instance handles these provisioning requests, and if device state data should be migrated during reprovisioning. The same policies are available for individual enrollments and enrollment groups:
-
-For example code of sending provisioning requests from a device during a boot sequence, see [Auto-provisioning a simulated device](quick-create-simulated-device-tpm.md).
-
+How often a device submits a provisioning request depends on the scenario. When designing your solution and defining your reprovisioning logic, there are a few things to consider. For example:
+
+* How often you expect your devices to restart
+* The [DPS quotas and limits](about-iot-dps.md#quotas-and-limits)
+* Expected deployment time for your fleet (phased rollout vs. all at once)
+* Retry capability implemented in your client code, as described in the [Retry general guidance](/azure/architecture/best-practices/transient-faults) at the Azure Architecture Center
+
+>[!TIP]
+> We recommend not provisioning on every reboot of the device, as this could cause issues when reprovisioning several thousands or millions of devices at once. Instead, you should attempt to [get the device registration state](/rest/api/iot-dps/service/device-registration-state/get) and try to connect to IoT Hub with that information. If that fails, try to reprovision, as the IoT Hub information might have changed. Keep in mind that querying for the registration state counts as a new device registration, so you should consider the [Device registration limit](about-iot-dps.md#quotas-and-limits). Also consider implementing appropriate retry logic, such as exponential back-off with randomization, as described in the [Retry general guidance](/azure/architecture/best-practices/transient-faults).
+>In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device so that it can connect directly to IoT Hub after the first-time provisioning through DPS has occurred. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from IoT Hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios:
+> * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors.
+> * For 429 errors, only retry after the time indicated in the Retry-After header.
+> * For 5xx errors, use exponential back-off, with the first retry at least 5 seconds after the response.
+> * On errors other than 429 and 5xx, re-register through DPS.
+> * Ideally you should also support a [method](../iot-hub/iot-hub-devguide-direct-methods.md) to manually trigger provisioning on demand.
+>
+> We also recommend taking into account the service limits when planning activities like pushing updates to your fleet. For example, updating the fleet all at once could cause all devices to re-register through DPS, which could easily exceed the registration quota limit. For such scenarios, consider planning for device updates in phases instead of updating your entire fleet at the same time.
+
+>[!Note]
+> The [get device registration state API](/rest/api/iot-dps/service/device-registration-state/get) does not currently work for TPM devices (the API surface does not include enough information to authenticate the request).
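The 429 and 5xx handling described in the tip above can be condensed into a single wait-time helper. This is a hedged sketch, not an Azure SDK API: it assumes a plain dictionary of HTTP response headers, and the function name is illustrative.

```python
def retry_wait_seconds(status, headers, attempt, base=5.0, cap=300.0):
    """Seconds to wait before retrying an IoT Hub operation, per the tip above:
    honor Retry-After for 429, use exponential back-off (first retry at least
    5 s) for 5xx, and return None (re-register through DPS) for anything else.
    """
    if status == 429:
        # Only retry after the time indicated in the Retry-After header.
        return float(headers.get("Retry-After", base))
    if 500 <= status < 600:
        # Exponential back-off; add randomization (jitter) in production.
        return min(cap, base * (2 ** attempt))
    return None  # not retryable: go back through DPS instead

print(retry_wait_seconds(429, {"Retry-After": "12"}, 0))  # 12.0
print(retry_wait_seconds(503, {}, 1))                     # 10.0
```

A `None` result is the cue to fall back to reprovisioning through DPS, and ideally to the on-demand provisioning method mentioned above.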
## Next steps
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
To update and run the provisioning sample with your device information:
pip install azure-iot-device ```
-6. Run the python sample code in *_provision_symmetric_key.py_*.
+6. Run the Python sample code in *_provision_symmetric_key.py_*.
```cmd python provision_symmetric_key.py
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
In this section, you'll configure sample code to use the [Advanced Message Queui
cd azure-iot-sdk-python/provisioning_device_client/samples ```
-2. Using your Python IDE, edit the python script named **provisioning\_device\_client\_sample.py** (replace `{globalServiceEndpoint}` and `{idScope}` to the values that you previously copied). Also, make sure *SECURITY\_DEVICE\_TYPE* is set to `ProvisioningSecurityDeviceType.TPM`.
+2. Using your Python IDE, edit the Python script named **provisioning\_device\_client\_sample.py** (replace `{globalServiceEndpoint}` and `{idScope}` with the values that you previously copied). Also, make sure *SECURITY\_DEVICE\_TYPE* is set to `ProvisioningSecurityDeviceType.TPM`.
```python GLOBAL_PROV_URI = "{globalServiceEndpoint}"
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
Make sure that the user **iotedge** has read permissions for the directory holdi
```bash sudo update-ca-certificates ```
+ This command should output that one certificate was added to /etc/ssl/certs.
+ * **IoT Edge for Linux on Windows (EFLOW)** ```bash
Make sure that the user **iotedge** has read permissions for the directory holdi
For more information, check [CBL-Mariner SSL CA certificates management](https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/security/ca-certificates.md).
- This command should output that one certificate was added to /etc/ssl/certs.
- 1. Open the IoT Edge configuration file. ```bash
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Unless you're developing your module in C, you also need the Python-based [Azure
> [!NOTE] >
-> If you have multiple Python including pre-installed python 2.7 (for example, on Ubuntu or macOS), make sure you are using the correct `pip` or `pip3` to install **iotedgehubdev**
+> If you have multiple Python installations, including pre-installed Python 2.7 (for example, on Ubuntu or macOS), make sure you are using the correct `pip` or `pip3` to install **iotedgehubdev**.
To test your module on a device, you'll need an active IoT hub with at least one IoT Edge device. To use your computer as an IoT Edge device, follow the steps in the quickstart for [Linux](quickstart-linux.md) or [Windows](quickstart.md). If you are running IoT Edge daemon on your development machine, you might need to stop EdgeHub and EdgeAgent before you move to next step.
iot-edge Tutorial Machine Learning Edge 01 Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-01-intro.md
In this document, we use the following set of tools:
* An Azure IoT hub for data capture
-* Azure Notebooks as our main front end for data preparation and machine learning experimentation. Running python code in a notebook on a subset of the sample data is a great way to get fast iterative and interactive turnaround during data preparation. Jupyter notebooks can also be used to prepare scripts to run at scale in a compute backend.
+* Azure Notebooks as our main front end for data preparation and machine learning experimentation. Running Python code in a notebook on a subset of the sample data is a great way to get fast iterative and interactive turnaround during data preparation. Jupyter notebooks can also be used to prepare scripts to run at scale in a compute backend.
* Azure Machine Learning as a backend for machine learning at scale and for machine learning image generation. We drive the Azure Machine Learning backend using scripts prepared and tested in Jupyter notebooks.
iot-hub Iot Hub Device Management Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-device-management-visual-studio.md
Title: Azure IoT device management w/ Visual Studio Cloud Explorer description: Use the Cloud Explorer for Visual Studio for Azure IoT Hub device management, featuring the Direct methods and the Twin's desired properties management options.-+ Last updated 08/20/2019-++ # Use Cloud Explorer for Visual Studio for Azure IoT Hub device management
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
The possible status codes are:
| 429 | Too many requests (throttled), as per [IoT Hub throttling](iot-hub-devguide-quotas-throttling.md) | | 5** | Server errors |
-The python code snippet below, demonstrates the twin reported properties update process over MQTT (using Paho MQTT client):
+The Python code snippet below demonstrates the twin reported properties update process over MQTT (using the Paho MQTT client):
```python from paho.mqtt import client as mqtt
iot-hub Iot Hub Visual Studio Cloud Device Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-visual-studio-cloud-device-messaging.md
Title: Use VS Cloud Explorer to manage Azure IoT Hub device messaging description: Learn how to use Cloud Explorer for Visual Studio to monitor device to cloud messages and send cloud to device messages in Azure IoT Hub.-+ Last updated 08/20/2019-++ # Use Cloud Explorer for Visual Studio to send and receive messages between your device and IoT Hub
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
iot-hub Quickstart Control Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-control-device.md
Previously updated : 07/26/2021 Last updated : 02/25/2022 zone_pivot_groups: iot-hub-set1 #Customer intent: As a developer new to IoT Hub, I need to see how to use a service application to control a device connected to the hub.
iot-hub Quickstart Send Telemetry Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-send-telemetry-cli.md
Previously updated : 03/08/2022 Last updated : 02/23/2022 # Quickstart: Send telemetry from a device to an IoT hub and monitor it with the Azure CLI
Last updated 03/08/2022
IoT Hub is an Azure service that enables you to ingest high volumes of telemetry from your IoT devices into the cloud for storage or processing. In this quickstart, you use the Azure CLI to create an IoT Hub and a simulated device, send device telemetry to the hub, and send a cloud-to-device message. You also use the Azure portal to visualize device metrics. This is a basic workflow for developers who use the CLI to interact with an IoT Hub application. ## Prerequisites+ - If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Azure CLI. You can run all commands in this quickstart using the Azure Cloud Shell, an interactive CLI shell that runs in your browser. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this quickstart requires Azure CLI version 2.0.76 or later. Run az --version to find the version. To install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+- Azure CLI. You can run all commands in this quickstart using the Azure Cloud Shell, an interactive CLI shell that runs in your browser. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this quickstart requires Azure CLI version 2.0.76 or later. Run `az --version` to find the version. To install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Sign in to the Azure portal
-Sign in to the Azure portal at https://portal.azure.com.
-Regardless whether you run the CLI locally or in the Cloud Shell, keep the portal open in your browser. You use it later in this quickstart.
+Sign in to the [Azure portal](https://portal.azure.com).
+
+Regardless of whether you run the CLI locally or in the Cloud Shell, keep the portal open in your browser. You use it later in this quickstart.
## Launch the Cloud Shell+ In this section, you launch an instance of the Azure Cloud Shell. If you use the CLI locally, skip to the section [Prepare two CLI sessions](#prepare-two-cli-sessions). To launch the Cloud Shell:
-1. Select the **Cloud Shell** button on the top-right menu bar in the Azure portal.
+1. Select the **Cloud Shell** button on the top-right menu bar in the Azure portal.
![Azure portal Cloud Shell button](media/quickstart-send-telemetry-cli/cloud-shell-button.png) > [!NOTE]
- > If this is the first time you've used the Cloud Shell, it prompts you to create storage, which is required to use the Cloud Shell. Select a subscription to create a storage account and Microsoft Azure Files share.
+ > If this is the first time you've used the Cloud Shell, it prompts you to create storage, which is required to use the Cloud Shell. Select a subscription to create a storage account and Microsoft Azure Files share.
-2. Select your preferred CLI environment in the **Select environment** dropdown. This quickstart uses the **Bash** environment. All the following CLI commands work in the PowerShell environment too.
+2. Select your preferred CLI environment in the **Select environment** dropdown. This quickstart uses the **Bash** environment. All the following CLI commands work in the PowerShell environment too.
![Select CLI environment](media/quickstart-send-telemetry-cli/cloud-shell-environment.png)
In this section, you prepare two Azure CLI sessions. If you're using the Cloud S
Azure CLI requires you to be logged into your Azure account. All communication between your Azure CLI shell session and your IoT hub is authenticated and encrypted. As a result, this quickstart does not need additional authentication that you'd use with a real device, such as a connection string.
-* Run the [az extension add](/cli/azure/extension#az_extension_add) command to add the Microsoft Azure IoT Extension for Azure CLI to your CLI shell. The IOT Extension adds IoT Hub, IoT Edge, and IoT Device Provisioning Service (DPS) specific commands to Azure CLI.
+- Run the [az extension add](/cli/azure/extension#az_extension_add) command to add the Microsoft Azure IoT Extension for Azure CLI to your CLI shell. The IoT extension adds IoT Hub, IoT Edge, and IoT Device Provisioning Service (DPS) specific commands to the Azure CLI.
```azurecli az extension add --name azure-iot ```
-
- After you install the Azure IOT extension, you don't need to install it again in any Cloud Shell session.
+
+ After you install the Azure IoT extension, you don't need to install it again in any Cloud Shell session.
[!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)]
-* Open a second CLI session. If you're using the Cloud Shell, select **Open new session**. If you're using the CLI locally, open a second instance.
+- Open a second CLI session. If you're using the Cloud Shell, select **Open new session**. If you're using the CLI locally, open a second instance.
>[!div class="mx-imgBorder"] >![Open new Cloud Shell session](media/quickstart-send-telemetry-cli/cloud-shell-new-session.png)
-## Create an IoT Hub
-In this section, you use the Azure CLI to create a resource group and an IoT Hub. An Azure resource group is a logical container into which Azure resources are deployed and managed. An IoT Hub acts as a central message hub for bi-directional communication between your IoT application and the devices.
+## Create an IoT hub
+
+In this section, you use the Azure CLI to create a resource group and an IoT hub. An Azure resource group is a logical container into which Azure resources are deployed and managed. An IoT hub acts as a central message hub for bi-directional communication between your IoT application and the devices.
> [!TIP]
-> Optionally, you can create an Azure resource group, an IoT Hub, and other resources by using the [Azure portal](iot-hub-create-through-portal.md), [Visual Studio Code](iot-hub-create-use-iot-toolkit.md), or other programmatic methods.
+> Optionally, you can create an Azure resource group, an IoT hub, and other resources by using the [Azure portal](iot-hub-create-through-portal.md), [Visual Studio Code](iot-hub-create-use-iot-toolkit.md), or other programmatic methods.
-1. Run the [az group create](/cli/azure/group#az_group_create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *eastus* location.
+1. Run the [az group create](/cli/azure/group#az_group_create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *eastus* location.
```azurecli az group create --name MyResourceGroup --location eastus ```
-1. Run the [az iot hub create](/cli/azure/iot/hub#az_iot_hub_create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
+1. Run the [az iot hub create](/cli/azure/iot/hub#az_iot_hub_create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
- *YourIotHubName*. Replace this placeholder name and the curly brackets with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your IoT hub name.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your IoT hub name.
```azurecli az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} ``` ## Create and monitor a device+ In this section, you create a simulated device in the first CLI session. The simulated device sends device telemetry to your IoT hub. In the second CLI session, you monitor events and telemetry, and send a cloud-to-device message to the simulated device. To create and start a simulated device:
-1. Run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az_iot_hub_device_identity_create) command in the first CLI session. This creates the simulated device identity.
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+1. Run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az_iot_hub_device_identity_create) command in the first CLI session. This creates the simulated device identity.
- *simDevice*. You can use this name directly for the simulated device in the rest of this quickstart. Optionally, use a different name.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+
+ *simDevice*. You can use this name directly for the simulated device in the rest of this quickstart. Optionally, use a different name.
```azurecli az iot hub device-identity create --device-id simDevice --hub-name {YourIoTHubName}
To create and start a simulated device:
1. Run the [az iot device simulate](/cli/azure/iot/device#az_iot_device_simulate) command in the first CLI session. This starts the simulated device. The device sends telemetry to your IoT hub and receives messages from it.
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
```azurecli az iot device simulate -d simDevice -n {YourIoTHubName} ``` To monitor a device:+ 1. In the second CLI session, run the [az iot hub monitor-events](/cli/azure/iot/hub#az_iot_hub_monitor_events) command. This starts monitoring the simulated device. The output shows telemetry that the simulated device sends to the IoT hub.
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
```azurecli az iot hub monitor-events --output table --hub-name {YourIoTHubName}
To monitor a device:
![Cloud Shell monitor events](media/quickstart-send-telemetry-cli/cloud-shell-monitor.png)
-1. After you monitor the simulated device in the second CLI session, press Ctrl+C to stop monitoring.
+1. After you monitor the simulated device in the second CLI session, press Ctrl+C to stop monitoring.
## Use the CLI to send a message+ In this section, you use the second CLI session to send a message to the simulated device. 1. In the first CLI session, confirm that the simulated device is running. If the device has stopped, run the following command to start it:
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
```azurecli az iot device simulate -d simDevice -n {YourIoTHubName}
In this section, you use the second CLI session to send a message to the simulat
1. In the second CLI session, run the [az iot device c2d-message send](/cli/azure/iot/device/c2d-message#az_iot_device_c2d-message-send) command. This sends a cloud-to-device message from your IoT hub to the simulated device. The message includes a string and two key-value pairs.
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
```azurecli az iot device c2d-message send -d simDevice --data "Hello World" --props "key0=value0;key1=value1" -n {YourIoTHubName} ```
- Optionally, you can send cloud-to-device messages by using the Azure portal. To do this, browse to the overview page for your IoT Hub, select **IoT Devices**, select the simulated device, and select **Message to Device**.
-1. In the first CLI session, confirm that the simulated device received the message.
+ Optionally, you can send cloud-to-device messages by using the Azure portal. To do this, browse to the overview page for your IoT Hub, select **IoT Devices**, select the simulated device, and select **Message to Device**.
+
+1. In the first CLI session, confirm that the simulated device received the message.
![Cloud Shell cloud-to-device message](media/quickstart-send-telemetry-cli/cloud-shell-receive-message.png) 1. After you view the message, close the second CLI session. Keep the first CLI session open. You use it to clean up resources in a later step. ## View messaging metrics in the portal
-The Azure portal enables you to manage all aspects of your IoT Hub and devices. In a typical IoT Hub application that ingests telemetry from devices, you might want to monitor devices or view metrics on device telemetry.
+
+The Azure portal enables you to manage all aspects of your IoT hub and devices. In a typical IoT Hub application that ingests telemetry from devices, you might want to monitor devices or view metrics on device telemetry.
To visualize messaging metrics in the Azure portal:
-1. In the left navigation menu on the portal, select **All Resources**. This lists all resources in your subscription, including the IoT hub you created.
+
+1. In the left navigation menu on the portal, select **All Resources**. This lists all resources in your subscription, including the IoT hub you created.
1. Select the link on the IoT hub you created. The portal displays the overview page for the hub.
-1. Select **Metrics** in the left pane of your IoT Hub.
+1. Select **Metrics** in the left pane of your IoT Hub.
![IoT Hub messaging metrics](media/quickstart-send-telemetry-cli/iot-hub-portal-metrics.png)
-1. Enter your IoT hub name in **Scope**.
+1. In the **Scope** field, enter your IoT hub name.
-2. Select *Iot Hub Standard Metrics* in **Metric Namespace**.
+1. In the **Metric Namespace** field, select *Iot Hub Standard Metrics*.
-3. Select *Total number of messages used* in **Metric**.
+1. In the **Metric** field, select *Total number of messages used*.
-4. Hover your mouse pointer over the area of the timeline in which your device sent messages. The total number of messages at a point in time appears in the lower left corner of the timeline.
+1. Hover your mouse pointer over the area of the timeline in which your device sent messages. The total number of messages at a point in time appears in the lower left corner of the timeline.
![View Azure IoT Hub metrics](media/quickstart-send-telemetry-cli/iot-hub-portal-view-metrics.png)
-5. Optionally, use the **Metric** dropdown to display other metrics on your simulated device. For example, *C2d message deliveries completed* or *Total devices (preview)*.
+1. Optionally, use the **Metric** dropdown to display other metrics on your simulated device. For example, *C2d message deliveries completed* or *Total devices (preview)*.
## Clean up resources+ If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete them.
-If you continue to the next recommended article, you can keep the resources you've already created and reuse them.
+If you continue to the next recommended article, you can keep the resources you've already created and reuse them.
> [!IMPORTANT]
-> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
To delete a resource group by name:+ 1. Run the [az group delete](/cli/azure/group#az_group_delete) command. This removes the resource group, the IoT Hub, and the device registration you created. ```azurecli az group delete --name MyResourceGroup ```+ 1. Run the [az group list](/cli/azure/group#az_group_list) command to confirm the resource group is deleted. ```azurecli
To delete a resource group by name:
``` ## Next steps+ In this quickstart, you used the Azure CLI to create an IoT hub, create a simulated device, send telemetry, monitor telemetry, send a cloud-to-device message, and clean up resources. You used the Azure portal to visualize messaging metrics on your device. If you are a device developer, the suggested next step is to see the telemetry quickstart that uses the Azure IoT Device SDK for C. Optionally, see one of the available Azure IoT Hub telemetry quickstart articles in your preferred language or SDK.
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md
Aliases: <your-key-vault-name>.vault.azure.net
## Limitations and Design Considerations
-> [!NOTE]
-> The number of key vaults with private endpoints enabled per subscription is an adjustable limit. The limit shown below is the default limit. If you would like to request a limit increase for your service, please send an email to azurekeyvault@microsoft.com. We will approve these requests on a case by case basis.
+**Limits**: See [Azure Private Link limits](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#private-link-limits)
-**Pricing**: For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+**Pricing**: See [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-**Limitations**: Private Endpoint for Azure Key Vault is only available in Azure public regions.
-
-**Maximum Number of Private Endpoints per Key Vault**: 64.
-
-**Default Number of Key Vaults with Private Endpoints per Subscription**: 400.
-
-For more, see [Azure Private Link service: Limitations](../../private-link/private-link-service-overview.md#limitations)
+**Limitations**: See [Azure Private Link service: Limitations](../../private-link/private-link-service-overview.md#limitations)
## Next Steps
key-vault Hsm Protected Keys Ncipher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-ncipher.md
The toolset includes:
* A Key Exchange Key (KEK) package that has a name beginning with **BYOK-KEK-pkg-.**
* A Security World package that has a name beginning with **BYOK-SecurityWorld-pkg-.**
-* A python script named **verifykeypackage.py.**
+* A Python script named **verifykeypackage.py.**
* A command-line executable file named **KeyTransferRemote.exe** and associated DLLs.
* A Visual C++ Redistributable Package, named **vcredist_x64.exe.**
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
key-vault Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Key Vault description: Sample Azure Resource Graph queries for Azure Key Vault showing use of resource types and tables to access Azure Key Vault related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
+
+ Title: Manage inbound NAT rules for Azure Load Balancer
+description: In this article, you'll learn how to add and remove an inbound NAT rule in the Azure portal.
++++ Last updated : 03/10/2022++
+# Manage inbound NAT rules for Azure Load Balancer using the Azure portal
+
+An inbound NAT rule is used to forward traffic from a load balancer frontend to one or more instances in the backend pool.
+
+There are two types of inbound NAT rule:
+
+* Single virtual machine - An inbound NAT rule that targets a single virtual machine in the backend pool of the load balancer.
+
+* Multiple virtual machines - An inbound NAT rule that targets multiple virtual machines in the backend pool of the load balancer.
+
+In this article, you'll learn how to add and remove an inbound NAT rule for both types. You'll learn how to change the frontend port allocation in a multiple instance inbound NAT rule.
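The sizing relationship between a multiple-VM rule's frontend port range and its backend pool can be sketched with simple arithmetic. A minimal sketch, assuming each backend instance consumes one frontend port (the values below are illustrative, not service-mandated):

```shell
# Illustrative values; assumes one frontend port per backend instance.
range_start=500
max_backend_instances=1000
range_end=$(( range_start + max_backend_instances - 1 ))
echo "Frontend port range: ${range_start}-${range_end}"
```

In other words, to accommodate more machines in the backend pool, the frontend port range must grow accordingly, which is what the allocation change later in this article does.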
+++
+- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+- A standard public load balancer in your subscription. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md). The load balancer name for the examples in this article is **myLoadBalancer**.
+
+## Add a single VM inbound NAT rule
+
+# [**Portal**](#tab/inbound-nat-rule-portal)
+
+In this example, you'll create an inbound NAT rule to forward port 500 to backend port 443.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Inbound NAT rules** in **Settings**.
+
+5. Select **+ Add** in **Inbound NAT rules** to add the rule.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/add-rule.png" alt-text="Screenshot of the inbound NAT rules page for Azure Load Balancer":::
+
+6. Enter or select the following information in **Add inbound NAT rule**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myInboundNATrule**. |
+ | Type | Select **Azure Virtual Machine**. |
+ | Target virtual machine | Select the virtual machine that you wish to forward the port to. In this example, it's **myVM1**. |
+ | Network IP configuration | Select the IP configuration of the virtual machine. In this example, it's **ipconfig1(10.1.0.4)**. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Frontend Port | Enter **500**. |
+ | Service Tag | Leave the default of **Custom**. |
+ | Backend port | Enter **443**. |
+ | Protocol | Select **TCP**. |
+
+7. Leave the rest of the settings at the defaults and select **Add**.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/add-single-instance-rule.png" alt-text="Screenshot of the create inbound NAT rule page":::
+
+# [**CLI**](#tab/inbound-nat-rule-cli)
+
+In this example, you'll create an inbound NAT rule to forward port 500 to backend port 443.
+
+Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-create) to create the NAT rule.
+
+```azurecli
+ az network lb inbound-nat-rule create \
+ --backend-port 443 \
+ --lb-name myLoadBalancer \
+ --name myInboundNATrule \
+ --protocol Tcp \
+ --resource-group myResourceGroup \
+ --backend-pool-name myBackendPool \
+ --frontend-ip-name myFrontend \
+ --frontend-port 500
+```
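Before running the command, it can help to sanity-check the chosen frontend port locally. This is a hypothetical pre-flight check, not part of the Azure CLI; the 1-65534 bound is an assumption to confirm against current service limits:

```shell
# Hypothetical pre-flight check; the 65534 upper bound is an assumption to verify.
port=500
if [ "$port" -ge 1 ] && [ "$port" -le 65534 ]; then
  echo "frontend port $port is in range"
else
  echo "frontend port $port is out of range" >&2
fi
```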
++
+## Add a multiple VMs inbound NAT rule
+
+# [**Portal**](#tab/inbound-nat-rule-portal)
+
+In this example, you'll create an inbound NAT rule to forward a range of ports starting at port 500 to backend port 443.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Inbound NAT rules** in **Settings**.
+
+5. Select **+ Add** in **Inbound NAT rules** to add the rule.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/add-rule.png" alt-text="Screenshot of the inbound NAT rules page for Azure Load Balancer":::
+
+6. Enter or select the following information in **Add inbound NAT rule**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myInboundNATrule**. |
+ | Type | Select **Backend pool**. |
+ | Target backend pool | Select your backend pool. In this example, it's **myBackendPool**. |
+ | Frontend IP address | Select your frontend IP address. In this example, it's **myFrontend**. |
+ | Frontend port range start | Enter **500**. |
+ | Maximum number of machines in backend pool | Enter **1000**. |
+ | Backend port | Enter **443**. |
+ | Protocol | Select **TCP**. |
+
+7. Leave the rest at the defaults and select **Add**.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/add-inbound-nat-rule.png" alt-text="Screenshot of the add inbound NAT rules page":::
+
+# [**CLI**](#tab/inbound-nat-rule-cli)
+
+In this example, you'll create an inbound NAT rule to forward a range of ports starting at port 500 to backend port 443.
+
+Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-create) to create the NAT rule.
+
+```azurecli
+ az network lb inbound-nat-rule create \
+ --backend-port 443 \
+ --lb-name myLoadBalancer \
+ --name myInboundNATrule \
+ --protocol Tcp \
+ --resource-group myResourceGroup \
+ --backend-pool-name myBackendPool \
+ --frontend-ip-name myFrontend \
+ --frontend-port-range-end 1000 \
+ --frontend-port-range-start 500
+
+```
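The range used above (500 through 1000) bounds how many backend instances the rule can serve. A minimal sketch, assuming one frontend port per instance:

```shell
# One frontend port per backend instance (illustrative calculation).
range_start=500
range_end=1000
max_instances=$(( range_end - range_start + 1 ))
echo "Ports available for backend instances: $max_instances"
```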
+++
+## Change frontend port allocation for a multiple VM rule
+
+# [**Portal**](#tab/inbound-nat-rule-portal)
+
+To accommodate more virtual machines in the backend pool in a multiple instance rule, change the frontend port allocation in the inbound NAT rule. In this example, you'll change the frontend port allocation from 500 to 1000.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Inbound NAT rules** in **Settings**.
+
+5. Select the inbound NAT rule you wish to change. In this example, it's **myInboundNATrule**.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/select-inbound-nat-rule.png" alt-text="Screenshot of inbound NAT rule overview.":::
+
+6. In the properties of the inbound NAT rule, change the value in **Frontend port range start** to **1000**.
+
+7. Select **Save**.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/change-frontend-ports.png" alt-text="Screenshot of inbound NAT rule properties page.":::
+
+# [**CLI**](#tab/inbound-nat-rule-cli)
+
+To accommodate more virtual machines in the backend pool, change the frontend port allocation in the inbound NAT rule. In this example, you'll change the frontend port allocation from 500 to 1000.
+
+Use [az network lb inbound-nat-rule update](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-update) to change the frontend port allocation.
+
+```azurecli
+ az network lb inbound-nat-rule update \
+ --frontend-port-range-start 1000 \
+ --lb-name myLoadBalancer \
+ --name myInboundNATrule \
+ --resource-group myResourceGroup
+
+```
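As a sketch of the effect, each backend instance becomes reachable on its own frontend port within the new range. The per-instance mapping below is illustrative only; the actual port-to-instance assignment is managed by the service and can be viewed in the rule's port mappings:

```shell
# Illustrative only: the real assignment is managed by Azure Load Balancer.
new_start=1000
for instance in 0 1 2; do
  echo "backend instance $instance -> frontend port $(( new_start + instance ))"
done
```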
+++
+## Remove an inbound NAT rule
+
+# [**Portal**](#tab/inbound-nat-rule-portal)
+
+In this example, you'll remove an inbound NAT rule.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Inbound NAT rules** in **Settings**.
+
+5. Select the three dots next to the rule you want to remove.
+
+6. Select **Delete**.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/remove-inbound-nat-rule.png" alt-text="Screenshot of inbound NAT rule removal.":::
+
+# [**CLI**](#tab/inbound-nat-rule-cli)
+
+In this example, you'll remove an inbound NAT rule.
+
+Use [az network lb inbound-nat-rule delete](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-delete) to remove the NAT rule.
+
+```azurecli
+ az network lb inbound-nat-rule delete \
+ --lb-name myLoadBalancer \
+ --name myInboundNATrule \
+ --resource-group myResourceGroup
+```
+++
+## Next steps
+
+In this article, you learned how to manage inbound NAT rules for an Azure Load Balancer.
+
+For more information about Azure Load Balancer, see:
+- [What is Azure Load Balancer?](load-balancer-overview.md)
+- [Frequently asked questions - Azure Load Balancer](load-balancer-faqs.yml)
load-balancer Load Balancer Linux Cli Load Balance Multiple Websites Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-load-balance-multiple-websites-vm.md
ms.devlang: azurecli Previously updated : 04/20/2018 Last updated : 03/04/2022
This Azure CLI script sample creates a virtual network with two virtual machines (VM) that are members of an availability set. A load balancer directs traffic for two separate IP addresses to the two VMs. After running the script, you could deploy web server software to the VMs and host multiple web sites, each with its own IP address.

[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]

## Sample script
+### Run the script
-[!code-azurecli-interactive[main](../../../cli_scripts/load-balancer/load-balance-multiple-web-sites-vm/load-balance-multiple-web-sites-vm.sh "Load balance multiple web sites")]
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the resource group, VM, and all related resources.
```azurecli
-az group delete --name myResourceGroup --yes
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual network, load balancer, and all related resources. Each command in the table links to command specific documentation.
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional networking CLI script samples can be found in the [Azure Networking Overview documentation](../cli-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
+Additional networking CLI script samples can be found in the [Azure Networking Overview documentation](../cli-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
load-balancer Load Balancer Linux Cli Sample Nlb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-nlb.md
ms.devlang: azurecli Previously updated : 04/20/2018 Last updated : 03/04/2022

# Azure CLI script example: Load balance traffic to VMs for high availability
-This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration. After running the script, you will have three virtual machines, joined to an Azure Availability Set, and accessible through an Azure Load Balancer.
-
+This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration. After running the script, you will have three virtual machines, joined to an Azure Availability Set, and accessible through an Azure Load Balancer.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]

## Sample script
-## Clean up deployment
+
+### Run the script
++
+## Clean up resources
-Run the following command to remove the resource group, VM, and all related resources.
```azurecli
-az group delete --name myResourceGroup
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command specific documentation.
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
load-balancer Load Balancer Linux Cli Sample Zonal Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-zonal-frontend.md
ms.devlang: azurecli
Previously updated : 06/14/2018 Last updated : 03/04/2022

# Azure CLI script example: Load balance traffic to VMs within a specific availability zone
-This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration within a specific availability zone. After running the script, you will have three virtual machines in a single availability zones within a region that are accessible through an Azure Standard Load Balancer.
-
+This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration within a specific availability zone. After running the script, you will have three virtual machines in a single availability zone within a region that are accessible through an Azure Standard Load Balancer.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]

## Sample script
-```azurecli-interactive
- #!/bin/bash
-
- # Create a resource group.
- az group create \
- --name myResourceGroup \
- --location westeurope
-
- # Create a virtual network.
- az network vnet create \
- --resource-group myResourceGroup \
- --location westeurope \
- --name myVnet \
- --subnet-name mySubnet
-
- # Create a zonal Standard public IP address.
- az network public-ip create \
- --resource-group myResourceGroup \
- --name myPublicIP \
-    --sku Standard \
- --zone 1
-
- # Create an Azure Load Balancer.
- az network lb create \
- --resource-group myResourceGroup \
- --name myLoadBalancer \
- --public-ip-address myPublicIP \
- --frontend-ip-name myFrontEndPool \
- --backend-pool-name myBackEndPool \
- --sku Standard
-
- # Creates an LB probe on port 80.
- az network lb probe create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myHealthProbe \
- --protocol tcp \
- --port 80
-
- # Creates an LB rule for port 80.
- az network lb rule create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myLoadBalancerRuleWeb \
- --protocol tcp \
- --frontend-port 80 \
- --backend-port 80 \
- --frontend-ip-name myFrontEndPool \
- --backend-pool-name myBackEndPool \
- --probe-name myHealthProbe
-
- # Create three NAT rules for port 22.
- for i in `seq 1 3`; do
- az network lb inbound-nat-rule create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myLoadBalancerRuleSSH$i \
- --protocol tcp \
- --frontend-port 422$i \
- --backend-port 22 \
- --frontend-ip-name myFrontEndPool
- done
-
- # Create a network security group
- az network nsg create \
- --resource-group myResourceGroup \
- --name myNetworkSecurityGroup
-
- # Create a network security group rule for port 22.
- az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNetworkSecurityGroup \
- --name myNetworkSecurityGroupRuleSSH \
- --protocol tcp \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 22 \
- --access allow \
- --priority 1000
-
- # Create a network security group rule for port 80.
- az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNetworkSecurityGroup \
- --name myNetworkSecurityGroupRuleHTTP \
- --protocol tcp \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 80 \
- --access allow \
- --priority 2000
-
- # Create three virtual network cards and associate with public IP address and NSG.
- for i in `seq 1 3`; do
- az network nic create \
- --resource-group myResourceGroup \
- --name myNic$i \
- --vnet-name myVnet \
- --subnet mySubnet \
- --network-security-group myNetworkSecurityGroup \
- --lb-name myLoadBalancer \
- --lb-address-pools myBackEndPool \
- --lb-inbound-nat-rules myLoadBalancerRuleSSH$i
- done
-
-# Create three virtual machines, this creates SSH keys if not present.
-for i in `seq 1 3`; do
- az vm create \
- --resource-group myResourceGroup \
- --name myVM$i \
- --zone 1 \
- --nics myNic$i \
- --image UbuntuLTS \
- --generate-ssh-keys \
- --no-wait
-done
-```
+### Run the script
+
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the resource group, VM, and all related resources.
```azurecli
-az group delete --name myResourceGroup
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command specific documentation.
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
load-balancer Load Balancer Linux Cli Sample Zone Redundant Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-zone-redundant-frontend.md
# Azure CLI script example: Load balance VMs across availability zones
-This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration. After running the script, you will have three virtual machines across all availability zones within a region that are accessible through an Azure Standard Load Balancer.
-
+This Azure CLI script example creates everything needed to run several Ubuntu virtual machines configured in a highly available and load balanced configuration. After running the script, you will have three virtual machines across all availability zones within a region that are accessible through an Azure Standard Load Balancer.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]

## Sample script
-```azurecli-interactive
- #!/bin/bash
-
- # Create a resource group.
- az group create \
- --name myResourceGroup \
- --location westeurope
-
- # Create a virtual network.
- az network vnet create \
- --resource-group myResourceGroup \
- --location westeurope \
- --name myVnet \
- --subnet-name mySubnet
-
- # Create a zonal Standard public IP address.
- az network public-ip create \
- --resource-group myResourceGroup \
- --name myPublicIP \
- --sku Standard
-
- # Create an Azure Load Balancer.
- az network lb create \
- --resource-group myResourceGroup \
- --name myLoadBalancer \
- --public-ip-address myPublicIP \
- --frontend-ip-name myFrontEndPool \
- --backend-pool-name myBackEndPool \
- --sku Standard
-
- # Creates an LB probe on port 80.
- az network lb probe create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myHealthProbe \
- --protocol tcp \
- --port 80
-
- # Creates an LB rule for port 80.
- az network lb rule create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myLoadBalancerRuleWeb \
- --protocol tcp \
- --frontend-port 80 \
- --backend-port 80 \
- --frontend-ip-name myFrontEndPool \
- --backend-pool-name myBackEndPool \
- --probe-name myHealthProbe
-
- # Create three NAT rules for port 22.
- for i in `seq 1 3`; do
- az network lb inbound-nat-rule create \
- --resource-group myResourceGroup \
- --lb-name myLoadBalancer \
- --name myLoadBalancerRuleSSH$i \
- --protocol tcp \
- --frontend-port 422$i \
- --backend-port 22 \
- --frontend-ip-name myFrontEndPool
- done
-
- # Create a network security group
- az network nsg create \
- --resource-group myResourceGroup \
- --name myNetworkSecurityGroup
-
- # Create a network security group rule for port 22.
- az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNetworkSecurityGroup \
- --name myNetworkSecurityGroupRuleSSH \
- --protocol tcp \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 22 \
- --access allow \
- --priority 1000
-
- # Create a network security group rule for port 80.
- az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNetworkSecurityGroup \
- --name myNetworkSecurityGroupRuleHTTP \
- --protocol tcp \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 80 \
- --access allow \
- --priority 2000
-
- # Create three virtual network cards and associate with load balancer and NSG.
- for i in `seq 1 3`; do
- az network nic create \
- --resource-group myResourceGroup \
- --name myNic$i \
- --vnet-name myVnet \
- --subnet mySubnet \
- --network-security-group myNetworkSecurityGroup \
- --lb-name myLoadBalancer \
- --lb-address-pools myBackEndPool \
- --lb-inbound-nat-rules myLoadBalancerRuleSSH$i
- done
-
-# Create three virtual machines, this creates SSH keys if not present.
-for i in `seq 1 3`; do
- az vm create \
- --resource-group myResourceGroup \
- --name myVM$i \
- --zone $i \
- --nics myNic$i \
- --image UbuntuLTS \
- --generate-ssh-keys \
- --no-wait
-done
-```
+### Run the script
+
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the resource group, VM, and all related resources.
```azurecli
-az group delete --name myResourceGroup
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command specific documentation.
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
load-balancer Tutorial Nat Rule Multi Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-nat-rule-multi-instance-portal.md
+
+ Title: "Tutorial: Create a multiple instance inbound NAT rule - Azure portal"
+
+description: This tutorial shows how to configure port forwarding using Azure Load Balancer to create a connection to multiple virtual machines in an Azure virtual network.
++++ Last updated : 03/10/2022+++
+# Tutorial: Create a multiple instance inbound NAT rule using the Azure portal
+
+Inbound NAT rules allow you to connect to virtual machines (VMs) in an Azure virtual network by using an Azure Load Balancer public IP address and port number.
+
+For more information about Azure Load Balancer rules, see [Manage rules for Azure Load Balancer using the Azure portal](manage-rules-how-to.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a virtual network and virtual machines
+> * Create a standard SKU public load balancer with frontend IP, health probe, backend configuration, and load-balancing rule
+> * Create a multiple instance inbound NAT rule
+> * Create a NAT gateway for outbound internet access for the backend pool
+> * Install and configure a web server on the VMs to demonstrate the port forwarding and load-balancing rules
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create virtual network and virtual machines
+
+A virtual network and subnet are required for the resources in this tutorial. In this section, you'll create a virtual network and virtual machines for the later steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+3. In **Virtual machines**, select **+ Create** > **+ Virtual machine**.
+
+4. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> Enter **TutorialLBPF-rg**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM1**. |
+ | Region | Enter **(US) West US 2**. |
+ | Availability options | Select **Availability zone**. |
+ | Availability zone | Enter **1**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Ubuntu Server 20.04 LTS - Gen2**. |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Select a VM size. |
+ | **Administrator account** | |
+ | Authentication type | Select **SSH public key**. |
+ | Username | Enter **azureuser**. |
+ | SSH public key source | Select **Generate new key pair**. |
+ | Key pair name | Enter **myKey**. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+5. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+6. In the **Networking** tab, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **Create new**. </br> Enter **myVNet** in **Name**. </br> In **Address space**, under **Address range**, enter **10.1.0.0/16**. </br> In **Subnets**, under **Subnet name**, enter **myBackendSubnet**. </br> In **Address range**, enter **10.1.0.0/24**. </br> Select **OK**. |
+ | Subnet | Select **myBackendSubnet**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**. |
+ | Configure network security group | Select **Create new**. </br> Enter **myNSG** in **Name**. </br> Select **+ Add an inbound rule** under **Inbound rules**. </br> In **Service**, select **HTTP**. </br> Enter **100** in **Priority**. </br> Enter **myNSGRule** for **Name**. </br> Select **Add**. </br> Select **OK**. |
+
+7. Select the **Review + create** tab, or select the **Review + create** button at the bottom of the page.
+
+8. Select **Create**.
+
+9. At the **Generate new key pair** prompt, select **Download private key and create resource**. Your key file will be downloaded as **myKey.pem**. Make sure you know where the .pem file was downloaded; you'll need the path to the key file in later steps.
+
+10. Follow steps 1 through 8 to create another VM with the following values and all the other settings the same as **myVM1**:
+
+ | Setting | VM 2 |
+ | - | -- |
+ | **Basics** | |
+ | **Instance details** | |
+ | Virtual machine name | **myVM2** |
+ | Availability zone | **2** |
+ | **Administrator account** | |
+ | Authentication type | **SSH public key** |
+ | SSH public key source | Select **Use existing key stored in Azure**. |
+ | Stored Keys | Select **myKey**. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+ | **Networking** | |
+ | **Network interface** | |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**. |
+ | Configure network security group | Select the existing **myNSG** |
+
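Before the key downloaded earlier can be used with SSH, its permissions must be restricted. A minimal sketch (the `touch` below stands in for the downloaded file so the commands run anywhere; in practice myKey.pem already exists in your download location):

```shell
# Stand-in for the downloaded key file; in practice myKey.pem already exists.
touch myKey.pem
# SSH refuses private keys that are readable by group or others.
chmod 400 myKey.pem
# Connect later with: ssh -i myKey.pem azureuser@<vm-public-ip>
```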
+## Create load balancer
+
+You'll create a load balancer in this section. The frontend IP, backend pool, load-balancing, and inbound NAT rules are configured as part of the creation.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. In the **Load balancers** page, select **Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialLBPF-rg**. |
+ | **Instance details** | |
 | Name | Enter **myLoadBalancer**. |
+ | Region | Select **West US 2**. |
+ | SKU | Leave the default **Standard**. |
+ | Type | Select **Public**. |
+ | Tier | Leave the default **Regional**. |
+
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
+
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+
+6. Enter **myFrontend** in **Name**.
+
+7. Select **IPv4** or **IPv6** for the **IP version**.
+
+ > [!NOTE]
+ > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
+
+8. Select **IP address** for the **IP type**.
+
+ > [!NOTE]
+ > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md).
+
+9. Select **Create new** in **Public IP address**.
+
+10. In **Add a public IP address**, enter **myPublicIP** for **Name**.
+
+11. Select **Zone-redundant** in **Availability zone**.
+
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+
+12. Leave the default of **Microsoft Network** for **Routing preference**.
+
+13. Select **OK**.
+
+14. Select **Add**.
+
+15. Select **Next: Backend pools** at the bottom of the page.
+
+16. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+17. Enter or select the following information in **Add backend pool**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myBackendPool**. |
+ | Virtual network | Select **myVNet (TutorialLBPF-rg)**. |
+ | Backend Pool Configuration | Select **NIC**. |
+ | IP version | Select **IPv4**. |
+
+18. Select **+ Add** in **Virtual machines**.
+
+19. Select the checkboxes next to **myVM1** and **myVM2** in **Add virtual machines to backend pool**.
+
+20. Select **Add**.
+
+21. Select **Add**.
+
+22. Select the **Next: Inbound rules** button at the bottom of the page.
+
+23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+
+24. In **Add load balancing rule**, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
 | Name | Enter **myHTTPRule**. |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Backend pool | Select **myBackendPool**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
+ | TCP reset | Select **Enabled**. |
+ | Floating IP | Select **Disabled**. |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+
+ For more information about load-balancing rules, see [Load-balancing rules](manage-rules-how-to.md#load-balancing-rules).
+
+25. Select **Add**.
+
+26. Select the blue **Review + create** button at the bottom of the page.
+
+27. Select **Create**.
+
+## Create multiple instance inbound NAT rule
+
+In this section, you'll create a multiple-instance inbound NAT rule that targets the backend pool of the load balancer.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. Select **myLoadBalancer**.
+
+3. In **myLoadBalancer**, select **Inbound NAT rules** in settings.
+
+4. Select **+ Add** in **Inbound NAT rules**.
+
+5. Enter or select the following information in **Add inbound NAT rule**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myNATRule-SSH**. |
+ | Type | Select **Backend pool**. |
+ | Target backend pool | Select **myBackendPool**. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Frontend port range start | Enter **221**. |
+ | Maximum number of machines in backend pool | Enter **500**. |
+ | Backend port | Enter **22**. |
+ | Protocol | Select **TCP**. |
+
+6. Leave the rest at the default and select **Add**.
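With a multiple-instance rule, each backend pool instance gets its own frontend port, allocated from the range start. A minimal sketch of that mapping, assuming sequential allocation as the SSH steps later in this tutorial show (port 221 for myVM1, port 222 for myVM2); the variable names are illustrative:

```bash
# Frontend port handed out for each backend pool instance,
# given the range start configured on the inbound NAT rule.
range_start=221
vm1_port=$(( range_start + 0 ))   # myVM1 -> ssh -p 221
vm2_port=$(( range_start + 1 ))   # myVM2 -> ssh -p 222
echo "myVM1: $vm1_port, myVM2: $vm2_port"
```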
+
+## Create NAT gateway
+
+In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
+
+For more information about outbound connections and Azure Virtual Network NAT, see [Using Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md) and [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+
+2. In **NAT gateways**, select **+ Create**.
+
+3. In **Create network address translation (NAT) gateway**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialLBPF-rg**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **West US 2**. |
+ | Availability zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **15**. |
+
+4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
+
+5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
+
+6. Enter **myNATGatewayIP** in **Name** in **Add a public IP address**.
+
+7. Select **OK**.
+
+8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
+
+9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
+
+10. Select **myBackendSubnet** under **Subnet name**.
+
+11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
+
+12. Select **Create**.
+
+## Install web server
+
+In this section, you'll SSH to the virtual machines through the inbound NAT rules and install a web server.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. Select **myLoadBalancer**.
+
+3. Select **Frontend IP configuration** in **Settings**.
+
+3. In the **Frontend IP configuration**, make note of the **IP address** for **myFrontend**. In this example, it's **20.99.165.176**.
+
+ :::image type="content" source="./media/tutorial-nat-rule-multi-instance-portal/get-public-ip.png" alt-text="Screenshot of public IP in Azure portal.":::
+
+4. If you're using a Mac or Linux computer, open a Bash prompt. If you're using a Windows computer, open a PowerShell prompt.
+
+5. At your prompt, open an SSH connection to **myVM1**. Replace the IP address with the address you retrieved in the previous step and port **221** you used for the myVM1 inbound NAT rule. Replace the path to the .pem with the path to where the key file was downloaded.
+
+ ```console
+ ssh -i .\Downloads\myKey.pem azureuser@20.99.165.176 -p 221
+ ```
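If the connection is rejected with an unprotected-private-key warning on macOS or Linux, restricting the key file's permissions first usually resolves it. A sketch with an illustrative path (the `touch` line only makes the snippet self-contained; point `key` at your real downloaded key instead):

```bash
key=./myKey.pem       # illustrative path; use your downloaded key file
touch "$key"          # placeholder file so this sketch runs standalone
chmod 400 "$key"      # owner read-only; ssh refuses keys readable by others
ls -l "$key"
```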
+
+ > [!TIP]
 > The SSH key you created can be used the next time you create a VM in Azure. Just select **Use a key stored in Azure** for **SSH public key source**. You already have the private key on your computer, so you won't need to download anything.
+
+6. From your SSH session, update your package sources and then install the latest NGINX package.
+
+ ```bash
+ sudo apt-get -y update
+ sudo apt-get -y install nginx
+ ```
+
+7. Enter `exit` to leave the SSH session.
+
+8. At your prompt, open an SSH connection to **myVM2**. Replace the IP address with the address you retrieved in the previous step and port **222** you used for the myVM2 inbound NAT rule. Replace the path to the .pem with the path to where the key file was downloaded.
+
+ ```console
+ ssh -i .\Downloads\myKey.pem azureuser@20.99.165.176 -p 222
+ ```
+
+9. From your SSH session, update your package sources and then install the latest NGINX package.
+
+ ```bash
+ sudo apt-get -y update
+ sudo apt-get -y install nginx
+ ```
+
+10. Enter `exit` to leave the SSH session.
+
+## Test the web server
+
+In this section, you'll open a web browser and enter the IP address of the load balancer that you retrieved in the previous section.
+
+1. Open your web browser.
+
+2. In the address bar, enter the IP address for the load balancer. In this example, it's **20.99.165.176**.
+
+3. The default NGINX website is displayed.
+
+ :::image type="content" source="./media/tutorial-nat-rule-multi-instance-portal/web-server-test.png" alt-text="Screenshot of testing the NGINX web server.":::
+
+## Clean up resources
+
+If you're not going to continue to use this application, delete
+the virtual machines and load balancer with the following steps:
+
+1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results.
+
+2. Select **TutorialLBPF-rg** in **Resource groups**.
+
+3. Select **Delete resource group**.
+
+4. Enter **TutorialLBPF-rg** in **TYPE THE RESOURCE GROUP NAME:**. Select **Delete**.
+
+## Next steps
+
+Advance to the next article to learn how to create a cross-region load balancer:
+
+> [!div class="nextstepaction"]
+> [Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md)
load-balancer Upgrade Basicinternal Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basicInternal-standard.md
This article introduces a PowerShell script which creates a Standard Load Balanc
An Azure PowerShell script is available that does the following:
-* Creates a Standard Internal SKU Load Balancer in the location that you specify. Note that no [outbound connection](./load-balancer-outbound-connections.md) will not be provided by the Standard Internal Load Balancer.
+* Creates a Standard Internal SKU Load Balancer in the location that you specify. Note that [outbound connection](./load-balancer-outbound-connections.md) will not be provided by the Standard Internal Load Balancer.
* Seamlessly copies the configurations of the Basic SKU Load Balancer to the newly created Standard Load Balancer.
* Seamlessly moves the private IPs from the Basic Load Balancer to the newly created Standard Load Balancer.
* Seamlessly moves the VMs from the backend pool of the Basic Load Balancer to the backend pool of the Standard Load Balancer.
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
Use the client-side metrics, such as requests per second or response time, on th
## Identify the root cause
-When there's a performance issue, you can use the server-side metrics to analyze what the root cause of the problem is. Azure Load Testing can [capture server-side resource metrics](./how-to-update-rerun-test.md) for Azure-hosted applications.
+When there's a performance issue, you can use the server-side metrics to analyze what the root cause of the problem is. Azure Load Testing can [capture server-side resource metrics](./how-to-monitor-server-side-metrics.md) for Azure-hosted applications.
1. Hover over the server-side metrics graphs to compare the values across the different test runs.
load-testing How To Find Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-find-download-logs.md
In this section, you retrieve and download the Azure Load Testing logs from the
## Next steps

-- Learn how to [Monitor server-side application metrics](./how-to-update-rerun-test.md).
+- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
- Learn how to [Get detailed insights for Azure App Service based applications](./how-to-appservice-insights.md).
load-testing How To High Scale Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-high-scale-load.md
In the Apache JMeter script, you define the number of parallel threads. This num
For example, to simulate 1,000 threads (or virtual users), set the number of threads in the Apache JMeter script to 250. Then configure the test with four test engine instances (that is, 4 x 250 threads).
+The location of the Azure Load Testing resource determines the location of the test engine instances. All test engine instances within a Load Testing resource are hosted in the same Azure region.
> [!IMPORTANT]
> For the preview release, Azure Load Testing supports up to 45 engine instances for a test run.
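The arithmetic above (total virtual users = threads per engine × engine instances) can be sketched as a quick shell calculation; the ceiling division and the example numbers come from this article, while the variable names are illustrative:

```bash
# Engines needed to reach a target virtual-user count when each engine
# runs the thread count from the JMeter script (ceiling division).
target_users=1000
threads_per_engine=250
engines=$(( (target_users + threads_per_engine - 1) / threads_per_engine ))
echo "$engines engine instance(s)"   # 4 x 250 threads = 1,000 virtual users
```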
In this section, you configure the scaling settings of your load test.
:::image type="content" source="media/how-to-high-scale-load/configure-test.png" alt-text="Screenshot that shows the 'Configure' and 'Test' buttons on the test details page.":::
-1. On the **Edit test** page, select the **Load** tab. In the **Engine instances** box, enter the number of test engines required to run your test.
+1. On the **Edit test** page, select the **Load** tab. Use the **Engine instances** slider control to update the number of test engine instances, or enter the value directly in the input box.
:::image type="content" source="media/how-to-high-scale-load/edit-test-load.png" alt-text="Screenshot of the 'Load' tab on the 'Edit test' pane.":::
load-testing How To Monitor Server Side Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-monitor-server-side-metrics.md
+
+ Title: Monitor server-side application metrics for load testing
+
+description: Learn how to configure a load test to monitor server-side application metrics by using Azure Load Testing.
++++ Last updated : 02/08/2022+++
+# Monitor server-side application metrics by using Azure Load Testing Preview
+
+You can monitor server-side application metrics for Azure-hosted applications when running a load test with Azure Load Testing Preview. In this article, you'll learn how to configure app components and metrics for your load test.
+
+To capture metrics during your load test, you'll first [select the Azure components](#select-azure-application-components) that make up your application. Optionally, you can then [configure the list of server-side metrics](#select-server-side-resource-metrics) for each Azure component.
+
+Azure Load Testing integrates with Azure Monitor to capture server-side resource metrics for Azure-hosted applications. Read more about which [Azure resource types that Azure Load Testing supports](./resource-supported-azure-resource-types.md).
+
+> [!IMPORTANT]
+> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure Load Testing resource with at least one completed test run. If you need to create an Azure Load Testing resource, see [Tutorial: Run a load test to identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md).
+
+## Select Azure application components
+
+To monitor resource metrics for an Azure-hosted application, you need to specify the list of Azure application components in your load test. Azure Load Testing automatically captures a set of relevant resource metrics for each selected component. When your load test finishes, you can view the server-side metrics in the dashboard.
+
+For the list of Azure components that Azure Load Testing supports, see [Supported Azure resource types](./resource-supported-azure-resource-types.md).
+
+Use the following steps to configure the Azure components for your load test:
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+
+1. On the left pane, select **Tests**, and then select your load test from the list.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/select-test.png" alt-text="Screenshot that shows a list of load tests to select from.":::
+
+1. On the test runs page, select **Configure**, and then select **App Components** to add or remove Azure resources to monitor during the load test.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/configure-app-components.png" alt-text="Screenshot that shows the 'App Components' button for displaying app components to configure for a load test.":::
+
+1. Select or clear the checkboxes next to the Azure resources you want to add or remove, and then select **Apply**.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/modify-app-components.png" alt-text="Screenshot that shows how to add or remove app components from a load test configuration.":::
+
+ When you run the load test, Azure Load Testing will display the default resource metrics in the test run dashboard.
+
+You can change the list of resource metrics at any time. In the next section, you'll view and configure the list of resource metrics.
+
+## Select server-side resource metrics
+
+For each Azure application component, you can select the resource metrics to monitor during your load test.
+
+Use the following steps to view and update the list of resource metrics:
+
+1. On the test runs page, select **Configure**, and then select **Metrics** to select the specific resource metrics to capture during the load test.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/configure-metrics.png" alt-text="Screenshot that shows the 'Metrics' button to configure metrics for a load test.":::
+
+1. Update the list of metrics you want to capture, and then select **Apply**.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/modify-metrics.png" alt-text="Screenshot that shows a list of resource metrics to configure for a load test.":::
+
+ Alternatively, you can update the app components and metrics from the page that shows test result details.
+
+1. Select **Run** to run the load test with the new configuration settings.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/run-load-test.png" alt-text="Screenshot that shows the 'Run' button for running the load test from the test runs page.":::
+
+ Notice that the test result dashboard now shows the updated server-side metrics.
+
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/dashboard-updated-metrics.png" alt-text="Screenshot that shows the updated server-side metrics on the test result dashboard.":::
+
+When you update the configuration of a load test, all future test runs will use that configuration. On the other hand, if you update a test run, the new configuration will only apply to that test run.
+
+## Next steps
+
+- Learn how you can [identify performance problems by comparing metrics across multiple test runs](./how-to-compare-multiple-test-runs.md).
+
+- Learn how to [set up a high-scale load test](./how-to-high-scale-load.md).
+
+- Learn how to [configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
load-testing Resource Supported Azure Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-supported-azure-resource-types.md
Last updated 01/04/2022
Learn which Azure resource types Azure Load Testing Preview supports for server-side monitoring. You can select specific metrics for each resource type to track and report on for a load test.
-To learn how to configure your load test, see [Monitor server-side application metrics](./how-to-update-rerun-test.md).
+To learn how to configure your load test, see [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
> [!IMPORTANT]
> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
This section lists the Azure resource types that Azure Load Testing supports for
## Next steps
-* Learn how to [Monitor server-side application metrics](./how-to-update-rerun-test.md).
+* Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
* Learn how to [Get more insights from App Service diagnostics](./how-to-appservice-insights.md).
* Learn how to [Compare multiple test runs](./how-to-compare-multiple-test-runs.md).
load-testing Tutorial Cicd Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-azure-pipelines.md
# Tutorial: Identify performance regressions with Azure Load Testing Preview and Azure Pipelines
-This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and Azure Pipelines. You'll configure an Azure Pipelines continuous integration and continuous delivery (CI/CD) workflow to run a load test for a sample web application. You'll then use the test results to identify performance regressions.
+This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and Azure Pipelines. You'll configure an Azure Pipelines CI/CD workflow with the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops) to run a load test for a sample web application. You'll then use the test results to identify performance regressions.
If you're using GitHub Actions for your CI/CD workflows, see the corresponding [GitHub Actions tutorial](./tutorial-cicd-github-actions.md).
You'll learn how to:
* An Azure DevOps organization and project. If you don't have an Azure DevOps organization, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up?view=azure-devops&preserve-view=true). If you need help with getting started with Azure Pipelines, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?preserve-view=true&view=azure-devops&tabs=java%2Ctfs-2018-2%2Cbrowser).
* A GitHub account, where you can create a repository. If you don't have one, you can [create one for free](https://github.com/).
-## Set up your repository
+## Set up the sample application repository
To get started, you need a GitHub repository with the sample web application. You'll use this repository to configure an Azure Pipelines workflow to run the load test.
To access Azure resources, create a service connection in Azure DevOps and use r
```azurecli
az role assignment create --assignee "<sp-object-id>" \
  --role "Load Test Contributor" \
+ --scope /subscriptions/<subscription-name-or-id>/resourceGroups/<resource-group-name> \
  --subscription "<subscription-name-or-id>"
```

## Configure the Azure Pipelines workflow to run a load test
-In this section, you'll set up an Azure Pipelines workflow that triggers the load test. The sample application repository contains a pipelines definition file. The pipeline first deploys the sample web application to Azure App Service, and then invokes the load test. The pipeline uses an environment variable to pass the URL of the web application to the Apache JMeter script.
+In this section, you'll set up an Azure Pipelines workflow that triggers the load test.
-First, you'll install the Azure Load Testing extension from the Azure DevOps Marketplace, create a new pipeline, and then connect it to the sample application's forked repository.
+The sample application repository already contains a pipelines definition file. This pipeline first deploys the sample web application to Azure App Service, and then invokes the load test by using the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops). The pipeline uses an environment variable to pass the URL of the web application to the Apache JMeter script.
-1. Install the Azure Load Testing task extension from the Azure DevOps Marketplace.
+1. Install the **Azure Load Testing** task extension from the Azure DevOps Marketplace.
:::image type="content" source="./media/tutorial-cicd-azure-pipelines/browse-marketplace.png" alt-text="Screenshot that shows how to browse the Visual Studio Marketplace for extensions.":::
First, you'll install the Azure Load Testing extension from the Azure DevOps Mar
:::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-select-repo.png" alt-text="Screenshot that shows how to select the sample application's GitHub repository.":::
- The repository contains an *azure-pipeline.yml* pipeline definition file. You'll now modify this definition to connect to your Azure Load Testing service.
+ The repository contains an *azure-pipeline.yml* pipeline definition file. The following snippet shows how to use the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops) in Azure Pipelines:
+
+ ```yml
+ - task: AzureLoadTest@1
+ inputs:
+ azureSubscription: $(serviceConnection)
+ loadTestConfigFile: 'SampleApp.yaml'
+ resourceGroup: $(loadTestResourceGroup)
+ loadTestResource: $(loadTestResource)
+ env: |
+ [
+ {
+ "name": "webapp",
+ "value": "$(webAppName).azurewebsites.net"
+ }
+ ]
+ ```
+
+ You'll now modify the pipeline to connect to your Azure Load Testing service.
1. On the **Review** tab, replace the following placeholder text in the YAML code:
In this tutorial, you'll reconfigure the sample application to accept only secur
The Azure Load Testing task securely passes the secret from the pipeline to the test engine. The secret parameter is used only while you're running the load test, and then the value is discarded from memory.
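The task's `secrets` input takes an array of name/value JSON objects, where each name matches the secret name used in the Apache JMeter test script. A hedged sketch of how this could look alongside the inputs shown earlier (the secret and variable names here are illustrative, not part of the sample application):

```yml
- task: AzureLoadTest@1
  inputs:
    azureSubscription: $(serviceConnection)
    loadTestConfigFile: 'SampleApp.yaml'
    resourceGroup: $(loadTestResourceGroup)
    loadTestResource: $(loadTestResource)
    secrets: |
      [
        {
          "name": "appToken",
          "value": "$(mySecret)"
        }
      ]
```

Because `$(mySecret)` is resolved from a secret pipeline variable, the value never appears in the pipeline logs.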
-## Configure and use the Azure Load Testing task
-
-This section describes the Azure Load Testing task for Azure Pipelines. The task is cross-platform and runs on Windows, Linux, or Mac agents.
-
-You can use the following parameters to configure the Azure Load Testing task:
-
-|Parameter |Description |
-|||
-|`azureSubscription` | *Required*. Name of the Azure Resource Manager service connection. |
-|`loadTestConfigFile` | *Required*. Path to the YAML configuration file for the load test. The path is fully qualified or relative to the default working directory. |
-|`resourceGroup` | *Required*. Name of the resource group that contains the Azure Load Testing resource. |
-|`loadTestResource` | *Required*. Name of an existing Azure Load Testing resource. |
-|`secrets` | Array of JSON objects that consist of the name and value for each secret. The name should match the secret name used in the Apache JMeter test script. |
-|`env` | Array of JSON objects that consist of the name and value for each environment variable. The name should match the variable name used in the Apache JMeter test script. |
-
-The following YAML code snippet describes how to use the task in an Azure Pipelines CI/CD workflow:
-
-```yaml
-- task: AzureLoadTest@1
- inputs:
- azureSubscription: '<Azure service connection>'
- loadTestConfigFile: '< YAML File path>'
- loadTestResource: '<name of the load test resource>'
- resourceGroup: '<name of the resource group of your load test resource>'
- secrets: |
- [
- {
- "name": "<Name of the secret>",
- "value": "$(mySecret1)"
- },
- {
- "name": "<Name of the secret>",
- "value": "$(mySecret1)"
- }
- ]
- env: |
- [
- {
- "name": "<Name of the variable>",
- "value": "<Value of the variable>"
- },
- {
- "name": "<Name of the variable>",
- "value": "<Value of the variable>"
- }
- ]
-```
## Clean up resources

[!INCLUDE [alt-delete-resource-group](../../includes/alt-delete-resource-group.md)]
The following YAML code snippet describes how to use the task in an Azure Pipeli
You've now created an Azure Pipelines CI/CD workflow that uses Azure Load Testing for automatically running load tests. By using pass/fail criteria, you can set the status of the CI/CD workflow. With parameters, you can make the running of load tests configurable.
-* For more information about parameterizing load tests, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
-* For more information about defining test pass/fail criteria, see [Define test criteria](./how-to-define-test-criteria.md).
+* Learn more about the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing?view=azure-devops).
+* Learn more about [Parameterizing a load test](./how-to-parameterize-load-tests.md).
+* Learn more about [Defining test pass/fail criteria](./how-to-define-test-criteria.md).
+* Learn more about [Configuring server-side monitoring](./how-to-monitor-server-side-metrics.md).
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-github-actions.md
First, you'll create an Azure Active Directory [service principal](../active-dir
```azurecli
az role assignment create --assignee "<sp-object-id>" \
  --role "Load Test Contributor" \
+ --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name> \
  --subscription "<subscription-id>"
```
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
Before you can create your logic app, create a local project so that you can man
![Screenshot that shows the "Create new Stateful Workflow (3/4)" box and "Fabrikam-Stateful-Workflow" as the workflow name.](./media/create-single-tenant-workflows-visual-studio-code/name-your-workflow.png)
+ > [!NOTE]
+ > You might get an error named **azureLogicAppsStandard.createNewProject** with the error message,
+ > **Unable to write to Workspace Settings because azureFunctions.suppressProject is not a registered configuration**.
+ > If you do, try installing the [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions), either directly from the Visual Studio Marketplace or from inside Visual Studio Code.
+ Visual Studio Code finishes creating your project, and opens the **workflow.json** file for your workflow in the code editor. > [!NOTE]
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022 ms.suite: integration
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ **azureml-synapse**
  + Fixed an issue where the magic widget disappeared.
+ **azureml-train-automl-runtime**
- + Updating AutoML dependencies to support python 3.8. This change will break compatibility with models trained with SDK 1.37 or below due to newer Pandas interfaces being saved in the model.
+ + Updating AutoML dependencies to support Python 3.8. This change will break compatibility with models trained with SDK 1.37 or below due to newer Pandas interfaces being saved in the model.
+ Automl training now supports numpy version 1.19 + Fix automl reset index logic for ensemble models in automl_setup_model_explanations API + In automl, use lightgbm surrogate model instead of linear surrogate model for sparse case after latest lightgbm version upgrade
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ Throw exception and clean up workspace and dependent resources if workspace private endpoint creation fails. + Support workspace sku upgrade in workspace update method. + **azureml-datadrift**
- + Update matplotlib version from 3.0.2 to 3.2.1 to support python 3.8.
+ + Update matplotlib version from 3.0.2 to 3.2.1 to support Python 3.8.
+ **azureml-dataprep** + Added support of web url data sources with `Range` or `Head` request. + Improved stability for file dataset mount and download.
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ **azureml-datadrift** + Data Drift results query from the SDK had a bug that didn't differentiate the minimum, maximum, and mean feature metrics, resulting in duplicate values. We have fixed this bug by prefixing target or baseline to the metric names. Before: duplicate min, max, mean. After: target_min, target_max, target_mean, baseline_min, baseline_max, baseline_mean. + **azureml-dataprep**
- + Improve handling of write restricted python environments when ensuring .NET Dependencies required for data delivery.
+ + Improve handling of write restricted Python environments when ensuring .NET Dependencies required for data delivery.
+ Fixed Dataflow creation on file with leading empty records. + Added error handling options for `to_partition_iterator` similar to `to_pandas_dataframe`. + **azureml-interpret**
Access the following web-based authoring tools from the studio:
### Azure Machine Learning SDK for Python v1.2.0 + **Breaking changes**
- + Drop support for python 2.7
+ + Drop support for Python 2.7
+ **Bug fixes and improvements** + **azure-cli-ml**
Access the following web-based authoring tools from the studio:
+ **Feature deprecation** + **Python 2.7**
- + Last version to support python 2.7
+ + Last version to support Python 2.7
+ **Breaking changes** + **Semantic Versioning 2.0.0**
Access the following web-based authoring tools from the studio:
+ **azureml-contrib-interpret** + Removed text explainers from azureml-contrib-interpret as text explanation has been moved to the interpret-text repo that will be released soon. + **azureml-core**
- + Dataset: usages for file dataset no longer depend on numpy and pandas to be installed in the python env.
+ + Dataset: usages for file dataset no longer depend on numpy and pandas to be installed in the Python env.
+ Changed LocalWebservice.wait_for_deployment() to check the status of the local Docker container before trying to ping its health endpoint, greatly reducing the amount of time it takes to report a failed deployment. + Fixed the initialization of an internal property used in LocalWebservice.reload() when the service object is created from an existing deployment using the LocalWebservice() constructor. + Edited error message for clarification.
The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ Exception will be thrown out when either coarse grain or fine grained timestamp column is not included in keep columns list with indication for user that keeping can be done after either including timestamp column in keep column list or call with_time_stamp with None value to release timestamp columns. + Added logging for the size of a registered model. + **azureml-explain-model**
- + Fixed warning printed to console when "packaging" python package is not installed: "Using older than supported version of lightgbm, please upgrade to version greater than 2.2.1"
+ + Fixed warning printed to console when "packaging" Python package is not installed: "Using older than supported version of lightgbm, please upgrade to version greater than 2.2.1"
+ Fixed download model explanation with sharding for global explanations with many features + Fixed mimic explainer missing initialization examples on output explanation + Fixed immutable error on set properties when uploading with explanation client using two different types of models
At the time of this release, the following browsers are supported: Chrome, Fire
+ **azureml-pipeline-core** + Added support to create, update, and use PipelineDrafts - can be used to maintain mutable pipeline definitions and use them interactively to run + **azureml-train-automl**
- + Created feature to install specific versions of gpu-capable pytorch v1.1.0, :::no-loc text="cuda"::: toolkit 9.0, pytorch-transformers, which is required to enable BERT/ XLNet in the remote python runtime environment.
+ + Created feature to install specific versions of gpu-capable pytorch v1.1.0, :::no-loc text="cuda"::: toolkit 9.0, pytorch-transformers, which is required to enable BERT/ XLNet in the remote Python runtime environment.
+ **azureml-train-core** + Early failure of some hyperparameter space definition errors directly in the SDK instead of server side.
At the time of this release, the following browsers are supported: Chrome, Fire
+ Improve reliability of API calls by expanding retries to common requests library exceptions. + Add support for submitting runs from a submitted run. + Fixed expiring SAS token issue in FileWatcher, which caused files to stop being uploaded after their initial token had expired.
- + Supported importing HTTP csv/tsv files in dataset python SDK.
+ + Supported importing HTTP csv/tsv files in dataset Python SDK.
+ Deprecated the Workspace.setup() method. Warning message shown to users suggests using create() or get()/from_config() instead.
- + Added Environment.add_private_pip_wheel(), which enables uploading private custom python packages `whl`to the workspace and securely using them to build/materialize the environment.
+ + Added Environment.add_private_pip_wheel(), which enables uploading private custom Python packages `whl` to the workspace and securely using them to build/materialize the environment.
+ You can now update the TLS/SSL certificate for the scoring endpoint deployed on AKS cluster both for Microsoft generated and customer certificate. + **azureml-explain-model** + Added parameter to add a model ID to explanations on upload.
machine-learning Designer Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/designer-error-codes.md
To get more help, we recommend that you post the detailed message that accompani
## Execute Python Script component
-Search **in azureml_main** in **70_driver_logs** of **Execute Python Script component** and you could find which line occurred error. For example, "File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main" indicates that the error occurred in the 17 line of your python script.
+Search for **in azureml_main** in **70_driver_logs** of the **Execute Python Script component** to find which line caused the error. For example, "File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main" indicates that the error occurred in line 17 of your Python script.
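The traceback line described above has a predictable shape, so it can also be located programmatically. A minimal sketch (the log excerpt below is a made-up example, not real driver-log output):

```python
import re

def find_azureml_main_line(log_text):
    """Return the user-script line number from an 'in azureml_main' traceback entry, or None."""
    match = re.search(r'line (\d+), in azureml_main', log_text)
    return int(match.group(1)) if match else None

# Hypothetical excerpt from a 70_driver_log file
excerpt = 'File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main'
print(find_azureml_main_line(excerpt))  # 17
```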
## Distributed training
machine-learning Execute Python Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/execute-python-script.md
The Execute Python Script component contains sample Python code that you can use
> [!IMPORTANT] > Please use unique and meaningful names for files in the script bundle, since some common words (like `test` and `app`) are reserved for built-in services.
- Following is a script bundle example, which contains a python script file and a txt file:
+ The following script bundle example contains a Python script file and a txt file:
> [!div class="mx-imgBorder"] > ![Script bundle example](media/module/python-script-bundle.png)
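A script bundle like the one pictured above is a zip archive. A minimal sketch using Python's standard `zipfile` module to package one script and one text file (file names and contents here are hypothetical, chosen to avoid the reserved words mentioned above):

```python
import zipfile
from pathlib import Path

# Hypothetical files to bundle; avoid reserved names like `test` or `app`.
Path("my_script.py").write_text("print('hello from the bundle')\n")
Path("my_data.txt").write_text("some lookup data\n")

# Package both files into a script bundle archive
with zipfile.ZipFile("script_bundle.zip", "w") as bundle:
    bundle.write("my_script.py")
    bundle.write("my_data.txt")

# Verify the archive contents
with zipfile.ZipFile("script_bundle.zip") as bundle:
    print(sorted(bundle.namelist()))  # ['my_data.txt', 'my_script.py']
```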
The Execute Python Script component contains sample Python code that you can use
# Execution logic goes here print(f'Input pandas.DataFrame #1: {dataframe1}')
- # Test the custom defined python function
+ # Test the custom defined Python function
dataframe1 = my_func(dataframe1) # Test to read custom uploaded files by relative path
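The entry point sketched in the snippet above follows a fixed shape: the component calls `azureml_main`, passes the connected inputs, and expects the results back as a tuple. A minimal pure-Python stand-in (pandas is omitted so the sketch is self-contained; in the real component the inputs are pandas DataFrames, and `my_func` here is a hypothetical custom function):

```python
def my_func(data):
    # Stand-in for a custom user-defined function; drops missing rows
    return [row for row in data if row is not None]

def azureml_main(dataframe1=None, dataframe2=None):
    """Entry point called by the Execute Python Script component; must return a tuple."""
    print(f"Input #1: {dataframe1}")
    dataframe1 = my_func(dataframe1)
    return dataframe1,

# Simulate the component invoking the entry point with one input
result, = azureml_main([1, None, 2])
print(result)  # [1, 2]
```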
The Execute Python Script component contains sample Python code that you can use
If the component completed, check whether the output is as expected.
- If the component is failed, you need to do some troubleshooting. Select the component, and open **Outputs+logs** in the right pane. Open **70_driver_log.txt** and search **in azureml_main**, then you could find which line caused the error. For example, "File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main" indicates that the error occurred in the 17 line of your python script.
+ If the component failed, you need to do some troubleshooting. Select the component, and open **Outputs+logs** in the right pane. Open **70_driver_log.txt** and search for **in azureml_main** to find which line caused the error. For example, "File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main" indicates that the error occurred in line 17 of your Python script.
## Results
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-register-datasets.md
partition_keys = new_dataset.partition_keys # ['country']
After you're done wrangling your data, you can [register](#register-datasets) your dataset, and then load it into your notebook for data exploration prior to model training.
-For FileDatasets, you can either **mount** or **download** your dataset, and apply the python libraries you'd normally use for data exploration. [Learn more about mount vs download](how-to-train-with-datasets.md#mount-vs-download).
+For FileDatasets, you can either **mount** or **download** your dataset, and apply the Python libraries you'd normally use for data exploration. [Learn more about mount vs download](how-to-train-with-datasets.md#mount-vs-download).
```python
# download the dataset
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
The following table provides an overview of scenarios to help you choose what wo
| Scenario | Inference HTTP Server | Local endpoint | |--|--|--|
-| Update local python environment, **without** Docker image rebuild | Yes | No |
+| Update local Python environment, **without** Docker image rebuild | Yes | No |
| Update scoring script | Yes | Yes | | Update deployment configurations (deployment, environment, code, model) | No | Yes | | VS Code Debugger integration | Yes | Yes |
machine-learning How To Debug Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-pipelines.md
run.log("scalar_value", 0.95)
# Python print statement
print("I am a python print statement, I will be sent to the driver logs.")
-# Initialize python logger
+# Initialize Python logger
logger = logging.getLogger(__name__)
logger.setLevel(args.log_level)
-# Plain python logging statements
+# Plain Python logging statements
logger.debug("I am a plain debug statement, I will be sent to the driver logs.")
logger.info("I am a plain info statement, I will be sent to the driver logs.")
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-inferencing-gpus.md
The conda environment file specifies the dependencies for the service. It includ
```yaml
name: project_environment
dependencies:
- # The python interpreter version.
+ # The Python interpreter version.
  # Currently Azure ML only supports 3.5.2 and later.
  - python=3.6.2
machine-learning How To Designer Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-designer-python.md
This article uses the sample dataset, **Automobile price data (Raw)**.
1. Connect the output port of the dataset to the top-left input port of the **Execute Python Script** component. The designer exposes the input as a parameter to the entry point script.
- The right input port is reserved for zipped python libraries.
+ The right input port is reserved for zipped Python libraries.
![Connect datasets](media/how-to-designer-python/connect-dataset.png)
machine-learning How To Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-differential-privacy.md
Use pip to install the [SmartNoise Python packages](https://pypi.org/project/ope
`pip install opendp-smartnoise`
-To verify that the packages are installed, launch a python prompt and type:
+To verify that the packages are installed, launch a Python prompt and type:
```python
import opendp.smartnoise.core
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
The following table contains the parameters accepted by the server:
The following steps explain how the Azure Machine Learning inference HTTP server handles incoming requests:
-1. A python CLI wrapper sits around the server's network stack and is used to start the server.
+1. A Python CLI wrapper sits around the server's network stack and is used to start the server.
1. A client sends a request to the server. 1. When a request is received, it goes through the [WSGI](https://www.fullstackpython.com/wsgi-servers.html) server and is then dispatched to one of the workers. - [Gunicorn](https://docs.gunicorn.org/) is used on __Linux__.
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-environments.md
The examples in this article show how to:
* Use an environment for training. * Use an environment for web service deployment.
-For a high-level overview of how environments work in Azure Machine Learning, see [What are ML environments?](concept-environments.md) For information about managing environments in the Azure ML studio, see [Manage environments in the studio](how-to-manage-environments-in-studio.md). For information about configuring development environments, see [here](how-to-configure-environment.md).
+For a high-level overview of how environments work in Azure Machine Learning, see [What are ML environments?](concept-environments.md) For information about managing environments in the Azure ML studio, see [Manage environments in the studio](how-to-manage-environments-in-studio.md). For information about configuring development environments, see [Set up a Python development environment for Azure ML](how-to-configure-environment.md).
## Prerequisites
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Title: Built-in policy definitions for Azure Database for MariaDB description: Lists Azure Policy built-in policy definitions for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
media-services Limits Quotas Constraints Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/limits-quotas-constraints-reference.md
This article lists some of the most common Microsoft Azure Media Services limits
## Storage limits
-Azure Storage block blog limits apply to storage accounts used with Media Services. See [Azure Blob Storage limits](/azure/azure-resource-manager/management/azure-subscription-service-limits.md#azure-blob-storage-limits).
+Azure Storage block blob limits apply to storage accounts used with Media Services. See [Azure Blob Storage limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-blob-storage-limits).
This limit includes the total stored size of the files that you upload for encoding and the sizes of the encoded files. The limit on file size for encoding is a separate limit. See [File size for encoding](#file-size-for-encoding-limit).
media-services Migrate V 2 V 3 Migration Scenario Based Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-publishing.md
After migration, you should avoid making any calls to the v2 API to modify strea
### How to guides - [Manage streaming endpoints with Media Services v3](stream-manage-streaming-endpoints-how-to.md)-- [CLI example: Publish an asset](cli-publish-asset.md) - [Create a streaming locator and build URLs](create-streaming-locator-build-url.md) - [Download the results of a job](job-download-results-how-to.md) - [Signal descriptive audio tracks](signal-descriptive-audio-howto.md)
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md
For an Azure VM Assessment, the assessment reviews the following properties of a
Property | Details | Azure readiness status | |
-**Boot type** | Azure supports VMs with a boot type of BIOS, not UEFI. | Conditionally ready if the boot type is UEFI
+**Boot type** | Azure supports the UEFI boot type for the operating systems listed [here](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure). | Not ready if the boot type is UEFI and the operating system running on the VM is Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, or Windows Server 2008 R2
**Cores** | Each server must have no more than 128 cores, which is the maximum number an Azure VM supports.<br/><br/> If performance history is available, Azure Migrate considers the utilized cores for comparison. If the assessment settings specify a comfort factor, the number of utilized cores is multiplied by the comfort factor.<br/><br/> If there's no performance history, Azure Migrate uses the allocated cores to apply the comfort factor. | Ready if the number of cores is within the limit **RAM** | Each server must have no more than 3,892 GB of RAM, which is the maximum size an Azure M-series Standard_M128m&nbsp;<sup>2</sup> VM supports. [Learn more](../virtual-machines/sizes.md).<br/><br/> If performance history is available, Azure Migrate considers the utilized RAM for comparison. If a comfort factor is specified, the utilized RAM is multiplied by the comfort factor.<br/><br/> If there's no history, the allocated RAM is used to apply a comfort factor.<br/><br/> | Ready if the amount of RAM is within the limit **Storage disk** | The allocated size of a disk must be no more than 64 TB.<br/><br/> The number of disks attached to the server, including the OS disk, must be 65 or fewer. | Ready if the disk size and number are within the limits
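The comfort-factor arithmetic described in the table above can be sketched as follows (the limit comes from the table; the utilized-core and comfort-factor values are purely illustrative, not an official formula):

```python
AZURE_MAX_CORES = 128  # maximum cores an Azure VM supports, per the table above

def effective_cores(utilized_cores, comfort_factor):
    """Utilized cores scaled by the comfort factor before comparing to the Azure limit."""
    return utilized_cores * comfort_factor

# Illustrative values only: 48 utilized cores with a 1.3x comfort factor
needed = effective_cores(48, 1.3)
print(round(needed, 2), needed <= AZURE_MAX_CORES)
```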
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment.md
This table lists help for fixing the following assessment readiness issues.
**Issue** | **Fix** |
-Unsupported boot type | Azure doesn't support VMs with an EFI boot type. Convert the boot type to BIOS before you run a migration. <br/><br/>You can use Azure Migrate Server Migration to handle the migration of such VMs. It will convert the boot type of the VM to BIOS during the migration.
+Unsupported boot type | Azure doesn't support the UEFI boot type for VMs running Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, or Windows Server 2008 R2. Check the list of operating systems that support UEFI-based machines [here](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure).
Conditionally supported Windows operating system | The operating system has passed its end-of-support date and needs a Custom Support Agreement for [support in Azure](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading before you migrate to Azure. Review information about [preparing servers running Windows Server 2003](prepare-windows-server-2003-migration.md) for migration to Azure. Unsupported Windows operating system | Azure supports only [selected Windows OS versions](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading the server before you migrate to Azure. Conditionally endorsed Linux OS | Azure endorses only [selected Linux OS versions](../virtual-machines/linux/endorsed-distros.md). Consider upgrading the server before you migrate to Azure. For more information, see [this website](#linux-vms-are-conditionally-ready-in-an-azure-vm-assessment).
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-cli.md
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-> [!IMPORTANT]
-> Read replicas in Azure Database for MySQL - Flexible Server is in preview.
- In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL flexible server using the Azure CLI. To learn more about read replicas, see the [overview](concepts-read-replicas.md). > [!Note]
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-portal.md
Last updated 06/17/2021
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-> [!IMPORTANT]
-> Read replicas in Azure Database for MySQL - Flexible Server is in preview.
- In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL flexible server using the Azure portal. > [!Note]
mysql Howto Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-common-errors.md
Last updated 5/21/2021
-# Commonly encountered errors during or post migration to Azure Database for MySQL
+# Troubleshoot errors commonly encountered during or post migration to Azure Database for MySQL
[!INCLUDE[applies-to-mysql-single-flexible-server](includes/applies-to-mysql-single-flexible-server.md)]
mysql Howto Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-query-performance.md
Title: Troubleshoot query performance - Azure Database for MySQL
-description: Learn how to use EXPLAIN to troubleshoot query performance in Azure Database for MySQL.
+ Title: Profile query performance - Azure Database for MySQL
+description: Learn how to profile query performance in Azure Database for MySQL by using EXPLAIN.
Previously updated : 3/18/2020 Last updated : 3/10/2022
-# How to use EXPLAIN to profile query performance in Azure Database for MySQL
+# Profile query performance in Azure Database for MySQL using EXPLAIN
[!INCLUDE[applies-to-mysql-single-flexible-server](includes/applies-to-mysql-single-flexible-server.md)]
possible_keys: NULL
Extra: Using where ```
-As can be seen from this example, the value of *key* is NULL. This output means MySQL cannot find any indexes optimized for the query and it performs a full table scan. Let's optimize this query by adding an index on the **ID** column.
+As can be seen from this example, the value of *key* is NULL. This output means MySQL can't find any indexes optimized for the query and it performs a full table scan. Let's optimize this query by adding an index on the **ID** column.
```sql
mysql> ALTER TABLE tb1 ADD KEY (id);
possible_keys: NULL
Extra: Using where; Using temporary; Using filesort ```
-As can be seen from the output, MySQL does not use any indexes because no proper indexes are available. It also shows *Using temporary; Using file sort*, which means MySQL creates a temporary table to satisfy the **GROUP BY** clause.
+As can be seen from the output, MySQL doesn't use any indexes because no proper indexes are available. It also shows *Using temporary; Using file sort*, which means MySQL creates a temporary table to satisfy the **GROUP BY** clause.
Creating an index on column **c2** alone makes no difference, and MySQL still needs to create a temporary table:
possible_keys: NULL
Extra: Using where; Using index ```
-The EXPLAIN now shows that MySQL is able to use combined index to avoid additional sorting since the index is already sorted.
+The EXPLAIN now shows that MySQL can use a combined index to avoid additional sorting since the index is already sorted.
## Conclusion
-Using EXPLAIN and different type of Indexes can increase performance significantly. Having an index on the table does not necessarily mean MySQL would be able to use it for your queries. Always validate your assumptions using EXPLAIN and optimize your queries using indexes.
+Using EXPLAIN and different types of indexes can increase performance significantly. Having an index on the table doesn't necessarily mean MySQL can use it for your queries. Always validate your assumptions using EXPLAIN and optimize your queries using indexes.
## Next steps
mysql Howto Troubleshoot Sys Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-sys-schema.md
Title: Utilize sys_schema - Azure Database for MySQL
-description: Learn how to use sys_schema to find performance issues and maintain database in Azure Database for MySQL.
+ Title: Use the sys_schema - Azure Database for MySQL
+description: Learn how to use the sys_schema to find performance issues and maintain databases in Azure Database for MySQL.
Previously updated : 3/30/2020 Last updated : 3/10/2022
-# How to use sys_schema for performance tuning and database maintenance in Azure Database for MySQL
+# Tune performance and maintain databases in Azure Database for MySQL using the sys_schema
[!INCLUDE[applies-to-mysql-single-flexible-server](includes/applies-to-mysql-single-flexible-server.md)]
-The MySQL performance_schema, first available in MySQL 5.5, provides instrumentation for many vital server resources such as memory allocation, stored programs, metadata locking, etc. However, the performance_schema contains more than 80 tables, and getting the necessary information often requires joining tables within the performance_schema, as well as tables from the information_schema. Building on both performance_schema and information_schema, the sys_schema provides a powerful collection of [user-friendly views](https://dev.mysql.com/doc/refman/5.7/en/sys-schema-views.html) in a read-only database and is fully enabled in Azure Database for MySQL version 5.7.
+The MySQL performance_schema, first available in MySQL 5.5, provides instrumentation for many vital server resources such as memory allocation, stored programs, metadata locking, etc. However, the performance_schema contains more than 80 tables, and getting the necessary information often requires joining tables within the performance_schema with tables from the information_schema. Building on both performance_schema and information_schema, the sys_schema provides a powerful collection of [user-friendly views](https://dev.mysql.com/doc/refman/5.7/en/sys-schema-views.html) in a read-only database and is fully enabled in Azure Database for MySQL version 5.7.
:::image type="content" source="./media/howto-troubleshoot-sys-schema/sys-schema-views.png" alt-text="views of sys_schema":::
To troubleshoot database performance issues, it may be beneficial to identify th
:::image type="content" source="./media/howto-troubleshoot-sys-schema/summary-by-statement.png" alt-text="summary by statement":::
-In this example Azure Database for MySQL spent 53 minutes flushing the slog query log 44579 times. That's a long time and many IOs. You can reduce this activity by either disable your slow query log or decrease the frequency of slow query login Azure portal.
+In this example, Azure Database for MySQL spent 53 minutes flushing the slow query log 44579 times. That's a long time and many IOs. You can reduce this activity by either disabling your slow query log or decreasing the frequency of slow query logging in the Azure portal.
## Database maintenance
In this example Azure Database for MySQL spent 53 minutes flushing the slog quer
> [!IMPORTANT] > Querying this view can impact performance. It is recommended to perform this troubleshooting during off-peak business hours.
-The InnoDB buffer pool resides in memory and is the main cache mechanism between the DBMS and storage. The size of the InnoDB buffer pool is tied to the performance tier and cannot be changed unless a different product SKU is chosen. As with memory in your operating system, old pages are swapped out to make room for fresher data. To find out which tables consume most of the InnoDB buffer pool memory, you can query the *sys.innodb_buffer_stats_by_table* view.
+The InnoDB buffer pool resides in memory and is the main cache mechanism between the DBMS and storage. The size of the InnoDB buffer pool is tied to the performance tier and can't be changed unless a different product SKU is chosen. As with memory in your operating system, old pages are swapped out to make room for fresher data. To find out which tables consume most of the InnoDB buffer pool memory, you can query the *sys.innodb_buffer_stats_by_table* view.
:::image type="content" source="./media/howto-troubleshoot-sys-schema/innodb-buffer-status.png" alt-text="InnoDB buffer status":::
-In the graphic above, it is apparent that other than system tables and views, each table in the mysqldatabase033 database, which hosts one of my WordPress sites, occupies 16 KB, or 1 page, of data in memory.
+In the graphic above, it's apparent that other than system tables and views, each table in the mysqldatabase033 database, which hosts one of my WordPress sites, occupies 16 KB, or 1 page, of data in memory.
### *Sys.schema_unused_indexes* & *sys.schema_redundant_indexes*
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/policy-reference.md
Title: Built-in policy definitions for Azure Database for MySQL description: Lists Azure Policy built-in policy definitions for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
networking Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure networking description: Sample Azure Resource Graph queries for Azure networking showing use of resource types and tables to access Azure networking related resources and properties. Previously updated : 02/16/2022 Last updated : 03/08/2022
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
openshift Howto Create A Storageclass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-storageclass.md
ARO_RESOURCE_GROUP=aro-rg
CLUSTER=cluster
ARO_SERVICE_PRINCIPAL_ID=$(az aro show -g $ARO_RESOURCE_GROUP -n $CLUSTER --query servicePrincipalProfile.clientId -o tsv)
-az role assignment create --role Contributor --assignee $ARO_SERVICE_PRINCIPAL_ID -g $AZURE_FILES_RESOURCE_GROUP
+az role assignment create --role Contributor --scope /subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName --assignee $ARO_SERVICE_PRINCIPAL_ID
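# The updated command above targets the assignment with a full Azure resource ID
# in --scope. A sketch of composing that scope from its parts (the subscription
# ID below is a placeholder, and AZURE_FILES_RESOURCE_GROUP falls back to a
# sample name if it was not already set earlier in the script):
SUBSCRIPTION_ID=00000000-0000-0000-0000-000000000000
AZURE_FILES_RESOURCE_GROUP=${AZURE_FILES_RESOURCE_GROUP:-azure-files-rg}
SCOPE="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${AZURE_FILES_RESOURCE_GROUP}"
echo "$SCOPE"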
```

### Set ARO cluster permissions
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/policy-reference.md
Title: Built-in policy definitions for Azure Database for PostgreSQL description: Lists Azure Policy built-in policy definitions for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/15/2022 Last updated : 03/08/2022
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-lineage-user-guide.md
This article provides an overview of the data lineage features in Azure Purview
One of the platform features of Azure Purview is the ability to show the lineage between datasets created by data processes. Systems like Data Factory, Data Share, and Power BI capture the lineage of data as it moves. Custom lineage reporting is also supported via Atlas hooks and REST API.

## Lineage collection
- Metadata collected in Azure Purview from enterprise data systems are stitched across to show an end to end data lineage. Data systems that collect lineage into Azure Purview are broadly categorized into following three types.
+ Metadata collected in Azure Purview from enterprise data systems is stitched together to show end-to-end data lineage. Data systems that collect lineage into Azure Purview are broadly categorized into the following three types:
+
+ - [Data processing systems](#data-processing-systems)
+ - [Data storage systems](#data-storage-systems)
+ - [Data analytics and reporting systems](#data-analytics-and-reporting-systems)
+
+Each system supports a different level of lineage scope. Check the sections below, or your system's individual lineage article, to confirm the scope of lineage currently available.
-### Data processing system
+### Data processing systems
Data integration and ETL tools can push lineage into Azure Purview at execution time. Tools such as Data Factory, Data Share, Synapse, Azure Databricks, and so on, belong to this category of data systems. The data processing systems reference datasets as sources from different databases and storage solutions to create target datasets. The data processing systems currently integrated with Azure Purview for lineage are listed in the table below.

| Data processing system | Supported scope |
| - | - |
| Azure Data Factory | [Copy activity](how-to-link-azure-data-factory.md#copy-activity-support) <br> [Data flow activity](how-to-link-azure-data-factory.md#data-flow-support) <br> [Execute SSIS package activity](how-to-link-azure-data-factory.md#execute-ssis-package-support) |
| Azure Synapse Analytics | [Copy activity](how-to-lineage-azure-synapse-analytics.md#copy-activity-support) <br> [Data flow activity](how-to-lineage-azure-synapse-analytics.md#data-flow-support) |
+| Azure SQL Database (Preview) | [Lineage extraction](register-scan-azure-sql-database.md?tabs=sql-authentication#lineagepreview) |
| Azure Data Share | [Share snapshot](how-to-link-azure-data-share.md) |

### Data storage systems
Databases & storage solutions such as Oracle, Teradata, and SAP have query engin
| | [SAP ECC](register-scan-sapecc-source.md) |
| | [SAP S/4HANA](register-scan-saps4hana-source.md) |
-### Data analytics & reporting systems
+### Data analytics and reporting systems
Data systems like Azure ML and Power BI report lineage into Azure Purview. These systems use the datasets from storage systems and process them through their meta model to create BI dashboards, ML experiments, and so on.

| Data analytics & reporting system | Supported scope |
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md
Currently, the following data sources are supported to have a managed private en
- Azure Blob Storage
- Azure Data Lake Storage Gen 2
- Azure SQL Database
-- Azure SQL Database Managed Instances
+- Azure SQL Database Managed Instance
- Azure Cosmos DB
- Azure Synapse Analytics
- Azure Files
purview How To Lineage Spark Atlas Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-spark-atlas-connector.md
The connectors require a version of Spark 2.4.0+. But Spark version 3 is not sup
| spark.sql.queryExecutionListeners | 2.3.0 |
| spark.sql.streaming.streamingQueryListeners | 2.4.0 |
-If the Spark cluster version is below 2.4.0, Stream query lineage and most of the query lineage will not be captured.
+>[!IMPORTANT]
+> * If the Spark cluster version is below 2.4.0, Stream query lineage and most of the query lineage will not be captured.
+>
+> * Spark version 3 is not supported.
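As a sketch of how those listener properties are wired up when submitting a job, assuming the listener class names from the open-source spark-atlas-connector project (verify them against the connector build you package; the jar name and `your_job.py` are placeholders):

```shell
spark-submit \
  --jars spark-atlas-connector-assembly.jar \
  --conf spark.extraListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker \
  --conf spark.sql.queryExecutionListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker \
  --conf spark.sql.streaming.streamingQueryListeners=com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker \
  your_job.py
```

On Databricks the same properties would typically go into the cluster's Spark config rather than a `spark-submit` invocation.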
### Step 1. Prepare Spark Atlas connector package

The following steps are documented based on Databricks as an example:
purview Register Scan Azure Sql Database Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database-managed-instance.md
This article outlines how to register an Azure SQL Database Managed Instance, a
* [Configure public endpoint in Azure SQL Managed Instance](../azure-sql/managed-instance/public-endpoint-configure.md)

> [!Note]
- > We now support scanning Azure SQL Database Managed Instances that are configured with private endpoints using Azure Purview ingestion private endpoints and a self-hosted integration runtime VM.
+ > We now support scanning Azure SQL Database Managed Instances over the private connection using Azure Purview ingestion private endpoints and a self-hosted integration runtime VM.
> For more information related to prerequisites, see [Connect to your Azure Purview and scan data sources privately and securely](./catalog-private-link-end-to-end.md)

## Register
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Select your method of authentication from the tabs below for steps to authentica
> [!Note]
> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. It takes about **15 minutes** after granting permission before the Azure Purview account has the appropriate permissions to scan the resource(s).
-You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a sign in for Azure SQL Database. You'll' need **username** and **password** for the next steps.
+1. You'll need a SQL login with at least `db_datareader` permissions to be able to access the information Azure Purview needs to scan the database. You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a sign in for Azure SQL Database. You'll need to save the **username** and **password** for the next steps.
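The step above can be sketched in T-SQL composed from the shell. The login name is hypothetical, and the password placeholders stand in for the secret you'll store in Key Vault in the next steps; execution via `sqlcmd` is left commented so you can substitute your own server and admin credentials:

```shell
# Hypothetical login name for the Azure Purview scan.
LOGIN_NAME=purview_reader

# T-SQL to run in the master database (save this password for Key Vault):
CREATE_LOGIN="CREATE LOGIN ${LOGIN_NAME} WITH PASSWORD = '<strong-password>';"

# T-SQL to run in each database to be scanned (db_datareader is sufficient):
GRANT_READ="CREATE USER ${LOGIN_NAME} FOR LOGIN ${LOGIN_NAME};
ALTER ROLE db_datareader ADD MEMBER ${LOGIN_NAME};"

# Execute with sqlcmd (or the query editor in the Azure portal), for example:
# sqlcmd -S myserver.database.windows.net -d master -U myadmin -P '<admin-password>' -Q "$CREATE_LOGIN"
echo "$CREATE_LOGIN"
echo "$GRANT_READ"
```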
-1. Navigate to your key vault in the Azure portal
+1. Navigate to your key vault in the Azure portal.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-key-vault.png" alt-text="Screenshot that shows the key vault.":::
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret.png" alt-text="Screenshot that shows the key vault option to generate a secret.":::
-1. Enter the **Name** and **Value** as the *password* from your Azure SQL Database
+1. Enter the **Name** and **Value** as the *password* from your Azure SQL Database.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret-sql.png" alt-text="Screenshot that shows the key vault option to enter the sql secret values.":::
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
1. If your key vault isn't connected to Azure Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
-1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan
+1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-credentials.png" alt-text="Screenshot that shows the key vault option to set up credentials.":::
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-resource-group.md
The limit for Azure Purview policies that can be enforced by Storage accounts is
Check blog, demo and related tutorials

* [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
-* [Demo of data owner access policies for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Demo of data owner access policies for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4.)
* [Fine-grain data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 02/14/2022 Last updated : 03/10/2022
The following table provides a brief description of each built-in role. Click th
> | [Log Analytics Reader](#log-analytics-reader) | Log Analytics Reader can view and search all monitoring data as well as view monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources. | 73c42c96-874c-492b-b04d-ab87d138a893 |
> | [Schema Registry Contributor (Preview)](#schema-registry-contributor-preview) | Read, write, and delete Schema Registry groups and schemas. | 5dffeca3-4936-4216-b2bc-10343a5abb25 |
> | [Schema Registry Reader (Preview)](#schema-registry-reader-preview) | Read and list Schema Registry groups and schemas. | 2c56ea50-c6b3-40a6-83c0-9d98858bc7d2 |
+> | [Stream Analytics Query Tester](#stream-analytics-query-tester) | Lets you perform query testing without creating a stream analytics job first | 1ec5b3c1-b17e-4e25-8312-2acb3c3c5abf |
> | **AI + machine learning** | | |
> | [AzureML Data Scientist](#azureml-data-scientist) | Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself. | f6c7c914-8db3-469d-8ca1-694a8f32e121 |
> | [Cognitive Services Contributor](#cognitive-services-contributor) | Lets you create, read, update, delete and manage keys of Cognitive Services. | 25fbc0a9-bd7c-42a3-aa1a-3b75d497ee68 |
The following table provides a brief description of each built-in role. Click th
> | [Logic App Contributor](#logic-app-contributor) | Lets you manage logic apps, but not change access to them. | 87a39d53-fc1b-424a-814c-f7e04687dc9e |
> | [Logic App Operator](#logic-app-operator) | Lets you read, enable, and disable logic apps, but not edit or update them. | 515c2055-d9d4-4321-b1b9-bd0c9a0f79fe |
> | **Identity** | | |
+> | [Domain Services Contributor](#domain-services-contributor) | Can manage Azure AD Domain Services and related network configurations | eeaeda52-9324-47f6-8069-5d5bade478b2 |
+> | [Domain Services Reader](#domain-services-reader) | Can view Azure AD Domain Services and related network configurations | 361898ef-9ed1-48c2-849c-a832951106bb |
> | [Managed Identity Contributor](#managed-identity-contributor) | Create, Read, Update, and Delete User Assigned Identity | e40ec5ca-96e0-45a2-b4ff-59039f2c2b59 |
> | [Managed Identity Operator](#managed-identity-operator) | Read and Assign User Assigned Identity | f1a07417-d97a-45cb-824c-7a7467783830 |
> | **Security** | | |
Read and list Schema Registry groups and schemas.
}
```
+### Stream Analytics Query Tester
+
+Lets you perform query testing without creating a stream analytics job first
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.StreamAnalytics](resource-provider-operations.md#microsoftstreamanalytics)/locations/TestQuery/action | Test Query for Stream Analytics Resource Provider |
+> | [Microsoft.StreamAnalytics](resource-provider-operations.md#microsoftstreamanalytics)/locations/OperationResults/read | Read Stream Analytics Operation Result |
+> | [Microsoft.StreamAnalytics](resource-provider-operations.md#microsoftstreamanalytics)/locations/SampleInput/action | Sample Input for Stream Analytics Resource Provider |
+> | [Microsoft.StreamAnalytics](resource-provider-operations.md#microsoftstreamanalytics)/locations/CompileQuery/action | Compile Query for Stream Analytics Resource Provider |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Lets you perform query testing without creating a stream analytics job first",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/1ec5b3c1-b17e-4e25-8312-2acb3c3c5abf",
+ "name": "1ec5b3c1-b17e-4e25-8312-2acb3c3c5abf",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.StreamAnalytics/locations/TestQuery/action",
+ "Microsoft.StreamAnalytics/locations/OperationResults/read",
+ "Microsoft.StreamAnalytics/locations/SampleInput/action",
+ "Microsoft.StreamAnalytics/locations/CompileQuery/action"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Stream Analytics Query Tester",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
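A role definition like the one above takes effect only when assigned to a principal at a scope. A hedged Azure CLI sketch (the assignee and subscription ID are placeholders; the role can be referenced by its name or by the GUID shown in the definition):

```shell
# Placeholders: substitute a real user or service principal and subscription ID.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Stream Analytics Query Tester" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```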
+ ## AI + machine learning
Lets you read, enable, and disable logic apps, but not edit or update them. [Lea
## Identity
+### Domain Services Contributor
+
+Can manage Azure AD Domain Services and related network configurations [Learn more](../active-directory-domain-services/tutorial-create-instance.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/read | Gets or lists deployments. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/write | Creates or updates a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/delete | Deletes a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/cancel/action | Cancels a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/validate/action | Validates a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/whatIf/action | Predicts template deployment changes. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/exportTemplate/action | Export template for a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operationstatuses/read | Gets or lists deployment operation statuses. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Write | Create or update a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Delete | Delete a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Read | Read a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Activated/Action | Classic metric alert activated |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Resolved/Action | Classic metric alert resolved |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Throttled/Action | Classic metric alert rule throttled |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Incidents/Read | Read a classic metric alert incident |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/register/action | Register Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/unregister/action | Unregister Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/read | Read Domain Services |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/write | Write Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/delete | Delete Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the Domain Service resource |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/read | Read Ou Containers |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/write | Write Ou Container |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/delete | Delete Ou Container |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/register/action | Registers the subscription |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/unregister/action | Unregisters the subscription |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/read | Get the virtual network definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/write | Creates a virtual network or updates an existing virtual network |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/delete | Deletes a virtual network |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/peer/action | Peers a virtual network with another virtual network |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/join/action | Joins a virtual network. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/read | Gets a virtual network subnet definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/write | Creates a virtual network subnet or updates an existing virtual network subnet |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/delete | Deletes a virtual network subnet |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/join/action | Joins a virtual network. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/virtualNetworkPeerings/read | Gets a virtual network peering definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/virtualNetworkPeerings/write | Creates a virtual network peering or updates an existing virtual network peering |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/virtualNetworkPeerings/delete | Deletes a virtual network peering |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings of Virtual Network |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/providers/Microsoft.Insights/metricDefinitions/read | Gets available metrics for the PingMesh |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/azureFirewalls/read | Get Azure Firewall |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/ddosProtectionPlans/read | Gets a DDoS Protection Plan |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/ddosProtectionPlans/join/action | Joins a DDoS Protection Plan. Not alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/read | Gets a load balancer definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/delete | Deletes a load balancer |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/*/read | |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/backendAddressPools/join/action | Joins a load balancer backend address pool. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/inboundNatRules/join/action | Joins a load balancer inbound nat rule. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/natGateways/join/action | Joins a NAT Gateway |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/read | Gets a network interface definition. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/write | Creates a network interface or updates an existing network interface. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/delete | Deletes a network interface |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/join/action | Joins a Virtual Machine to a network interface. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/defaultSecurityRules/read | Gets a default security rule definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/read | Gets a network security group definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/write | Creates a network security group or updates an existing network security group |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/delete | Deletes a network security group |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/join/action | Joins a network security group. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/read | Gets a security rule definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/write | Creates a security rule or updates an existing security rule |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/delete | Deletes a security rule |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/read | Gets a route table definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/write | Creates a route table or Updates an existing route table |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/delete | Deletes a route table definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/join/action | Joins a route table. Not Alertable. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/routes/read | Gets a route definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/routes/write | Creates a route or Updates an existing route |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/routes/delete | Deletes a route definition |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can manage Azure AD Domain Services and related network configurations",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/eeaeda52-9324-47f6-8069-5d5bade478b2",
+ "name": "eeaeda52-9324-47f6-8069-5d5bade478b2",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/read",
+ "Microsoft.Resources/deployments/write",
+ "Microsoft.Resources/deployments/delete",
+ "Microsoft.Resources/deployments/cancel/action",
+ "Microsoft.Resources/deployments/validate/action",
+ "Microsoft.Resources/deployments/whatIf/action",
+ "Microsoft.Resources/deployments/exportTemplate/action",
+ "Microsoft.Resources/deployments/operations/read",
+ "Microsoft.Resources/deployments/operationstatuses/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Insights/AlertRules/Write",
+ "Microsoft.Insights/AlertRules/Delete",
+ "Microsoft.Insights/AlertRules/Read",
+ "Microsoft.Insights/AlertRules/Activated/Action",
+ "Microsoft.Insights/AlertRules/Resolved/Action",
+ "Microsoft.Insights/AlertRules/Throttled/Action",
+ "Microsoft.Insights/AlertRules/Incidents/Read",
+ "Microsoft.AAD/register/action",
+ "Microsoft.AAD/unregister/action",
+ "Microsoft.AAD/domainServices/read",
+ "Microsoft.AAD/domainServices/write",
+ "Microsoft.AAD/domainServices/delete",
+ "Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/read",
+ "Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/write",
+ "Microsoft.AAD/domainServices/providers/Microsoft.Insights/logDefinitions/read",
+ "Microsoft.AAD/domainServices/oucontainer/read",
+ "Microsoft.AAD/domainServices/oucontainer/write",
+ "Microsoft.AAD/domainServices/oucontainer/delete",
+ "Microsoft.Network/register/action",
+ "Microsoft.Network/unregister/action",
+ "Microsoft.Network/virtualNetworks/read",
+ "Microsoft.Network/virtualNetworks/write",
+ "Microsoft.Network/virtualNetworks/delete",
+ "Microsoft.Network/virtualNetworks/peer/action",
+ "Microsoft.Network/virtualNetworks/join/action",
+ "Microsoft.Network/virtualNetworks/subnets/read",
+ "Microsoft.Network/virtualNetworks/subnets/write",
+ "Microsoft.Network/virtualNetworks/subnets/delete",
+ "Microsoft.Network/virtualNetworks/subnets/join/action",
+ "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read",
+ "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write",
+ "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/delete",
+ "Microsoft.Network/virtualNetworks/providers/Microsoft.Insights/diagnosticSettings/read",
+ "Microsoft.Network/virtualNetworks/providers/Microsoft.Insights/metricDefinitions/read",
+ "Microsoft.Network/azureFirewalls/read",
+ "Microsoft.Network/ddosProtectionPlans/read",
+ "Microsoft.Network/ddosProtectionPlans/join/action",
+ "Microsoft.Network/loadBalancers/read",
+ "Microsoft.Network/loadBalancers/delete",
+ "Microsoft.Network/loadBalancers/*/read",
+ "Microsoft.Network/loadBalancers/backendAddressPools/join/action",
+ "Microsoft.Network/loadBalancers/inboundNatRules/join/action",
+ "Microsoft.Network/natGateways/join/action",
+ "Microsoft.Network/networkInterfaces/read",
+ "Microsoft.Network/networkInterfaces/write",
+ "Microsoft.Network/networkInterfaces/delete",
+ "Microsoft.Network/networkInterfaces/join/action",
+ "Microsoft.Network/networkSecurityGroups/defaultSecurityRules/read",
+ "Microsoft.Network/networkSecurityGroups/read",
+ "Microsoft.Network/networkSecurityGroups/write",
+ "Microsoft.Network/networkSecurityGroups/delete",
+ "Microsoft.Network/networkSecurityGroups/join/action",
+ "Microsoft.Network/networkSecurityGroups/securityRules/read",
+ "Microsoft.Network/networkSecurityGroups/securityRules/write",
+ "Microsoft.Network/networkSecurityGroups/securityRules/delete",
+ "Microsoft.Network/routeTables/read",
+ "Microsoft.Network/routeTables/write",
+ "Microsoft.Network/routeTables/delete",
+ "Microsoft.Network/routeTables/join/action",
+ "Microsoft.Network/routeTables/routes/read",
+ "Microsoft.Network/routeTables/routes/write",
+ "Microsoft.Network/routeTables/routes/delete"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Domain Services Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Domain Services Reader
+
+Can view Azure AD Domain Services and related network configurations
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/read | Gets or lists deployments. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operationstatuses/read | Gets or lists deployment operation statuses. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Read | Read a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Incidents/Read | Read a classic metric alert incident |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/read | Read Domain Services |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/read | Read Ou Containers |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/OutboundNetworkDependenciesEndpoints/read | Get the network endpoints of all outbound dependencies |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for Domain Service |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/read | Get the virtual network definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/read | Gets a virtual network subnet definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/virtualNetworkPeerings/read | Gets a virtual network peering definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings of Virtual Network |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/providers/Microsoft.Insights/metricDefinitions/read | Gets available metrics for the PingMesh |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/azureFirewalls/read | Get Azure Firewall |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/ddosProtectionPlans/read | Gets a DDoS Protection Plan |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/read | Gets a load balancer definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/*/read | |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/natGateways/read | Gets a Nat Gateway Definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/read | Gets a network interface definition. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/defaultSecurityRules/read | Gets a default security rule definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/read | Gets a network security group definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/read | Gets a security rule definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/read | Gets a route table definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/routeTables/routes/read | Gets a route definition |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can view Azure AD Domain Services and related network configurations",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/361898ef-9ed1-48c2-849c-a832951106bb",
+ "name": "361898ef-9ed1-48c2-849c-a832951106bb",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/read",
+ "Microsoft.Resources/deployments/operations/read",
+ "Microsoft.Resources/deployments/operationstatuses/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Insights/AlertRules/Read",
+ "Microsoft.Insights/AlertRules/Incidents/Read",
+ "Microsoft.AAD/domainServices/read",
+ "Microsoft.AAD/domainServices/oucontainer/read",
+ "Microsoft.AAD/domainServices/OutboundNetworkDependenciesEndpoints/read",
+ "Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/read",
+ "Microsoft.AAD/domainServices/providers/Microsoft.Insights/logDefinitions/read",
+ "Microsoft.Network/virtualNetworks/read",
+ "Microsoft.Network/virtualNetworks/subnets/read",
+ "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read",
+ "Microsoft.Network/virtualNetworks/providers/Microsoft.Insights/diagnosticSettings/read",
+ "Microsoft.Network/virtualNetworks/providers/Microsoft.Insights/metricDefinitions/read",
+ "Microsoft.Network/azureFirewalls/read",
+ "Microsoft.Network/ddosProtectionPlans/read",
+ "Microsoft.Network/loadBalancers/read",
+ "Microsoft.Network/loadBalancers/*/read",
+ "Microsoft.Network/natGateways/read",
+ "Microsoft.Network/networkInterfaces/read",
+ "Microsoft.Network/networkSecurityGroups/defaultSecurityRules/read",
+ "Microsoft.Network/networkSecurityGroups/read",
+ "Microsoft.Network/networkSecurityGroups/securityRules/read",
+ "Microsoft.Network/routeTables/read",
+ "Microsoft.Network/routeTables/routes/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Domain Services Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
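The role definition above grants read access through a mix of exact action strings and wildcard entries such as `Microsoft.Authorization/*/read`. As a rough illustration only (not an Azure SDK API, and only an approximation of Azure RBAC's `*` semantics), matching a concrete operation against a role's action list can be sketched with Python's `fnmatch`:

```python
# Sketch: case-insensitive wildcard matching of an operation string against
# a role definition's "actions" list. fnmatch's '*' (which matches any
# characters, including '/') approximates Azure RBAC wildcard behavior.
import fnmatch

def action_permitted(role_actions, operation):
    """Return True if any action pattern in the role matches the operation."""
    op = operation.lower()
    return any(fnmatch.fnmatch(op, pattern.lower()) for pattern in role_actions)

reader_actions = [
    "Microsoft.Authorization/*/read",
    "Microsoft.AAD/domainServices/read",
    "Microsoft.Network/loadBalancers/*/read",
]

print(action_permitted(reader_actions, "Microsoft.Authorization/roleAssignments/read"))  # True
print(action_permitted(reader_actions, "Microsoft.AAD/domainServices/write"))            # False
```

This is why the reader role's table lists `Microsoft.Authorization/*/read` once but effectively covers every read operation under that provider.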
+
### Managed Identity Contributor

Create, Read, Update, and Delete User Assigned Identity [Learn more](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md)
Lets you manage managed HSM pools, but not access to them. [Learn more](../key-v
> | Actions | Description |
> | | |
> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/managedHSMs/* | |
+> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/deletedManagedHsms/read | View the properties of a deleted managed hsm |
+> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/locations/deletedManagedHsms/read | View the properties of a deleted managed hsm |
+> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/locations/deletedManagedHsms/purge/action | Purge a soft deleted managed hsm |
+> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/locations/managedHsmOperationResults/read | Check the result of a long run operation |
> | **NotActions** | |
> | *none* | |
> | **DataActions** | |
Lets you manage managed HSM pools, but not access to them. [Learn more](../key-v
"permissions": [
{
"actions": [
- "Microsoft.KeyVault/managedHSMs/*"
+ "Microsoft.KeyVault/managedHSMs/*",
+ "Microsoft.KeyVault/deletedManagedHsms/read",
+ "Microsoft.KeyVault/locations/deletedManagedHsms/read",
+ "Microsoft.KeyVault/locations/deletedManagedHsms/purge/action",
+ "Microsoft.KeyVault/locations/managedHsmOperationResults/read"
],
"notActions": [],
"dataActions": [],
Can read all monitoring data and edit monitoring settings. See also [Get started
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/activityLogAlerts/* | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/* | Create and manage a classic metric alert |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/components/* | Create and manage Insights components |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/createNotifications/* | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/dataCollectionEndpoints/* | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/dataCollectionRules/* | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/dataCollectionRuleAssociations/* | |
Can read all monitoring data and edit monitoring settings. See also [Get started
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/metricalerts/* | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/MetricDefinitions/* | Read metric definitions (list of available metric types for a resource). |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/Metrics/* | Read metrics for a resource. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/notificationStatus/* | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/Register/Action | Register the Microsoft Insights provider |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/scheduledqueryrules/* | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/webtests/* | Create and manage Insights web tests |
Can read all monitoring data and edit monitoring settings. See also [Get started
> | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/smartDetectorAlertRules/* | |
> | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/actionRules/* | |
> | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/smartGroups/* | |
+> | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/migrateFromSmartDetection/* | |
> | **NotActions** | |
> | *none* | |
> | **DataActions** | |
Can read all monitoring data and edit monitoring settings. See also [Get started
"Microsoft.Insights/activityLogAlerts/*",
"Microsoft.Insights/AlertRules/*",
"Microsoft.Insights/components/*",
+ "Microsoft.Insights/createNotifications/*",
"Microsoft.Insights/dataCollectionEndpoints/*",
"Microsoft.Insights/dataCollectionRules/*",
"Microsoft.Insights/dataCollectionRuleAssociations/*",
Can read all monitoring data and edit monitoring settings. See also [Get started
"Microsoft.Insights/metricalerts/*",
"Microsoft.Insights/MetricDefinitions/*",
"Microsoft.Insights/Metrics/*",
+ "Microsoft.Insights/notificationStatus/*",
"Microsoft.Insights/Register/Action",
"Microsoft.Insights/scheduledqueryrules/*",
"Microsoft.Insights/webtests/*",
Can read all monitoring data and edit monitoring settings. See also [Get started
"Microsoft.WorkloadMonitor/monitors/*",
"Microsoft.AlertsManagement/smartDetectorAlertRules/*",
"Microsoft.AlertsManagement/actionRules/*",
- "Microsoft.AlertsManagement/smartGroups/*"
+ "Microsoft.AlertsManagement/smartGroups/*",
+ "Microsoft.AlertsManagement/migrateFromSmartDetection/*"
],
"notActions": [],
"dataActions": [],
role-based-access-control Conditions Role Assignments Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-cli.md
To add a role assignment condition, use [az role assignment create](/cli/azure/r
The following example shows how to assign the [Storage Blob Data Reader](built-in-roles.md#storage-blob-data-reader) role with a condition. The condition checks whether the container name equals 'blobs-example-container'.

```azurecli
-az role assignment create --role "Storage Blob Data Reader" --assignee "user1@contoso.com" --resource-group {resourceGroup} \
+az role assignment create --role "Storage Blob Data Reader" --scope /subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName --assignee "user1@contoso.com" \
--description "Read access if container name equals blobs-example-container" \
--condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'))" \
--condition-version "2.0"
```
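The condition string combines an `ActionMatches` check with a `StringEquals` attribute comparison: access is granted unless the request is a blob read on a container other than 'blobs-example-container'. As a hedged sketch (the function and its arguments are illustrative, not part of any Azure tooling), the boolean logic can be modeled like this:

```python
# Sketch of the ABAC condition's boolean structure:
# (!(ActionMatches{...blobs/read})) OR (container name StringEquals 'blobs-example-container')
BLOB_READ = "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"

def condition_allows(action, container_name):
    """Model the condition: non-blob-read actions pass unconditionally;
    blob reads pass only for the named container."""
    is_blob_read = (action == BLOB_READ)
    return (not is_blob_read) or (container_name == "blobs-example-container")

print(condition_allows(BLOB_READ, "blobs-example-container"))  # True: read allowed here
print(condition_allows(BLOB_READ, "other-container"))          # False: read denied elsewhere
```

Note the leading negation: the `OR` with `!(ActionMatches{...})` makes the container check apply only to the targeted read action, so the role's other permitted operations are unaffected.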
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC
description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 02/15/2022
Last updated : 03/08/2022
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 02/14/2022
Last updated : 03/10/2022
Azure service: [Virtual Machines](../virtual-machines/index.yml), [Virtual Machi
> | Microsoft.Compute/capacityReservationGroups/read | Get the properties of a capacity reservation group |
> | Microsoft.Compute/capacityReservationGroups/write | Creates a new capacity reservation group or updates an existing capacity reservation group |
> | Microsoft.Compute/capacityReservationGroups/delete | Deletes the capacity reservation group |
+> | Microsoft.Compute/capacityReservationGroups/deploy/action | Deploy a new VM/VMSS using Capacity Reservation Group |
> | Microsoft.Compute/capacityReservationGroups/capacityReservations/read | Get the properties of a capacity reservation |
> | Microsoft.Compute/capacityReservationGroups/capacityReservations/write | Creates a new capacity reservation or updates an existing capacity reservation |
> | Microsoft.Compute/capacityReservationGroups/capacityReservations/delete | Deletes the capacity reservation |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/virtualHubs/read | Get a Virtual Hub |
> | Microsoft.Network/virtualHubs/write | Create or update a Virtual Hub |
> | Microsoft.Network/virtualHubs/effectiveRoutes/action | Gets effective route configured on Virtual Hub |
-> | Microsoft.Network/virtualHubs/migrateRouteService/action | Migrate the route service of a virtual hub from traditional CloudService To Virtual Machine Scale Set |
+> | Microsoft.Network/virtualHubs/migrateRouteService/action | Validate or execute the hub router migration |
> | Microsoft.Network/virtualHubs/inboundRoutes/action | Gets routes learnt from a virtual wan connection |
> | Microsoft.Network/virtualHubs/outboundRoutes/action | Get Routes advertised by a virtual wan connection |
> | Microsoft.Network/virtualHubs/bgpConnections/read | Gets a Hub Bgp Connection child resource of Virtual Hub |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/virtualNetworks/peer/action | Peers a virtual network with another virtual network |
> | Microsoft.Network/virtualNetworks/join/action | Joins a virtual network. Not Alertable. |
> | Microsoft.Network/virtualNetworks/BastionHosts/action | Gets Bastion Host references in a Virtual Network. |
+> | Microsoft.Network/virtualNetworks/listNetworkManagerEffectiveConnectivityConfigurations/action | List Network Manager Effective Connectivity Configurations |
+> | Microsoft.Network/virtualNetworks/listNetworkManagerEffectiveSecurityAdminRules/action | List Network Manager Effective Security Admin Rules |
> | Microsoft.Network/virtualNetworks/bastionHosts/default/action | Gets Bastion Host references in a Virtual Network. |
> | Microsoft.Network/virtualNetworks/checkIpAddressAvailability/read | Check if Ip Address is available at the specified virtual network |
> | Microsoft.Network/virtualNetworks/customViews/read | Get definition of a custom view of Virtual Network |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/vpnGateways/write | Puts a VpnGateway. |
> | Microsoft.Network/vpnGateways/delete | Deletes a VpnGateway. |
> | microsoft.network/vpngateways/reset/action | Resets a VpnGateway |
+> | microsoft.network/vpngateways/getbgppeerstatus/action | Gets bgp peer status of a VpnGateway |
+> | microsoft.network/vpngateways/getlearnedroutes/action | Gets learned routes of a VpnGateway |
+> | microsoft.network/vpngateways/getadvertisedroutes/action | Gets advertised routes of a VpnGateway |
> | microsoft.network/vpngateways/startpacketcapture/action | Start Vpn gateway Packet Capture with according resource |
> | microsoft.network/vpngateways/stoppacketcapture/action | Stop Vpn gateway Packet Capture with sasURL |
> | microsoft.network/vpngateways/listvpnconnectionshealth/action | Gets connection health for all or a subset of connections on a VpnGateway |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/locations/checkinventory/action | Checks ReservedCapacity inventory. |
> | Microsoft.NetApp/locations/operationresults/read | Reads an operation result resource. |
> | Microsoft.NetApp/locations/quotaLimits/read | Reads a Quotalimit resource type. |
+> | Microsoft.NetApp/locations/RegionInfo/read | Reads a regionInfo resource. |
> | Microsoft.NetApp/netAppAccounts/read | Reads an account resource. |
> | Microsoft.NetApp/netAppAccounts/write | Writes an account resource. |
> | Microsoft.NetApp/netAppAccounts/delete | Deletes an account resource. |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/subvolumes/write | Write a subvolume resource. |
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/subvolumes/delete | |
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/subvolumes/GetMetadata/action | Read subvolume metadata resource. |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/volumeQuotaRules/read | Reads a Volume quota rule resource. |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/volumeQuotaRules/write | Writes Volume quota rule resource. |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/volumeQuotaRules/delete | Deletes a Volume quota rule resource. |
> | Microsoft.NetApp/netAppAccounts/snapshotPolicies/read | Reads a snapshot policy resource. |
> | Microsoft.NetApp/netAppAccounts/snapshotPolicies/write | Writes a snapshot policy resource. |
> | Microsoft.NetApp/netAppAccounts/snapshotPolicies/delete | Deletes a snapshot policy resource. |
Azure service: [Azure Spring Cloud](../spring-cloud/index.yml)
> | Microsoft.AppPlatform/Spring/stop/action | Stop a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/start/action | Start a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/configServers/action | Validate the config server settings for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/read | Get the API portal for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/write | Create or update the API portal for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/delete | Delete the API portal for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/validateDomain/action | Validate the API portal domain for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/domains/read | Get the API portal domain for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/domains/write | Create or update the API portal domain for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/domains/delete | Delete the API portal domain for a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/apps/write | Create or update the application for a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/apps/delete | Delete the application for a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/apps/read | Get the applications for a specific Azure Spring Cloud service instance |
> | Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action | Get the resource upload URL of a specific Microsoft Azure Spring Cloud application |
> | Microsoft.AppPlatform/Spring/apps/validateDomain/action | Validate the custom domain for a specific application |
+> | Microsoft.AppPlatform/Spring/apps/setActiveDeployments/action | Set active deployments for a specific Microsoft Azure Spring Cl