Updates from: 04/27/2022 01:06:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Social Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/social-transformations.md
# Social accounts claims transformations
-In Azure Active Directory B2C (Azure AD B2C), social account identities are stored in a `userIdentities` attribute of a **alternativeSecurityIdCollection** claim type. Each item in the **alternativeSecurityIdCollection** specifies the issuer (identity provider name, such as facebook.com) and the `issuerUserId`, which is a unique user identifier for the issuer.
+In Azure Active Directory B2C (Azure AD B2C), social account identities are stored in an `alternativeSecurityIds` attribute of an **alternativeSecurityIdCollection** claim type. Each item in the **alternativeSecurityIdCollection** specifies the issuer (identity provider name, such as facebook.com) and the `issuerUserId`, which is a unique user identifier for the issuer.
```json
-"userIdentities": [{
+"alternativeSecurityIds": [{
"issuer": "google.com", "issuerUserId": "MTA4MTQ2MDgyOTI3MDUyNTYzMjcw" },
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
For the next test scenario, configure the authentication policy where the **poli
- The **Additional Details** tab shows **User certificate subject name** as the attribute name but it is actually "User certificate binding identifier". It is the value of the certificate field that username binding is configured to use.
+- There is a double prompt on iOS because iOS only supports pushing certificates to device storage. When an organization pushes user certificates to an iOS device through Mobile Device Management (MDM) and a user accesses first-party or native apps, those apps have no access to device storage; only Safari can access device storage.
+
+ When an iOS client sees a client TLS challenge and the user clicks **Sign in with certificate**, the iOS client knows it cannot handle the challenge and sends a completely new authorization request using the Safari browser. The user clicks **Sign in with certificate** again, at which point Safari, which has access to the certificates in device storage, completes the authentication. As a result, users must click **Sign in with certificate** twice: once in the app's WKWebView and once in Safari's system WebView.
+
+ We're aware of this UX issue and are working to provide a seamless sign-in experience on iOS.
+ ## Next steps - [Overview of Azure AD CBA](concept-certificate-based-authentication.md)
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md
This table shows support for authenticating Azure Active Directory (Azure AD) an
|::|::|::|::|::|::|::|::|::|::|::|::|::| | | USB | NFC | BLE | USB | NFC | BLE | USB | NFC | BLE | USB | NFC | BLE | | **Windows** | ![Chrome supports USB on Windows for Azure AD accounts.][y] | ![Chrome supports NFC on Windows for Azure AD accounts.][y] | ![Chrome supports BLE on Windows for Azure AD accounts.][y] | ![Edge supports USB on Windows for Azure AD accounts.][y] | ![Edge supports NFC on Windows for Azure AD accounts.][y] | ![Edge supports BLE on Windows for Azure AD accounts.][y] | ![Firefox supports USB on Windows for Azure AD accounts.][y] | ![Firefox supports NFC on Windows for Azure AD accounts.][y] | ![Firefox supports BLE on Windows for Azure AD accounts.][y] | ![Safari supports USB on Windows for Azure AD accounts.][n] | ![Safari supports NFC on Windows for Azure AD accounts.][n] | ![Safari supports BLE on Windows for Azure AD accounts.][n] |
-| **macOS** | ![Chrome supports USB on macOS for Azure AD accounts.][y] | ![Chrome supports NFC on macOS for Azure AD accounts.][n] | ![Chrome supports BLE on macOS for Azure AD accounts.][n] | ![Edge supports USB on macOS for Azure AD accounts.][y] | ![Edge supports NFC on macOS for Azure AD accounts.][n] | ![Edge supports BLE on macOS for Azure AD accounts.][n] | ![Firefox supports USB on macOS for Azure AD accounts.][y] | ![Firefox supports NFC on macOS for Azure AD accounts.][n] | ![Firefox supports BLE on macOS for Azure AD accounts.][n] | ![Safari supports USB on macOS for Azure AD accounts.][y] | ![Safari supports NFC on macOS for Azure AD accounts.][n] | ![Safari supports BLE on macOS for Azure AD accounts.][n] |
+| **macOS** | ![Chrome supports USB on macOS for Azure AD accounts.][y] | ![Chrome supports NFC on macOS for Azure AD accounts.][n] | ![Chrome supports BLE on macOS for Azure AD accounts.][n] | ![Edge supports USB on macOS for Azure AD accounts.][y] | ![Edge supports NFC on macOS for Azure AD accounts.][n] | ![Edge supports BLE on macOS for Azure AD accounts.][n] | ![Firefox supports USB on macOS for Azure AD accounts.][y] | ![Firefox supports NFC on macOS for Azure AD accounts.][n] | ![Firefox supports BLE on macOS for Azure AD accounts.][n] | ![Safari supports USB on macOS for Azure AD accounts.][n] | ![Safari supports NFC on macOS for Azure AD accounts.][n] | ![Safari supports BLE on macOS for Azure AD accounts.][n] |
| **ChromeOS** | ![Chrome supports USB on ChromeOS for Azure AD accounts.][y] | ![Chrome supports NFC on ChromeOS for Azure AD accounts.][n] | ![Chrome supports BLE on ChromeOS for Azure AD accounts.][n] | ![Edge supports USB on ChromeOS for Azure AD accounts.][n] | ![Edge supports NFC on ChromeOS for Azure AD accounts.][n] | ![Edge supports BLE on ChromeOS for Azure AD accounts.][n] | ![Firefox supports USB on ChromeOS for Azure AD accounts.][n] | ![Firefox supports NFC on ChromeOS for Azure AD accounts.][n] | ![Firefox supports BLE on ChromeOS for Azure AD accounts.][n] | ![Safari supports USB on ChromeOS for Azure AD accounts.][n] | ![Safari supports NFC on ChromeOS for Azure AD accounts.][n] | ![Safari supports BLE on ChromeOS for Azure AD accounts.][n] | | **Linux** | ![Chrome supports USB on Linux for Azure AD accounts.][y] | ![Chrome supports NFC on Linux for Azure AD accounts.][n] | ![Chrome supports BLE on Linux for Azure AD accounts.][n] | ![Edge supports USB on Linux for Azure AD accounts.][n] | ![Edge supports NFC on Linux for Azure AD accounts.][n] | ![Edge supports BLE on Linux for Azure AD accounts.][n] | ![Firefox supports USB on Linux for Azure AD accounts.][n] | ![Firefox supports NFC on Linux for Azure AD accounts.][n] | ![Firefox supports BLE on Linux for Azure AD accounts.][n] | ![Safari supports USB on Linux for Azure AD accounts.][n] | ![Safari supports NFC on Linux for Azure AD accounts.][n] | ![Safari supports BLE on Linux for Azure AD accounts.][n] | | **iOS** | ![Chrome supports USB on iOS for Azure AD accounts.][n] | ![Chrome supports NFC on iOS for Azure AD accounts.][n] | ![Chrome supports BLE on iOS for Azure AD accounts.][n] | ![Edge supports USB on iOS for Azure AD accounts.][n] | ![Edge supports NFC on iOS for Azure AD accounts.][n] | ![Edge supports BLE on iOS for Azure AD accounts.][n] | ![Firefox supports USB on iOS for Azure AD accounts.][n] | ![Firefox supports NFC on iOS for Azure AD accounts.][n] | ![Firefox supports BLE on iOS for Azure AD accounts.][n] | ![Safari supports USB on iOS for Azure AD accounts.][n] | ![Safari supports NFC on iOS for Azure AD accounts.][n] | ![Safari supports BLE on iOS for Azure AD accounts.][n] |
active-directory How To Migrate Mfa Server To Azure Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-with-federation.md
description: Step-by-step guidance to move from Azure MFA Server on-premises to
Previously updated : 04/07/2022 Last updated : 04/21/2022
This section covers final steps before migrating user phone numbers.
### Set federatedIdpMfaBehavior to enforceMfaByFederatedIdp
-For federated domains, MFA may be enforced by Azure AD Conditional Access or by the on-premises federation provider. Each federated domain has a Microsoft Graph PowerShell security setting named **federatedIdpMfaBehavior**. You can set **federatedIdpMfaBehavior** to `enforceMfaByFederatedIdp` so Azure AD accepts MFA that's performed by the federated identity provider. If the federated identity provider didn't perform MFA, Azure AD redirects the request to the federated identity provider to perform MFA. For more information, see [federatedIdpMfaBehavior](/graph/api/resources/federatedIdpMfaBehavior?view=graph-rest-beta&preserve-view=true).
+For federated domains, MFA may be enforced by Azure AD Conditional Access or by the on-premises federation provider. Each federated domain has a Microsoft Graph PowerShell security setting named **federatedIdpMfaBehavior**. You can set **federatedIdpMfaBehavior** to `enforceMfaByFederatedIdp` so Azure AD accepts MFA that's performed by the federated identity provider. If the federated identity provider didn't perform MFA, Azure AD redirects the request to the federated identity provider to perform MFA. For more information, see [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values).
>[!NOTE] > The **federatedIdpMfaBehavior** setting is an evolved version of the **SupportsMfa** property of the [Set-MsolDomainFederationSettings MSOnline v1 PowerShell cmdlet](/powershell/module/msonline/set-msoldomainfederationsettings).
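For reference, here's a minimal Microsoft Graph PowerShell sketch of setting this value. It assumes the Microsoft.Graph module is installed, uses `contoso.com` as a placeholder federated domain, and targets the beta **internalDomainFederation** resource that the linked article describes:

```powershell
# Requires the Microsoft.Graph module and the Domain.ReadWrite.All permission.
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

$domainId = "contoso.com"   # placeholder: your federated domain

# Look up the domain's federation configuration.
$config = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/domains/$domainId/federationConfiguration"

# Accept MFA performed by the federated identity provider.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/domains/$domainId/federationConfiguration/$($config.value[0].id)" `
    -Body @{ federatedIdpMfaBehavior = "enforceMfaByFederatedIdp" }
```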
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
To ensure that the token size doesn't exceed HTTP header size limits, Azure AD l
```JSON {
- ...
- "_claim_names": {
- "groups": "src1"
+ ...
+ "_claim_names": {
+ "groups": "src1"
},
- {
- "_claim_sources": {
- "src1": {
- "endpoint":"[Url to get this user's group membership from]"
- }
- }
- }
- ...
+ "_claim_sources": {
+ "src1": {
+ "endpoint": "[Url to get this user's group membership from]"
+ }
+ }
+ ...
} ```
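When your API receives a token containing this overage claim, it must resolve the user's group membership itself, either by calling the URL in `_claim_sources` or with an equivalent Microsoft Graph query. A minimal Microsoft Graph PowerShell sketch, using a placeholder user object ID taken from the token's `oid` claim:

```powershell
# Requires the Microsoft.Graph module and the GroupMember.Read.All permission.
Connect-MgGraph -Scopes "GroupMember.Read.All"

$userId = "00000000-0000-0000-0000-000000000000"   # placeholder: the token's 'oid' claim

# Returns the IDs of all groups and directory roles the user belongs to.
$membership = Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/users/$userId/getMemberObjects" `
    -Body @{ securityEnabledOnly = $false }
$membership.value
```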
Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md)
## Next steps * Learn about [`id_tokens` in Azure AD](id-tokens.md).
-* Learn about permission and consent ( [v1.0](../azuread-dev/v1-permissions-consent.md), [v2.0](v2-permissions-and-consent.md)).
+* Learn about permission and consent ( [v1.0](../azuread-dev/v1-permissions-consent.md), [v2.0](v2-permissions-and-consent.md)).
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md
Previously updated : 04/11/2022 Last updated : 04/26/2022
External collaboration settings let you specify what roles in your organization
For B2B collaboration with other Azure AD organizations, you should also review your [cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) to configure your inbound and outbound B2B collaboration and to scope access to specific users, groups, and applications.
-### To configure external collaboration settings:
+## Configure settings in the portal
1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account and open the **Azure Active Directory** service. 1. Select **External Identities** > **External collaboration settings**.
For B2B collaboration with other Azure AD organizations, you should also review
1. Under **Collaboration restrictions**, you can choose whether to allow or deny invitations to the domains you specify and enter specific domain names in the text boxes. For multiple domains, enter each domain on a new line. For more information, see [Allow or block invitations to B2B users from specific organizations](allow-deny-list.md). ![Screenshot showing Collaboration restrictions settings.](./media/external-collaboration-settings-configure/collaboration-restrictions.png)+
+## Configure settings with Microsoft Graph
+
+External collaboration settings can be configured by using the Microsoft Graph API:
+
+- For **Guest user access restrictions** and **Guest invite restrictions**, use the [authorizationPolicy](/graph/api/resources/authorizationpolicy?view=graph-rest-1.0&preserve-view=true) resource type (see the sketch after this list).
+- For the **Enable guest self-service sign up via user flows** setting, use the [authenticationFlowsPolicy](/graph/api/resources/authenticationflowspolicy?view=graph-rest-1.0&preserve-view=true) resource type.
+- For email one-time passcode settings (now on the **All identity providers** page in the Azure portal), use the [emailAuthenticationMethodConfiguration](/graph/api/resources/emailAuthenticationMethodConfiguration?view=graph-rest-1.0&preserve-view=true) resource type.
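For example, a minimal Microsoft Graph PowerShell sketch that reads **authorizationPolicy** and updates the guest invite restriction; `adminsAndGuestInviters` is just one of the documented values:

```powershell
# Requires the Microsoft.Graph module and the Policy.ReadWrite.Authorization permission.
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"

# Read the current guest user access and invite restrictions.
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"

# Example: only admins and users in the Guest Inviter role may send invitations.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authorizationPolicy" `
    -Body @{ allowInvitesFrom = "adminsAndGuestInviters" }
```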
+ ## Assign the Guest Inviter role to a user With the Guest Inviter role, you can give individual users the ability to invite guests without assigning them a global administrator or other admin role. Assign the Guest inviter role to individuals. Then make sure you set **Admins and users in the guest inviter role can invite** to **Yes**.
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/one-time-passcode.md
Previously updated : 03/31/2022 Last updated : 04/26/2022
Guest user teri@gmail.com is invited to Fabrikam, which doesn't have Google fede
1. Under **Email one-time passcode for guests**, select one of the following:
- - **Automatically enable email one-time passcode for guests starting \<date\>** if you don't want to enable the feature immediately and want to wait for the automatic enablement date.
+ - **Automatically enable email one-time passcode for guests starting October 2021** if you don't want to enable the feature immediately and want to wait for the automatic enablement date.
- **Enable email one-time passcode for guests effective now** to enable the feature now. - **Yes** to enable the feature now if you see a Yes/No toggle (this toggle appears if the feature was previously disabled).
Guest user teri@gmail.com is invited to Fabrikam, which doesn't have Google fede
1. Select **Save**.
+> [!NOTE]
+> Email one-time passcode settings can also be configured with the [emailAuthenticationMethodConfiguration](/graph/api/resources/emailauthenticationmethodconfiguration) resource type in the Microsoft Graph API.
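A minimal Microsoft Graph PowerShell sketch of that call, assuming the Microsoft.Graph module is installed:

```powershell
# Requires the Policy.ReadWrite.AuthenticationMethod permission.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

# Enable email one-time passcodes for guests tenant-wide.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/email" `
    -Body @{
        "@odata.type"                = "#microsoft.graph.emailAuthenticationMethodConfiguration"
        allowExternalIdToUseEmailOtp = "enabled"
    }
```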
+ ## Disable email one-time passcode We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can disable it. Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
We've begun rolling out a change to turn on the email one-time passcode feature
## Note for public preview customers
-If you've previously opted in to the email one-time passcode public preview, automatic feature enablement doesn't apply to you, so your related business processes won't be affected. Additionally, in the Azure portal, under the **Email one-time passcode for guests** properties, you won't see the option to **Automatically enable email one-time passcode for guests starting \<date\>**. Instead, you'll see the following **Yes** or **No** toggle:
+If you've previously opted in to the email one-time passcode public preview, automatic feature enablement doesn't apply to you, so your related business processes won't be affected. Additionally, in the Azure portal, under the **Email one-time passcode for guests** properties, you won't see the option to **Automatically enable email one-time passcode for guests starting October 2021**. Instead, you'll see the following **Yes** or **No** toggle:
![Email one-time passcode opted in](media/one-time-passcode/enable-email-otp-opted-in.png)
However, if you'd prefer to opt out of the feature and allow it to be automatica
![Enable Email one-time passcode opted in](media/one-time-passcode/email-otp-options.png) -- **Automatically enable email one-time passcode for guests starting \<date\>**. (Default) If the email one-time passcode feature isn't already enabled for your tenant, it will be automatically turned on. No further action is necessary if you want the feature enabled at that time. If you've already enabled or disabled the feature, this option will be unavailable.
+- **Automatically enable email one-time passcode for guests starting October 2021**. (Default) If the email one-time passcode feature isn't already enabled for your tenant, it will be automatically turned on. No further action is necessary if you want the feature enabled at that time. If you've already enabled or disabled the feature, this option will be unavailable.
- **Enable email one-time passcode for guests effective now**. Turns on the email one-time passcode feature for your tenant.
active-directory Self Service Sign Up User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-user-flow.md
Previously updated : 07/26/2021 Last updated : 04/26/2022
User attributes are values collected from the user during self-service sign-up.
Before you can add a self-service sign-up user flow to your applications, you need to enable the feature for your tenant. After it's enabled, controls become available in the user flow that let you associate the user flow with an application.
+> [!NOTE]
+> This setting can also be configured with the [authenticationFlowsPolicy](/graph/api/resources/authenticationflowspolicy?view=graph-rest-1.0&preserve-view=true) resource type in the Microsoft Graph API.
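A minimal Microsoft Graph PowerShell sketch of that configuration, assuming the Microsoft.Graph module is installed:

```powershell
# Requires the Policy.ReadWrite.AuthenticationFlows permission.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationFlows"

# Turn on self-service sign-up user flows for the tenant.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationFlowsPolicy" `
    -Body @{ selfServiceSignUp = @{ isEnabled = $true } }
```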
+ 1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Under **Azure services**, select **Azure Active Directory**. 3. Select **User settings**, and then under **External users**, select **Manage external collaboration settings**.
active-directory 2 Secure Access Current State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/2-secure-access-current-state.md
Title: Discover the current state of external collaboration with Azure Active Directory description: Learn methods to discover the current state of your collaboration. -+ Last updated 12/18/2020-+
# Discover the current state of external collaboration in your organization
-Before discovering the current state of your external collaboration, you should [determine your desired security posture](1-secure-access-posture.md). You'll considered your organization's needs for centralized vs. delegated control, and any relevant governance, regulatory, and compliance targets.
+Before discovering the current state of your external collaboration, you should [determine your desired security posture](1-secure-access-posture.md). You'll consider your organization's needs for centralized vs. delegated control, and any relevant governance, regulatory, and compliance targets.
Individuals in your organization are probably already collaborating with users from other organizations. Collaboration can be through features in productivity applications like Microsoft 365, by emailing, or by otherwise sharing resources with external users. The pillars of your governance plan will form as you discover:
To find users who are currently collaborating, review the [Microsoft 365 audit l
External users may be [Azure AD B2B users](../external-identities/what-is-b2b.md) (preferable) with partner-managed credentials, or external users with locally provisioned credentials. These users are typically (but not always) marked with a UserType of Guest. You can enumerate guest users through the [Microsoft Graph API](/graph/api/user-list?tabs=http), [PowerShell](/graph/api/user-list?tabs=http), or the [Azure portal](../enterprise-users/users-bulk-download.md).
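For example, a minimal Microsoft Graph PowerShell sketch that lists guest users; the `$select` fields are illustrative, and the `ConsistencyLevel` header enables the filtered, counted query:

```powershell
# Requires the Microsoft.Graph module and the User.Read.All permission.
Connect-MgGraph -Scopes "User.Read.All"

# Enumerate guest users with a few useful properties.
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/users?`$filter=userType eq 'Guest'&`$count=true&`$select=displayName,mail,companyName" `
    -Headers @{ ConsistencyLevel = "eventual" }
```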
+There are also tools specifically designed to identify existing Azure AD B2B collaboration, such as which external Azure AD tenants your users collaborate with and which external users access which applications. These tools include a [PowerShell module](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity) and an [Azure Monitor workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md).
+ ### Use email domains and companyName property External organizations can be determined by the domain names of external user email addresses. If consumer identity providers such as Google are supported, this may not be possible. In this case we recommend that you write the companyName attribute to clearly identify the user's external organization.
-### Use allow or deny lists
+### Use allow or blocklists
-Consider whether your organization wants to allow collaboration with only specific organizations. Alternatively, consider if your organization wants to block collaboration with specific organizations. At the tenant level, there is an [allow or deny list](../external-identities/allow-deny-list.md), which can be used to control overall B2B invitations and redemptions regardless of source (such as Microsoft Teams, Microsoft SharePoint, or the Azure portal).
+Consider whether your organization wants to allow collaboration with only specific organizations. Alternatively, consider if your organization wants to block collaboration with specific organizations. At the tenant level, there is an [allow or blocklist](../external-identities/allow-deny-list.md), which can be used to control overall B2B invitations and redemptions regardless of source (such as Microsoft Teams, Microsoft SharePoint, or the Azure portal).
If you're using entitlement management, you can also scope access packages to a subset of your partners by using the Specific connected organizations setting as shown below.
-![Screenshot of allowlisting or deny listing in creating a new access package.](media/secure-external-access/2-new-access-package.png)
+![Screenshot of allowlisting or blocklisting in creating a new access package.](media/secure-external-access/2-new-access-package.png)
## Find access being granted to external users
active-directory 3 Secure Access Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/3-secure-access-plan.md
Title: Create a security plan for external access to Azure Active Directory description: Plan the security for external access to your organization's resources.. -+ Last updated 12/18/2020-+
-# 3. Create a security plan for external access
+# Create a security plan for external access
Now that you have [determined your desired security posture security posture for external access](1-secure-access-posture.md) and [discovered your current collaboration state](2-secure-access-current-state.md), you can create an external user security and governance plan.
There are multiple ways to group resources for access.
* Microsoft Teams groups files, conversation threads, and other resources in one place. You should formulate an external access strategy for Microsoft Teams. See [Secure access to Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md).
-* Entitlement Management Access Packages enable you to create a single package of applications and other resources to which you can grant access.
+* Entitlement Management Access Packages enable you to create and delegate management of packages of Applications, Groups, Teams, SharePoint sites, and other resources to which you can grant access.
* Conditional Access policies can be applied to up to 250 applications with the same access requirements.
+* Cross Tenant Access Settings Inbound Access can define which applications groups of external users are allowed to access.
+ However you choose to manage access, you must document which applications should be grouped together. Considerations should include: * **Risk profile**. What is the risk to your business if a bad actor gained access to an application? Consider coding each application as high, medium, or low risk. Be cautious about grouping high-risk applications with low-risk ones.
For each grouping of applications and resources that you want to make accessible
This type of governance plan can and should also be completed for internal access as well.
-## Document sign-in conditions for external users.
+## Document sign-in conditions for external users
As part of your plan you must determine the sign-in requirements for your external users as they access resources. Sign-in requirements are often based on the risk profile of the resources, and the risk assessment of the users' sign-in.
Sign-in conditions are configured in [Azure AD Conditional Access](../conditiona
| High risk| Require MFA always for external users |
-Today, you can [enforce multi-factor authentication for B2B users in your tenant](../external-identities/b2b-tutorial-require-mfa.md).
+Today, you can [enforce multi-factor authentication for B2B users in your tenant](../external-identities/b2b-tutorial-require-mfa.md). You can also trust MFA performed in external tenants to satisfy your MFA requirements by using [Cross Tenant Access Settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings).
**User- and device-based sign in conditions**.
Today, you can [enforce multi-factor authentication for B2B users in your tenant
| Identity protection shows high risk| Require user to change password | | Network location| Require sign in from a specific IP address range to highly confidential projects |
-Today, to use device state as an input to a policy, the device must be registered or joined to your tenant.
+Today, to use device state as an input to a policy, the device must either be registered or joined to your tenant, or [Cross Tenant Access Settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings) must be configured to trust the device claims from the home tenant.
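A minimal Microsoft Graph PowerShell sketch of configuring that trust of external MFA and device claims in the default cross-tenant access policy; shown against the v1.0 endpoint, though a tenant still in preview may require beta:

```powershell
# Requires the Microsoft.Graph module and the Policy.ReadWrite.CrossTenantAccess permission.
Connect-MgGraph -Scopes "Policy.ReadWrite.CrossTenantAccess"

# Trust MFA and device claims from external Azure AD tenants by default.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/default" `
    -Body @{
        inboundTrust = @{
            isMfaAccepted                       = $true
            isCompliantDeviceAccepted           = $true
            isHybridAzureADJoinedDeviceAccepted = $true
        }
    }
```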
[Identity Protection risk-based policies](../conditional-access/howto-conditional-access-policy-risk.md) can be used. However, issues must be mitigated in the user's home tenant.
While your policies will be highly customized to your needs, consider the follow
* Assess access needs and take action at the end of every project with external users.
-
- ## Determine your access control methods Now that you know what you want to control access to, how those assets should be grouped for common access, and required sign-in and access review policies, you can decide on how to accomplish your plan.
See the following articles on securing external access to resources. We recommen
8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
Title: Transition to governed collaboration with Azure Active Directory B2B Collaboration description: Move to governed collaboration with Azure Ad B2B collaboration. -+ Last updated 12/18/2020-+
Once you've done those things, you're ready to move into controlled collaborat
## Control who your organization collaborates with
-You must decide whether to limit which organizations your users can collaborate with, and who within your organization can initiate collaboration. Most organizations take the approach of permitting business units to decide with whom they collaborate, and delegating the approval and oversight as needed. For example, some government, education, and financial services organizations don't permit open collaboration. You may wish to use the Azure AD features to scope collaboration, as discussed in the rest of this section.
+You can decide whether to limit which organizations your users can collaborate with (inbound and outbound), and who within your organization can invite guests. Most organizations take the approach of permitting business units to decide with whom they collaborate, and delegating the approval and oversight as needed. For example, some government, education, and financial services organizations don't permit open collaboration. You may wish to use the Azure AD features to scope collaboration, as discussed in the rest of this section.
+
+You have several options on how to control who is allowed to access your tenant. These options include:
+
+- **External Collaboration Settings** – Restrict the email domains that invitations can be sent to.
+
+- **Cross Tenant Access Settings** – Control what applications can be accessed by guests on a per user/group/tenant basis (inbound). Also controls what external Azure AD tenants and applications your own users can access (outbound).
+
+- **Connected Organizations** – Control what organizations are allowed to request Access Packages in Entitlement Management.
+
+Depending on the requirements of your organization, you may need to deploy one or more of these solutions.
### Determine collaboration partners
-First, ensure you've documented the organizations you're currently collaborating with, and the domains for those organizations' users. One collaboration partner may have multiple domains. For example, a partner may have multiple business units with separate domains.
+First, ensure you've documented the organizations you're currently collaborating with and, if necessary, the domains for those organizations' users. Note that domain-based restrictions may be impractical, since one collaboration partner may have multiple domains, and a partner could add domains at any time. For example, a partner may have multiple business units with separate domains and may add more domains as they configure more synchronization.
+
+If your users have already started using Azure AD B2B, you can discover what external Azure AD tenants your users are currently collaborating with via the sign-in logs, [PowerShell](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity), or a [built-in workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md).
Next, determine if you want to enable future collaboration with
-* any domain (most inclusive)
+- any external organization (most inclusive)
-* all domains except those explicitly denied
+- all external organizations except those explicitly denied
-* only specific domains (most restrictive)
+- only specific external organizations (most restrictive)
> [!NOTE] > The more restrictive your collaboration settings, the more likely that your users will go outside of your approved collaboration framework. We recommend enabling the broadest collaboration your security needs will allow, and closely reviewing that collaboration rather than being overly restrictive. Also note that limiting to a single domain may inadvertently prevent authorized collaboration with organizations, which have other unrelated domains for their users. For example, if doing business with an organization Contoso, the initial point of contact with Contoso might be one of their US-based employees who has an email with a ".com" domain. However if you only allow the ".com" domain you may inadvertently omit their Canadian employees who have ".ca" domain.
-There are circumstances in which you would want to only allow specific collaboration partners. For example, a university system may only want to allow their own faculty access to a resource tenant. Or a conglomerate may only want to allow specific subsidiaries to collaborate with each other to achieve compliance with a required framework.
+There are circumstances in which you would want to only allow specific collaboration partners for a subset of users. For example, a university may want to restrict student accounts from accessing external tenants but need to allow faculty to collaborate with external organizations.
-#### Using allow and deny lists
+### Using allow and blocklists with External Collaboration Settings
-You can use an allow list or deny list to [restrict invitations to B2B users](../external-identities/allow-deny-list.md) from specific organizations. You can use only an allow or a deny list, not both.
+You can use an allowlist or blocklist to [restrict invitations to B2B users](../external-identities/allow-deny-list.md) from specific organizations. You can use only an allow or a blocklist, not both.
-* An [allow list](../external-identities/allow-deny-list.md) limits collaboration to only those domains listed; all other domains are effectively on the deny list.
+* An [allowlist](../external-identities/allow-deny-list.md) limits collaboration to only those domains listed; all other domains are effectively on the blocklist.
-* A [deny list](../external-identities/allow-deny-list.md) allows collaboration with any domain not on the deny list.
+* A [blocklist](../external-identities/allow-deny-list.md) allows collaboration with any domain not on the blocklist.
+
+> [!NOTE]
+> Limiting to a predefined domain may inadvertently prevent authorized collaboration with organizations that have other domains for their users. For example, if doing business with an organization Contoso, the initial point of contact with Contoso might be one of their US-based employees who has an email with a ".com" domain. However, if you only allow the ".com" domain, you may inadvertently omit their Canadian employees who have the ".ca" domain.
> [!IMPORTANT]
-> These lists do not apply to users who are already in your directory. They also do not apply to OneDrive for Business and SharePoint allow deny lists which are separate.
+> These lists do not apply to users who are already in your directory. By default, they also do not apply to the OneDrive for Business and SharePoint allow/blocklists, which are managed separately, unless you enable the [SharePoint/OneDrive B2B integration](https://docs.microsoft.com/sharepoint/sharepoint-azureb2b-integration).
+
+Some organizations use a list of known 'bad actor' domains provided by their managed security provider for their blocklist. For example, if the organization is legitimately doing business with Contoso and using a .com domain, there may be an unrelated organization that has been using the Contoso .org domain and attempting a phishing attack to impersonate Contoso employees.
+
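A minimal sketch of configuring such a blocklist with the legacy AzureAD PowerShell module; `contoso.org` is a placeholder, and the policy definition follows the B2BManagementPolicy format in the linked allow/blocklist article:

```powershell
# Requires the legacy AzureAD PowerShell module.
Connect-AzureAD

# Blocklist: allow invitations to all domains except those listed.
$policyJson = '{"B2BManagementPolicy":{"InvitationsAllowedAndBlockedDomainsPolicy":{"AllowedDomains":[],"BlockedDomains":["contoso.org"]}}}'

New-AzureADPolicy -Definition @($policyJson) -DisplayName "B2BManagementPolicy" `
    -Type "B2BManagementPolicy" -IsOrganizationDefault $true
```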
+### Using Cross Tenant Access Settings
+
+You can control both inbound and outbound access using Cross Tenant Access Settings. In addition, you can trust MFA, compliant device, and hybrid Azure Active Directory joined device (HAADJ) claims from all or a subset of external Azure AD tenants. When you configure an organization-specific policy, it applies to the entire Azure AD tenant and covers all users from that tenant, regardless of the user's domain suffix.
-Some organizations use a list of known 'bad actor' domains provided by their managed security provider for their deny list. For example, if the organization is legitimately doing business with Contoso and using a .com domain, there may be an unrelated organization that has been using the Contoso .org domain and attempting a phishing attack to impersonate Contoso employees.
+If you wish to allow inbound access to only specific tenants (allowlist), you can set the default policy to block access and then create organization policies to granularly allow access on a per user, group, and application basis.
+
+If you wish to block access to specific tenants (blocklist), you can set the default policy as allow and then create organization policies that block access to those specific tenants.
+
+> [!NOTE]
+> Cross Tenant Access Settings Inbound Access does not prevent the invitations from being sent or redeemed. However, it does control what applications can be accessed and whether a token is issued to the guest user or not. Even if the guest can redeem an invitation, if the policy blocks access to all applications, the user will not have access to anything.
+
+If you wish to control what external organizations your users can access, you can configure outbound access policies following the same pattern as inbound access – allow/blocklist. Configure the default and organization-specific policies as desired. [Learn more about configuring inbound and outbound access policies](../external-identities/cross-tenant-access-settings-b2b-collaboration.md).
+
+> [!NOTE]
+> Cross Tenant Access Settings only applies to Azure AD tenants. If you need to control access to partners who do not use Azure AD, you must use External Collaboration Settings.
+
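A minimal Microsoft Graph PowerShell sketch of the allowlist pattern described above, using a placeholder partner tenant ID; shown against the v1.0 endpoint, though a tenant still in preview may require beta:

```powershell
# Requires the Microsoft.Graph module and the Policy.ReadWrite.CrossTenantAccess permission.
Connect-MgGraph -Scopes "Policy.ReadWrite.CrossTenantAccess"

# 1. Default policy: block inbound B2B collaboration for all users and applications.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/default" `
    -Body @{
        b2bCollaborationInbound = @{
            usersAndGroups = @{ accessType = "blocked"; targets = @(@{ target = "AllUsers"; targetType = "user" }) }
            applications   = @{ accessType = "blocked"; targets = @(@{ target = "AllApplications"; targetType = "application" }) }
        }
    }

# 2. Add a partner tenant (placeholder ID)...
$partnerTenantId = "00000000-0000-0000-0000-000000000000"
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners" `
    -Body @{ tenantId = $partnerTenantId }

# 3. ...and allow inbound B2B collaboration from that partner only.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners/$partnerTenantId" `
    -Body @{
        b2bCollaborationInbound = @{
            usersAndGroups = @{ accessType = "allowed"; targets = @(@{ target = "AllUsers"; targetType = "user" }) }
            applications   = @{ accessType = "allowed"; targets = @(@{ target = "AllApplications"; targetType = "application" }) }
        }
    }
```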
+### Using Entitlement Management and Connected Organizations
+
+If you want to use Entitlement Management to ensure guest lifecycle is governed automatically, you can create Access Packages and publish them to any external user or only to Connected Organizations. Connected Organizations support Azure AD tenants and any other domain. When you create an Access Package you can restrict access only to specific Connected Organizations. This is covered in greater detail in the next section. [Learn more about Entitlement Management](../governance/entitlement-management-overview.md).
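For example, a minimal Microsoft Graph PowerShell sketch that registers a partner Azure AD tenant as a connected organization; the display name and tenant ID are placeholders:

```powershell
# Requires the Microsoft.Graph module and the EntitlementManagement.ReadWrite.All permission.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

# Register a partner tenant so access packages can be scoped to it.
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/connectedOrganizations" `
    -Body @{
        displayName     = "Contoso"                # placeholder
        description     = "Collaboration partner"
        identitySources = @(
            @{
                "@odata.type" = "#microsoft.graph.azureActiveDirectoryTenant"
                tenantId      = "00000000-0000-0000-0000-000000000000"   # placeholder
            }
        )
    }
```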
## Control how external users gain access There are many ways to collaborate with external partners using Azure AD B2B. To begin collaboration, you invite or otherwise enable your partner to access your resources. Users can gain access by responding to:
-* Redeeming [an invitation sent via an email](../external-identities/redemption-experience.md), or [a direct link to share](../external-identities/redemption-experience.md) a resource.
+* Redeeming [an invitation sent via an email](../external-identities/redemption-experience.md), or [a direct link to share](../external-identities/redemption-experience.md) a resource. Users can gain access by:
* Requesting access [through an application](../external-identities/self-service-sign-up-overview.md) you create * Requesting access through the [My Access](../governance/entitlement-management-request-access.md) portal
-When you enable Azure AD B2B, you enable the ability to invite guest users via direct links and email invitations by default. Invitations via Email OTP and a self-service portal are currently in preview and must be enabled within the External Identities | External collaboration settings in the Azure AD portal.
+When you enable Azure AD B2B, you enable the ability to invite guest users via direct links and email invitations by default. Self Service sign-up and publishing Access Packages to the My Access portal require additional configuration.
+
+> [!NOTE]
+> Self Service sign-up does not enforce the allow/blocklist in External Collaboration Settings. Cross Tenant Access Settings will apply. You can also integrate your own allow/blocklist with Self Service sign-up using [custom API connectors](../external-identities/self-service-sign-up-add-api-connector.md).
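As a sketch of that last option, here's an Azure Functions PowerShell HTTP trigger that enforces a hypothetical domain blocklist; the response shape (`version`, `action`, `userMessage`) follows the API connector contract, while the `$blockedDomains` list is an assumption you'd replace with your own source:

```powershell
using namespace System.Net

# Azure Functions (PowerShell) HTTP trigger acting as a sign-up API connector.
param($Request, $TriggerMetadata)

$blockedDomains = @("contoso.org")                 # hypothetical blocklist
$domain = ($Request.Body.email -split "@")[-1]

$result = if ($domain -in $blockedDomains) {
    # Block the sign-up and show a message to the user.
    @{ version = "1.0.0"; action = "ShowBlockPage"; userMessage = "Sign-up from $domain is not allowed." }
}
else {
    # Let the sign-up flow continue.
    @{ version = "1.0.0"; action = "Continue" }
}

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = ($result | ConvertTo-Json)
})
```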
### Control who can invite guest users
Determine who can invite guest users to access resources.
![Screenshot of guest invitation settings.](media/secure-external-access/5-guest-invite-settings.png)
-
- ### Collect additional information about external users If you use Azure AD entitlement management, you can configure questions for external users to answer. The questions will then be shown to approvers to help them make a decision. You can configure different sets of questions for each [access package policy](../governance/entitlement-management-access-package-approval-policy.md) so that approvers can have relevant information for the access they're approving. For example, if one access package is intended for vendor access, then the requestor may be asked for their vendor contract number. A different access package intended for suppliers, may ask for their country of origin.
If you use a self-service portal, you can use [API connectors](../external-ident
There are three instances when invited guest users from a collaboration partner using Azure AD will have trouble redeeming an invitation.
-* If using an allow list and the userΓÇÖs domain isn't included in an allow list.
+* If using an allowlist and the user's domain isn't included in the allowlist.
* If the collaboration partner's home tenant has tenant restrictions that prevent collaboration with external users.
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-plans.md
Title: Deployment plans - Azure Active Directory | Microsoft Docs description: Guidance about how to deploy many Azure Active Directory capabilities. -+
Last updated 12/01/2020-+
From any of the plan pages, use your browser's Print to PDF capability to create
| Capability | Description| | -| -|
-| [Azure AD multifactor authentication](../authentication/howto-mfa-getstarted.md)| Azure AD Multi-Factor Authentication (MFA) is Microsoft's two-step verification solution. Using admin-approved authentication methods, Azure AD MFA helps safeguard access to your data and applications while meeting the demand for a simple sign in process. Watch this video on [How to configure and enforce multi-factor authentication in your tenant](https://www.youtube.com/watch?v=qNndxl7gqVM)|
+| [Azure AD multifactor authentication](../authentication/howto-mfa-getstarted.md)| Azure AD Multi-Factor Authentication (MFA) is Microsoft's two-step verification solution. Using admin-approved authentication methods, Azure AD MFA helps safeguard access to your data and applications while meeting the demand for a simple sign-in process. Watch this video on [How to configure and enforce multi-factor authentication in your tenant](https://www.youtube.com/watch?v=qNndxl7gqVM)|
| [Conditional Access](../conditional-access/plan-conditional-access.md)| With Conditional Access, you can implement automated access control decisions for who can access your cloud apps, based on conditions. | | [Self-service password reset](../authentication/howto-sspr-deployment.md)| Self-service password reset helps your users reset their passwords without administrator intervention, when and where they need to. | | [Passwordless](../authentication/howto-authentication-passwordless-deployment.md) | Implement passwordless authentication using the Microsoft Authenticator app or FIDO2 Security keys in your organization |
From any of the plan pages, use your browser's Print to PDF capability to create
| -| -| | [User provisioning](../app-provisioning/plan-auto-user-provisioning.md)| Azure AD helps you automate the creation, maintenance, and removal of user identities in cloud (SaaS) applications, such as Dropbox, Salesforce, ServiceNow, and more. | | [Cloud HR user provisioning](../app-provisioning/plan-cloud-hr-provision.md)| Cloud HR user provisioning to Active Directory creates a foundation for ongoing identity governance and enhances the quality of business processes that rely on authoritative identity data. Using this feature with your cloud HR product, such as Workday or Successfactors, you can seamlessly manage the identity lifecycle of employees and contingent workers by configuring rules that map Joiner-Mover-Leaver processes (such as New Hire, Terminate, Transfer) to IT provisioning actions (such as Create, Enable, Disable) |
+| [Azure AD B2B collaboration](../fundamentals/secure-external-access-resources.md)| Azure AD enables you to collaborate with any external user, allowing them to securely gain access to SaaS and Line-of-Business (LoB) applications. |
## Deploy governance and reporting
A pilot allows you to test with a small group before turning on a capability for
In your first wave, target IT, usability, and other appropriate users who can test and provide feedback. Use this feedback to further develop the communications and instructions you send to your users, and to give insights into the types of issues your support staff may see.
-Widening the rollout to larger groups of users should be carried out by increasing the scope of the group(s) targeted. This can be done through [dynamic group membership](../enterprise-users/groups-dynamic-membership.md), or by manually adding users to the targeted group(s).
+Widening the rollout to larger groups of users should be carried out by increasing the scope of the group(s) targeted. This can be done through [dynamic group membership](../enterprise-users/groups-dynamic-membership.md), or by manually adding users to the targeted group(s).
active-directory Secure External Access Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md
Title: Securing external collaboration in Azure Active Directory
-description: A guide for architects and IT administrators on securing external access to internal resources
+ Title: Plan an Azure Active Directory B2B collaboration deployment
+description: A guide for architects and IT administrators on securing and governing external access to internal resources
-+ Last updated 12/18/2020-+
-# Securing external collaboration in Azure Active Directory and Microsoft 365
+# Plan an Azure Active Directory B2B collaboration deployment
-Secure collaboration with external partners ensures that the right external partners have appropriate access to internal resources for the right length of time. Through a holistic governance approach, you can reduce security risks, meet compliance goals, and ensure that you know who has access.
+Secure collaboration with external partners ensures that the right external partners have appropriate access to internal resources for the right length of time. Through a holistic security and governance approach, you can reduce security risks, meet compliance goals, and ensure that you know who has access.
Ungoverned collaboration leads to a lack of clarity on ownership of access, and the possibility of sensitive resources being exposed. Moving to secure and governed collaboration can ensure that there are clear lines of ownership and accountability for external usersΓÇÖ access. This includes:
Ungoverned collaboration leads to a lack of clarity on ownership of access, and
* Ensuring that access is appropriate, reviewed, and time bound where appropriate.
-* Empowering business owners to manage collaboration within IT-created guard rails.
+* Empowering business owners to manage collaboration within IT-created guard rails via delegation.
+
+Where you have a compliance requirement, governed collaboration enables you to attest to the appropriateness of access.
+
+Traditionally, organizations have used one of two methods to collaborate:
+
+1. Creating locally managed credentials for external users, or
+2. Establishing federations with partner Identity Providers.
+
+Both methods have significant drawbacks.
+
+| Area of concern | Local credentials | Federation |
+|:--|:-|:-|
+| Security | - Access continues after the external user is terminated<br> - UserType is "member" by default, which grants too much default access | - No user-level visibility<br> - Unknown partner security posture |
+| Expense | - Password + Multi-Factor Authentication management<br> - Onboarding process<br> - Identity cleanup<br> - Overhead of running a separate directory | - Small partners cannot afford the infrastructure<br> - Small partners do not have the expertise<br> - Small partners might only have consumer email addresses (no IT) |
+| Complexity | - Partner users need to manage an additional set of credentials | - Complexity grows with each new partner<br> - Complexity grows on partners' side as well |
-If you must meet compliance frameworks, governed collaboration enables you to attest to the appropriateness of access.
Microsoft offers comprehensive suites of tools for secure external access. Azure Active Directory (Azure AD) B2B Collaboration is at the center of any external collaboration plan. Azure AD B2B can integrate with other tools in Azure AD, and tools in Microsoft 365 services, to help secure and manage your external access.
+Azure AD B2B simplifies collaboration, reduces expense, and increases security compared to traditional collaboration methods. Benefits of Azure AD B2B include:
+
+- External users cannot access resources if the home identity is disabled or deleted.
+
+- Authentication and credential management are handled by the user's home identity provider.
+
+- Resource tenant controls all access and authorization of guest users.
+
+- Can collaborate with any user who has an email address without need for partner infrastructure.
+
+- No need for IT departments to connect out-of-band to set up access/federation.
+
+- Guest user access is protected by the same enterprise-grade security as internal users.
+
+- Easy end user experience with no additional credentials needed.
+
+- Users can collaborate easily with partners without needing their IT department's involvement.
+
+- Guest default permissions in the Azure AD directory can be limited or highly restricted.
+ This document set is designed to enable you to move from ad hoc or loosely governed external collaboration to a more secure state. ## Next steps
active-directory How To Managed Identity Regional Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-managed-identity-regional-move.md
Title: Move managed identities to another region - Azure AD description: Steps involved in getting a managed identity recreated in another region - na Previously updated : 04/13/2022 Last updated : 04/26/2022
+#Customer intent: As an Azure administrator, I want to move a solution using managed identities from one Azure region to another one.
active-directory Atlassian Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* [An Atlassian Cloud tenant](https://www.atlassian.com/licensing/cloud)
+* [An Atlassian Cloud tenant](https://www.atlassian.com/licensing/cloud) with an Atlassian Access subscription.
* A user account in Atlassian Cloud with Admin permissions. > [!NOTE]
active-directory Floqast Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/floqast-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with FloQast | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with FloQast'
description: Learn how to configure single sign-on between Azure Active Directory and FloQast.
Previously updated : 08/10/2021 Last updated : 04/26/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with FloQast
+# Tutorial: Azure AD SSO integration with FloQast
In this tutorial, you'll learn how to integrate FloQast with Azure Active Directory (Azure AD). When you integrate FloQast with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:
- In the **Identifier** text box, type the URL:
- `https://go.floqast.com/`
+ a. In the **Identifier** text box, type one of the following URLs:
+
+ | Identifier |
+ | - |
+ | `https://go.floqast.com/` |
+ | `https://eu.floqast.app/` |
+
+ b. In the **Reply URL** text box, type one of the following URLs:
+
+ | Reply URL |
+ | - |
+ | `https://go.floqast.com/api/sso/saml/azure` |
+   | `https://eu.floqast.app/api/sso/saml/azure` |
+ 1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type the URL:
- `https://go.floqast.com/login/sso`
+ In the **Sign-on URL** text box, type one of the following URLs:
+
+ | Sign-on URL |
+ | - |
+ | `https://go.floqast.com/login/sso` |
+ | `https://eu.floqast.app/login/sso` |
+ 1. FloQast application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
To configure single sign-on on **FloQast** side, you need to send the downloaded
### Create FloQast test user
-In this section, you create a user called B.Simon in FloQast. Work with [FloQast support team](mailto:support@floqast.com) to add the users in the FloQast platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in FloQast. Work with [FloQast support team](mailto:support@floqast.com) to add the users in the FloQast platform. Users must be created and activated before you use single sign-on.
## Test SSO
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure FloQast you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure FloQast you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
Previously updated : 10/08/2021 Last updated : 04/26/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
The following diagram illustrates the Azure AD Verifiable Credentials architectu
## Create a storage account
-Azure Blob Storage is an object storage solution for the cloud. Azure AD Verifiable Credentials uses [Azure Blob Storage](../../storage/blobs/storage-blobs-introduction.md) to store the configuration files when the service is issuing verifiable credentials.
+Azure Blob Storage is an object storage solution for the cloud. The Azure AD Verifiable Credentials service uses [Azure Blob Storage](../../storage/blobs/storage-blobs-introduction.md) to store the configuration files when the service is issuing verifiable credentials.
Create and configure Blob Storage by following these steps:
Create and configure Blob Storage by following these steps:
![Screenshot that shows how to create a container.](media/verifiable-credentials-configure-issuer/create-container.png)
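If you prefer scripting these steps, here's a minimal Az PowerShell sketch with placeholder names (`vc-resources`, `vcconfigstore`):

```powershell
# Requires the Az PowerShell module.
Connect-AzAccount

$storage = New-AzStorageAccount -ResourceGroupName "vc-resources" -Name "vcconfigstore" `
    -Location "westus" -SkuName "Standard_LRS" -Kind "StorageV2"

# Container that will hold the rules and display configuration files.
New-AzStorageContainer -Name "vc-container" -Context $storage.Context

# Upload a configuration file (repeat for the display file).
Set-AzStorageBlobContent -Container "vc-container" -Context $storage.Context `
    -File ".\VerifiedCredentialExpertRules.json" -Blob "VerifiedCredentialExpertRules.json"
```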
-## Grant access to the container
-
-After you create your container, grant the signed-in user the correct role assignment so they can access the files in Blob Storage.
-
-1. From the list of containers, select **vc-container**.
-
-1. From the menu, select **Access Control (IAM)**.
-
-1. Select **+ Add,** and then select **Add role assignment**.
-
- ![Screenshot that shows how to add a new role assignment to the blob container.](media/verifiable-credentials-configure-issuer/add-role-assignment.png)
-
-1. In **Add role assignment**:
-
- 1. For the **Role**, select **Storage Blob Data Reader**.
-
- 1. For the **Assign access to**, select **User, group, or service
- principal**.
-
- 1. Then, search the account that you're using to perform these steps, and
- select it.
-
- ![Screenshot that shows how to set up the new role assignment.](media/verifiable-credentials-configure-issuer/add-role-assignment-container.png)
-
->[!IMPORTANT]
->By default, container creators get the owner role assigned. The owner role isn't enough on its own. Your account needs the storage blob data reader role. For more information, see [Use the Azure portal to assign an Azure role for access to blob and queue data](../../storage/blobs/assign-azure-role-data-access.md).
- ### Upload the configuration files
-Azure AD Verifiable Credentials uses two JSON configuration files, the rules file and the display file.
+The Azure AD Verifiable Credentials service uses two JSON configuration files: the rules file and the display file.
- The *rules* file describes important properties of verifiable credentials. In particular, it describes the claims that subjects (users) need to provide before a verifiable credential is issued for them.
- The *display* file controls the branding of the credential and styling of the claims.
In this step, you create the verified credential expert card by using Azure AD V
1. For **Subscription**, select your Azure AD subscription where you created Blob Storage.
- 1. Under the **Display file**, select **Select display file**. In the Storage accounts section, select **vc-container**. Then select the **VerifiedCredentialExpertDisplay.json** file and click **Select**.
+ 1. Under the **Display file**, select **Select display file**. In the Storage accounts section, select **vc-container**. Then select the **VerifiedCredentialExpertDisplay.json** file and select **Select**.
 1. Under the **Rules file**, select **Select rules file**. In the Storage accounts section, select the **vc-container**. Then select the **VerifiedCredentialExpertRules.json** file, and choose **Select**.
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Previously updated : 02/24/2022 Last updated : 04/26/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
Specifically, you learn how to:
> [!div class="checklist"]
>
-> - Set up a service principal
-> - Create a key vault in Azure Key Vault
-> - Register an application in Azure AD
-> - Set up the Verifiable Credentials service
+> - Set up a service principal.
+> - Create an Azure Key Vault instance.
+> - Register an application in Azure AD.
+> - Set up the Verifiable Credentials service.
The following diagram illustrates the Azure AD Verifiable Credentials architecture and the component you configure.
See a [video walkthrough](https://www.youtube.com/watch?v=8jqjHjQo-3c) going ove
## Prerequisites

-- If you don't have Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Sign up for [Azure Active Directory Premium editions](../../active-directory/fundamentals/active-directory-get-started-premium.md)
-subscription in your tenant.
+- You need an Azure tenant with an active subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Ensure that you have the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) permission for the directory you want to configure.
-- Ensure that you have [PowerShell](/powershell/scripting/install/installing-powershell) 7.0.6 LTS-x64, PowerShell 7.1.3-x64, or later installed.
-
-## Set up a service principal
-
-Create a service principal for the Request Service API. The service API is the Microsoft service that you use to issue or verify Azure AD Verifiable Credentials.
-
-To create the service principal:
-
-1. Run the following PowerShell commands. These commands install and import the `Az` module. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps#installation).
-
- ```powershell
- if ((Get-Module -ListAvailable -Name "Az.Accounts") -eq $null) { Install-Module -Name "Az.Accounts" -Scope CurrentUser }
- if ((Get-Module -ListAvailable -Name "Az.Resources") -eq $null) { Install-Module "Az.Resources" -Scope CurrentUser }
- ```
-
-1. Run the following PowerShell command to connect to your Azure AD tenant. Replace \<*your-tenant-ID*> with your [Azure AD tenant ID](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
-
- ```powershell
- Connect-AzAccount -TenantId <your-tenant-ID>
- ```
-
-1. Run the following command in the same PowerShell session. The `AppId` `bbb94529-53a3-4be5-a069-7eaf2712b826` refers to the Verifiable Credentials Microsoft service.
-
- ```powershell
- New-AzADServicePrincipal -ApplicationId "bbb94529-53a3-4be5-a069-7eaf2712b826" -DisplayName "Verifiable Credential Request Service"
- ```
## Create a key vault
A Key Vault [access policy](../../key-vault/general/assign-access-policy.md) def
1. To save the changes, select **Save**.
-### Set access policies for the Verifiable Credentials Issuer and Request services
-
-1. Select **+ Add Access Policy** to add permission to the service principal of the **Verifiable Credential Request Service**.
-
-1. In **Add access policy**:
-
- 1. For **Key permissions**, select **Get** and **Sign**.
-
- 1. For **Select principal**, select **Verifiable Credential Request Service**.
-
- 1. Select **Add**.
-
- :::image type="content" source="media/verifiable-credentials-configure-tenant/request-service-key-vault-access-policy.png" alt-text="Screenshot that demonstrates how to add an access policy for the Verifiable Credential Issuer Service." :::
-
-The access policies for the Verifiable Credentials Issuer service should be added automatically. If the **Verifiable Credential Issuer Service** doesn't appear in the list of access policies, take the following steps to manually add access policies to the service.
-
-1. Select **+ Add Access Policy** to add permission to the service principal of the **Verifiable Credential Issuer Service**.
-
-1. In **Add access policy**:
-
- 1. For **Key permissions**, select **Get** and **Sign**.
-
- 1. For **Select principal**, select **Verifiable Credential Issuer Service**.
-
- 1. Select **Add**.
-
- :::image type="content" source="media/verifiable-credentials-configure-tenant/issuer-service-key-vault-access-policy.png" alt-text="Screenshot that demonstrates how to add an access policy for the Verifiable Credential Request Service." :::
-
-1. Select **Save** to save the new policy you created.
-
## Register an application in Azure AD

Azure AD Verifiable Credentials Request Service needs to be able to get access tokens to issue and verify. To get access tokens, register a web application and grant API permission for the API Verifiable Credential Request Service that you set up in the previous step.
Azure AD Verifiable Credentials Request Service needs to be able to get access t
### Grant permissions to get access tokens
-In this step, you grant permissions to the Verifiable Credential Request Service principal created in [step 1](#set-up-a-service-principal).
+In this step, you grant permissions to the Verifiable Credential Request Service principal.
To add the required permissions, follow these steps:
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
Previously updated : 02/08/2022 Last updated : 04/26/2022 # Customer intent: As a developer I am looking for information on how to enable my users to control their own information
Individuals owning and controlling their identities are able to exchange verifia
### What is a Verifiable Credential?
-Credentials are a part of our daily lives; driver's licenses are used to assert that we're capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. Verifiable Credentials provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable. [The W3C Verifiable Credentials spec](https://www.w3.org/TR/vc-data-model//) explains this in further detail.
+Credentials are a part of our daily lives; driver's licenses are used to assert that we're capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. Verifiable Credentials provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable. [The W3C Verifiable Credentials spec](https://www.w3.org/TR/vc-data-model/) explains this in further detail.
## Conceptual questions
Yes! The following repositories are the open-sourced components of our services.
### What are the licensing requirements?
-An Azure AD P2 license is required to use the preview of Verifiable Credentials. This is a temporary requirement, as we expect pricing for this service to be billed based on usage.
+There are no special licensing requirements to issue verifiable credentials. All you need is an Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-### How do I reconfigure the Azure AD Verifiable credentials service?
+### Updating the VC Service configuration
+The following instructions take about 15 minutes to complete and are only required if you have been using the Azure AD Verifiable Credentials service prior to April 25, 2022. You must run these steps to update the existing service principals in your tenant that run the Verifiable Credentials service. The following is an overview of the steps:
-Reconfiguration requires that you opt out and opt back into the Azure Active Directory Verifiable Credentials service, your existing verifiable credentials configurations will reset and your tenant will obtain a new DID for use during issuance and presentation.
+1. Register new service principals for the Azure AD Verifiable Credentials service.
+1. Update the Key Vault access policies.
+1. Update the access to your storage container.
+1. Update configuration on your apps using the Request API.
+1. Clean up configuration (after May 6, 2022).
+
+#### **1. Register new service principals for the Azure AD Verifiable Credentials service**
+1. Run the following PowerShell commands. These commands install and import the Azure PowerShell module. For more information, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps#installation).
+
+ ```azurepowershell
+ if ((Get-Module -ListAvailable -Name "Az.Accounts") -eq $null) { Install-Module -Name "Az.Accounts" -Scope CurrentUser }
+ if ((Get-Module -ListAvailable -Name "Az.Resources") -eq $null) { Install-Module -Name "Az.Resources" -Scope CurrentUser }
+ ```
+1. Run the following PowerShell command to connect to your Azure AD tenant. Replace ```<your tenant ID>``` with your [Azure AD tenant ID](../fundamentals/active-directory-how-to-find-tenant.md).
+
+ ```azurepowershell
+ Connect-AzAccount -TenantId <your tenant ID>
+ ```
+1. Check whether the following service principals have been added to your tenant by running the following commands:
+
+ ```azurepowershell
+ Get-AzADServicePrincipal -ApplicationId "bb2a64ee-5d29-4b07-a491-25806dc854d3"
+ Get-AzADServicePrincipal -ApplicationId "3db474b9-6a0c-4840-96ac-1fceb342124f"
+ ```
+
+1. If you don't get any results, run the following commands to create the new service principals. If the previous command shows that one of the service principals is already in your tenant, you don't need to recreate it. If you try to add it again, you'll get an error saying the service principal already exists.
+
+ ```azurepowershell
+ New-AzADServicePrincipal -ApplicationId "bb2a64ee-5d29-4b07-a491-25806dc854d3"
+ New-AzADServicePrincipal -ApplicationId "3db474b9-6a0c-4840-96ac-1fceb342124f"
+ ```
+
+ >[!NOTE]
+ >The AppIds ```bb2a64ee-5d29-4b07-a491-25806dc854d3``` and ```3db474b9-6a0c-4840-96ac-1fceb342124f``` refer to the new Verifiable Credentials service principals.
+
+#### **2. Update the Key Vault access policies**
+
+Add an access policy for the **Verifiable Credentials Service**.
+
+>[!IMPORTANT]
+> At this time, do not remove any permissions!
+
+1. In the Azure portal, navigate to your key vault.
+1. Under **Settings**, select **Access policies**.
+1. Select **+ Add Access Policy**.
+1. Under **Key permissions**, select **Get** and **Sign**.
+1. In the **Select Service principal** section, search for Verifiable Credentials service by entering **bb2a64ee-5d29-4b07-a491-25806dc854d3**.
+1. Select **Add**.
+
+Add an access policy for the **Verifiable Credentials Service Request**.
+
+1. Select **+ Add Access Policy**.
+1. Under **Key permissions**, select **Get** and **Sign**.
+1. In the **Select Service principal** section, search for **3db474b9-6a0c-4840-96ac-1fceb342124f**, which is the Verifiable Credentials Service Request (part of Azure AD Free).
+1. Select **Add**.
+1. Select **Save** to save your changes.
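+
+If you script your deployments, a rough Azure PowerShell equivalent of both access policies is the following sketch (illustrative, not part of the original steps; replace `<your-key-vault-name>` with your vault's name):
+
+```azurepowershell
+# Grant Get and Sign key permissions to the two new Verifiable Credentials service principals.
+Set-AzKeyVaultAccessPolicy -VaultName "<your-key-vault-name>" -ServicePrincipalName "bb2a64ee-5d29-4b07-a491-25806dc854d3" -PermissionsToKeys get,sign
+Set-AzKeyVaultAccessPolicy -VaultName "<your-key-vault-name>" -ServicePrincipalName "3db474b9-6a0c-4840-96ac-1fceb342124f" -PermissionsToKeys get,sign
+```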
+
+#### **3. Update the access to your storage container**
+
+Do this for the storage account that you use to store verifiable credentials rules and display files.
+
+1. Find the correct storage account and open it.
+1. From the list of containers, select the container that you are using for the Verifiable Credentials service.
+1. From the menu, select **Access Control (IAM)**.
+1. Select **+ Add**, and then select **Add role assignment**.
+1. In **Add role assignment**:
+ 1. For the **Role**, select **Storage Blob Data Reader**, and then select **Next**.
+ 1. For **Assign access to**, select **User, group, or service principal**.
+ 1. Select **+ Select members**, search for **Verifiable Credentials Service** (make sure this is the exact name, because there are several similar service principals), and then choose **Select**.
+ 1. Select **Review + assign**.
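+
+A rough Azure PowerShell equivalent of this role assignment is sketched below (illustrative only; the resource group, storage account, and container names are placeholders for yours):
+
+```azurepowershell
+# Scope the Storage Blob Data Reader role to the container used by the service.
+$account = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"
+$scope = "$($account.Id)/blobServices/default/containers/vc-container"
+New-AzRoleAssignment -ServicePrincipalName "bb2a64ee-5d29-4b07-a491-25806dc854d3" -RoleDefinitionName "Storage Blob Data Reader" -Scope $scope
+```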
+
+#### **4. Update configuration on your Apps using the Request API**
+
+Grant the new service principal permissions to get access tokens
+
+1. In your application, select **API permissions** > **Add a permission**.
+1. Select **APIs my organization uses**.
+1. Search for **Verifiable Credentials Service Request** and select it. Make sure you aren't selecting the **Verifiable Credential Request Service**. Before proceeding, confirm that the **Application Client ID** is ```3db474b9-6a0c-4840-96ac-1fceb342124f```
+1. Choose **Application Permission**, and expand **VerifiableCredential.Create.All**.
+1. Select **Add permissions**.
+1. Select **Grant admin consent for** ```<your tenant name>```.
+
+Adjust the API scopes used in your application
+
+For the Request API, the new scope for your application or Postman is now:
+
+```3db474b9-6a0c-4840-96ac-1fceb342124f/.default```
+
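+To sanity-check the new scope, you can request a token with the client credentials flow against the v2.0 token endpoint. This is an illustrative sketch only; the tenant ID, client ID, and client secret placeholders come from your own app registration:
+
+```azurepowershell
+$tenantId = "<your tenant ID>"
+$body = @{
+    client_id     = "<your application (client) ID>"
+    client_secret = "<your client secret>"
+    grant_type    = "client_credentials"
+    scope         = "3db474b9-6a0c-4840-96ac-1fceb342124f/.default"
+}
+# A successful response returns an access token for the Verifiable Credentials Service Request.
+Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body $body
+```
+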
+#### **5. Clean up configuration**
+
+**Suggested after May 6, 2022.** Once you have confirmed after May 6, 2022 that the Azure AD Verifiable Credentials service is working normally (you can issue, verify, and so on), you can proceed to clean up your tenant so that the Azure AD Verifiable Credentials service has only the new service principals.
+
+1. Run the following PowerShell command to connect to your Azure AD tenant. Replace ```<your tenant ID>``` with your Azure AD tenant ID.
+1. Run the following commands in the same PowerShell session. The AppIds ```603b8c59-ba28-40ff-83d1-408eee9a93e5``` and ```bbb94529-53a3-4be5-a069-7eaf2712b826``` refer to the previous Verifiable Credentials service principals.
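+
+ The removal commands themselves aren't shown here; a sketch of what they might look like with the Az module (an assumption, not verbatim content from the article) is:
+
+ ```azurepowershell
+ # Reconnect if needed (replace <your tenant ID> with your Azure AD tenant ID).
+ Connect-AzAccount -TenantId <your tenant ID>
+
+ # Remove the previous Verifiable Credentials service principals.
+ Remove-AzADServicePrincipal -ApplicationId "603b8c59-ba28-40ff-83d1-408eee9a93e5"
+ Remove-AzADServicePrincipal -ApplicationId "bbb94529-53a3-4be5-a069-7eaf2712b826"
+ ```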
+
+### How do I reset the Azure AD Verifiable credentials service?
+
+Resetting requires that you opt out and opt back into the Azure Active Directory Verifiable Credentials service. Your existing verifiable credentials configurations will be reset, and your tenant will obtain a new DID to use during issuance and presentation.
1. Follow the [opt-out](how-to-opt-out.md) instructions.
1. Go over the Azure Active Directory Verifiable credentials [deployment steps](verifiable-credentials-configure-tenant.md) to reconfigure the service.
Reconfiguration requires that you opt out and opt back into the Azure Active Dir
], ```
-### If I reconfigure the Azure AD Verifiable Credentials service, do I need to re-link my DID to my domain?
+### If I reconfigure the Azure AD Verifiable Credentials service, do I need to relink my DID to my domain?
Yes, after reconfiguring your service, your tenant has a new DID to use to issue and verify verifiable credentials. You need to [associate your new DID](how-to-dnsbind.md) with your domain.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Previously updated : 02/22/2022 Last updated : 04/26/2022
This article lists the latest features, improvements, and changes in the Azure Active Directory (Azure AD) Verifiable Credentials service.
+## April
+
+From April 25th, 2022, the Verifiable Credentials service is available to more Azure tenants. This important update requires any tenant created prior to April 25, 2022 to complete a 15-minute reconfiguration of the service to ensure ongoing operation. Verifiable Credentials service administrators must perform the [following steps](verifiable-credentials-faq.md?#updating-the-vc-service-configuration) to avoid service disruptions.
+
+>[!IMPORTANT]
+> If the configuration on your tenant hasn't been updated, there will be errors in issuance and presentation flows of verifiable credentials from/to your tenant. [Service configuration instructions](verifiable-credentials-faq.md?#updating-the-vc-service-configuration).
## March 2022

-- Azure AD Verifiable Credentials customers can now change the [domain linked](how-to-dnsbind.md) to their DID easily from the Azure Portal.
+- Azure AD Verifiable Credentials customers can now change the [domain linked](how-to-dnsbind.md) to their DID easily from the Azure portal.
- We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for iOS. [More information](whats-new.md?#microsoft-authenticator-did-generation-update)

## February 2022
We are rolling out some breaking changes to our service. These updates require A
- We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for Android. [More information](whats-new.md?#microsoft-authenticator-did-generation-update)

>[!IMPORTANT]
-> All Azure AD Verifiable Credential customers receiving a banner notice in the Azure portal need to go through a service reconfiguration before March 31st 2022. On March 31st 2022 tenants that have not been reconfigured will lose access to any previous configuration. Administrators will have to set up a new instance of the Azure AD Verifiable Credential service. Learn more about how to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reconfigure-the-azure-ad-verifiable-credentials-service).
+> All Azure AD Verifiable Credential customers receiving a banner notice in the Azure portal need to go through a service reconfiguration before March 31st 2022. On March 31st 2022 tenants that have not been reconfigured will lose access to any previous configuration. Administrators will have to set up a new instance of the Azure AD Verifiable Credential service. Learn more about how to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service).
### Azure AD Verifiable Credentials available in Europe
Since the beginning of the Azure AD Verifiable Credentials service public previe
Take the following steps to configure the Verifiable Credentials service in Europe:
1. [Check the location](verifiable-credentials-faq.md#how-can-i-check-my-azure-ad-tenants-region) of your Azure Active Directory to make sure it is in Europe.
-1. [Reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reconfigure-the-azure-ad-verifiable-credentials-service) in your tenant.
+1. [Reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) in your tenant.
>[!IMPORTANT]
-> On March 31st, 2022 European tenants that have not been [reconfigured](verifiable-credentials-faq.md?#how-do-i-reconfigure-the-azure-ad-verifiable-credentials-service) in Europe will lose access to any previous configuration and will require to configure a new instance of the Azure AD Verifiable Credential service.
+> On March 31st, 2022, European tenants that have not been [reconfigured](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) in Europe will lose access to any previous configuration and will need to configure a new instance of the Azure AD Verifiable Credential service.
#### Are there any changes to the way that we use the Request API as a result of this move?
The Azure AD Verifiable Credential service supports the [W3C Status List 2021](h
To adopt this feature, follow these steps:
1. [Check if your tenant has the Hub endpoint](verifiable-credentials-faq.md#how-can-i-check-if-my-tenant-has-the-new-hub-endpoint).
 1. If so, go to the next step.
- 1. If not, [reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reconfigure-the-azure-ad-verifiable-credentials-service) in your tenant and go to the next step.
+ 1. If not, [reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) in your tenant and go to the next step.
1. Create new verifiable credentials contracts. In the rules file you must add the ` "credentialStatusConfiguration": "anonymous" ` property to start using the new feature in combination with the Hub endpoint for your credentials:

Sample contract file:
advisor Advisor Alerts Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-bicep.md
+
+ Title: Create Azure Advisor alerts for new recommendations using Bicep
+description: Learn how to set up an alert for new recommendations from Azure Advisor using Bicep.
++++ Last updated : 04/26/2022++
+# Quickstart: Create Azure Advisor alerts on new recommendations using Bicep
+
+This article shows you how to set up an alert for new recommendations from Azure Advisor using Bicep.
++
+Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally select a resource group to specify the resources that you want to receive alerts on.
+
+You can also determine the types of recommendations by using these properties:
+
+- Category
+- Impact level
+- Recommendation type
+
+You can also configure the action that will take place when an alert is triggered by:
+
+- Selecting an existing action group
+- Creating a new action group
+
+To learn more about action groups, see [Create and manage action groups](../azure-monitor/alerts/action-groups.md).
+
+> [!NOTE]
+> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations are not supported.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-az-ps).
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/insights-alertrules-servicehealth/).
++
+The Bicep file defines two resources:
+
+- [Microsoft.Insights/actionGroups](/azure/templates/microsoft.insights/actiongroups)
+- [Microsoft.Insights/activityLogAlerts](/azure/templates/microsoft.insights/activityLogAlerts)
+
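+For orientation, the following sketch shows the general shape of such a Bicep file. It's illustrative only: it assumes an email action group, uses placeholder names and a placeholder address, and isn't the actual Quickstart template, which is more configurable:
+
+```bicep
+@description('Name for the Advisor activity log alert.')
+param alertName string
+
+resource actionGroup 'Microsoft.Insights/actionGroups@2021-09-01' = {
+  name: 'advisorActionGroup' // placeholder name
+  location: 'Global'
+  properties: {
+    groupShortName: 'advisorAG'
+    enabled: true
+    emailReceivers: [
+      {
+        name: 'emailReceiver'
+        emailAddress: 'alerts@contoso.com' // placeholder address
+      }
+    ]
+  }
+}
+
+resource activityLogAlert 'Microsoft.Insights/activityLogAlerts@2020-10-01' = {
+  name: alertName
+  location: 'Global'
+  properties: {
+    enabled: true
+    scopes: [
+      subscription().id
+    ]
+    condition: {
+      allOf: [
+        {
+          // Advisor writes new recommendations to the activity log with this category and operation.
+          field: 'category'
+          equals: 'Recommendation'
+        }
+        {
+          field: 'operationName'
+          equals: 'Microsoft.Advisor/recommendations/available/action'
+        }
+      ]
+    }
+    actions: {
+      actionGroups: [
+        {
+          actionGroupId: actionGroup.id
+        }
+      ]
+    }
+  }
+}
+```
+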
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters alertName=<alert-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -alertName "<alert-name>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<alert-name\>** with the name of the alert.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md), and learn how to receive alerts.
+- Learn more about [action groups](../azure-monitor/alerts/action-groups.md).
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
Previously updated : 10/13/2021 Last updated : 4/26/2022
az aks disable-addons --addons azure-keyvault-secrets-provider -g myResourceGrou
> [!NOTE]
> When the Azure Key Vault Provider for Secrets Store CSI Driver is enabled, it updates the pod mount and the Kubernetes secret that's defined in the `secretObjects` field of `SecretProviderClass`. It does so by polling for changes periodically, based on the rotation poll interval you've defined. The default rotation poll interval is 2 minutes.
+>[!NOTE]
+> When the secret/key is updated in the external secrets store after the initial pod deployment, the updated secret will be periodically updated in the pod mount and the Kubernetes Secret.
+>
+> Depending on how the application consumes the secret data:
+>
+> 1. Mount the Kubernetes secret as a volume: Use the auto rotation feature together with the Sync K8s secrets feature in the Secrets Store CSI Driver. The application needs to watch for changes from the mounted Kubernetes Secret volume. When the Kubernetes Secret is updated by the CSI Driver, the corresponding volume contents are automatically updated.
+> 2. Application reads the data from the container's filesystem: Use the rotation feature in the Secrets Store CSI Driver. The application needs to watch for the file change from the volume mounted by the CSI driver.
+> 3. Use the Kubernetes secret for an environment variable: The pod needs to be restarted to get the latest secret as an environment variable.
+> Use a tool such as https://github.com/stakater/Reloader to watch for changes on the synced Kubernetes secret and do rolling upgrades on pods.
+ To enable autorotation of secrets, use the `enable-secret-rotation` flag when you create your cluster: ```azurecli-interactive
aks Csi Secrets Store Nginx Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-nginx-tls.md
helm install ingress-nginx/ingress-nginx --generate-name \
--namespace $NAMESPACE \ --set controller.replicaCount=2 \ --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux ```
helm install ingress-nginx/ingress-nginx --generate-name \
--set controller.replicaCount=2 \ --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \ --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.podLabels.aadpodidbinding=$AAD_POD_IDENTITY_NAME \ -f - <<EOF controller:
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
NAMESPACE=ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update
-helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace $NAMESPACE
+helm install ingress-nginx ingress-nginx/ingress-nginx \
+ --create-namespace \
+ --namespace $NAMESPACE \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
```

### [Azure PowerShell](#tab/azure-powershell)
$Namespace = 'ingress-basic'
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update
-helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace $Namespace
+helm install ingress-nginx ingress-nginx/ingress-nginx `
+ --create-namespace `
+ --namespace $Namespace `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
```
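
To confirm that the health probe annotation was applied, you can query the controller service with kubectl (an illustrative check; the service name and namespace depend on your Helm release):

```bash
kubectl get service ingress-nginx-controller --namespace ingress-basic \
  -o jsonpath='{.metadata.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path}'
```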
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
helm install nginx-ingress ingress-nginx/ingress-nginx `
--set controller.image.tag=$ControllerTag ` --set controller.image.digest="" ` --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz `
--set controller.admissionWebhooks.patch.image.registry=$AcrUrl ` --set controller.admissionWebhooks.patch.image.image=$PatchImage ` --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
aks Ingress Internal Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-internal-ip.md
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \ --set controller.admissionWebhooks.patch.image.digest="" \ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set defaultBackend.image.registry=$ACR_URL \ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
helm install nginx-ingress ingress-nginx/ingress-nginx `
--set controller.image.tag=$ControllerTag ` --set controller.image.digest="" ` --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz `
--set controller.admissionWebhooks.patch.image.registry=$AcrUrl ` --set controller.admissionWebhooks.patch.image.image=$PatchImage ` --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
aks Ingress Own Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-own-tls.md
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
helm install nginx-ingress ingress-nginx/ingress-nginx `
--set controller.image.tag=$ControllerTag ` --set controller.image.digest="" ` --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz `
--set controller.admissionWebhooks.patch.image.registry=$AcrUrl ` --set controller.admissionWebhooks.patch.image.image=$PatchImage ` --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
aks Ingress Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-static-ip.md
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
helm install nginx-ingress ingress-nginx/ingress-nginx `
--set controller.image.tag=$ControllerTag ` --set controller.image.digest="" ` --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz `
--set controller.admissionWebhooks.patch.image.registry=$AcrUrl ` --set controller.admissionWebhooks.patch.image.image=$PatchImage ` --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
helm install nginx-ingress ingress-nginx/ingress-nginx `
--set controller.image.tag=$ControllerTag ` --set controller.image.digest="" ` --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz `
--set controller.admissionWebhooks.patch.image.registry=$AcrUrl ` --set controller.admissionWebhooks.patch.image.image=$PatchImage ` --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
aks Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-helm.md
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
You also need the Azure CLI version 2.0.59 or later installed and configured. Ru
## Understand the AKS node update experience
-In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu image, with the OS configured to automatically check for updates every night. If security or kernel updates are available, they are automatically downloaded and installed.
+In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu image, with the OS configured to automatically check for updates every day. If security or kernel updates are available, they are automatically downloaded and installed.
![AKS node update and reboot process with kured](media/node-updates-kured/node-reboot-process.png)
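
For context, `kured` detects a pending reboot by watching for the `/var/run/reboot-required` file, which Ubuntu creates when an installed update needs a restart. An illustrative check from an SSH or debug session on a node:

```bash
# Present only when a reboot is pending after updates.
cat /var/run/reboot-required

# Packages applied by unattended upgrades are logged here.
tail /var/log/unattended-upgrades/unattended-upgrades.log
```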
You can use your own workflows and processes to handle node reboots, or use `kur
### Node image upgrades
-Unattended upgrades apply updates to the Linux node OS, but the image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every night but will remain unpatched until all checks and restarts are complete.
+Unattended upgrades apply updates to the Linux node OS, but the image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every day but will remain unpatched until all checks and restarts are complete.
Alternatively, you can use node image upgrade to check for and update node images used by your cluster. For more details on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
All configuration items must be set up before you create the application gateway
$config = New-AzApplicationGatewayWebApplicationFirewallConfiguration -Enabled $true -FirewallMode "Prevention" ```
-1. Because TLS 1.0 currently is the default, set the application gateway to use the most recent [TLS 1.2 policy](../application-gateway/application-gateway-ssl-policy-overview.md#appgwsslpolicy20170401s).
+1. Because TLS 1.0 currently is the default, set the application gateway to use one of the recent [TLS 1.2 policies](../application-gateway/application-gateway-ssl-policy-overview.md#predefined-tls-policy).
```powershell
- $policy = New-AzApplicationGatewaySslPolicy -PolicyType Predefined -PolicyName AppGwSslPolicy20170401S
+ $policy = New-AzApplicationGatewaySslPolicy -PolicyType Predefined -PolicyName AppGwSslPolicy20220101
```

## Create an application gateway
api-management Vscode Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/vscode-create-service-instance.md
Previously updated : 09/14/2020- Last updated : 04/26/2022+ # Quickstart: Create a new Azure API Management service instance using Visual Studio Code
-Azure API Management (APIM) helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. APIM enables you to create and manage modern API gateways for existing backend services hosted anywhere. For more information, see the [Overview](api-management-key-concepts.md) topic.
+Azure API Management (APIM) helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. APIM lets you create and manage modern API gateways for existing backend services hosted anywhere. For more information, see the [Overview](api-management-key-concepts.md) topic.
-This quickstart describes the steps for creating a new API Management instance using the *Azure API Management Extension* for Visual Studio Code. You can also use the extension to perform common management operations on your API Management instance.
+This quickstart describes the steps to create a new API Management instance using the *Azure API Management Extension* for Visual Studio Code. You can also use the extension to do common management actions on your API Management instance.
## Prerequisites [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-Additionally, ensure you have installed the following:
+Also, ensure you've installed the following:
- [Visual Studio Code](https://code.visualstudio.com/)
Right-click on the subscription you'd like to use, and select **Create API Manag
![Create API Management wizard in VS Code](./media/vscode-create-service-instance/vscode-apim-create.png)
-In the pane that opens, supply a name for the new API Management instance. It must be globally unique within Azure and consist of 1-50 alphanumeric characters and/or hyphens, and start with a letter and end with an alphanumeric.
+In the pane that opens, supply a name for the new API Management instance. It must be globally unique within Azure and consist of 1-50 alphanumeric characters and/or hyphens. It should also start with a letter and end with an alphanumeric character.
A new API Management instance (and parent resource group) will be created with the specified name. By default, the instance is created in the *West US* region with the *Consumption* SKU.

> [!TIP]
> If you enable **Advanced Creation** in the *Azure API Management Extension Settings*, you can also specify an [API Management SKU](https://azure.microsoft.com/pricing/details/api-management/), [Azure region](https://status.azure.com/en-us/status), and a [resource group](../azure-resource-manager/management/overview.md) to deploy your API Management instance.
>
-> While the *Consumption* SKU takes less than a minute to provision, other SKUs typically take 30-40 minutes to create.
+> While the *Consumption* SKU takes less than a minute to set up, other SKUs typically take 30-40 minutes to create.
-At this point, you're ready to import and publish your first API. You can do that and also perform common API Management operations within the extension for Visual Studio Code. See [the tutorial](visual-studio-code-tutorial.md) for more.
+At this point, you're ready to import and publish your first API. You can do that and also do common API Management actions within the extension for Visual Studio Code. See [the tutorial](visual-studio-code-tutorial.md) for more.
-![Newly created API Management instance in VS Code API Management extension pane](./media/vscode-create-service-instance/vscode-apim-instance.png)
+![Newly created API Management instance in VS Code API Management extension pane](./media/vscode-create-service-instance/visual-studio-code-api-management-instance-updated.png)
## Clean up resources When no longer needed, remove the API Management instance by right-clicking and selecting **Open in Portal** to [delete the API Management service](get-started-create-service-instance.md#clean-up-resources) and its resource group.
-Alternately, you can select **Delete API Management** to only delete the API Management instance (this operation doesn't delete its resource group).
+Alternately, you can select **Delete API Management** to only delete the API Management instance (this action doesn't delete its resource group).
-![Delete API Management instance from VS Code](./media/vscode-create-service-instance/vscode-apim-delete.png)
+![Delete API Management instance from VS Code](./media/vscode-create-service-instance/visual-studio-code-api-management-delete-updated.png)
## Next steps
app-service Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-cli.md
tags: azure-service-management
ms.assetid: 53e6a15a-370a-48df-8618-c6737e26acec Previously updated : 09/17/2021 Last updated : 04/21/2022 keywords: azure cli samples, azure cli examples, azure cli code samples
The following table includes links to bash scripts built using the Azure CLI.
| Script | Description |
|-|-|
|**Create app**||
-| [Create an app and deploy files with FTP](./scripts/cli-deploy-ftp.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an App Service app and deploys a file to it using FTP. |
-| [Create an app and deploy code from GitHub](./scripts/cli-deploy-github.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an App Service app and deploys code from a public GitHub repository. |
-| [Create an app with continuous deployment from GitHub](./scripts/cli-continuous-deployment-github.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an App Service app with continuous publishing from a GitHub repository you own. |
-| [Create an app and deploy code from a local Git repository](./scripts/cli-deploy-local-git.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and configures code push from a local Git repository. |
-| [Create an app and deploy code to a staging environment](./scripts/cli-deploy-staging-environment.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app with a deployment slot for staging code changes. |
-| [Create an ASP.NET Core app in a Docker container](./scripts/cli-linux-docker-aspnetcore.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app on Linux and loads a Docker image from Docker Hub. |
-| [Create an app and expose it with a Private Endpoint](./scripts/cli-deploy-privateendpoint.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and a Private Endpoint |
+| [Create an app and deploy files with FTP](./scripts/cli-deploy-ftp.md)| Creates an App Service app and deploys a file to it using FTP. |
+| [Create an app and deploy code from GitHub](./scripts/cli-deploy-github.md)| Creates an App Service app and deploys code from a public GitHub repository. |
+| [Create an app with continuous deployment from GitHub](./scripts/cli-continuous-deployment-github.md)| Creates an App Service app with continuous publishing from a GitHub repository you own. |
+| [Create an app and deploy code into a local Git repository](./scripts/cli-deploy-local-git.md) | Creates an App Service app and configures code push into a local Git repository. |
+| [Create an app and deploy code to a staging environment](./scripts/cli-deploy-staging-environment.md) | Creates an App Service app with a deployment slot for staging code changes. |
+| [Create an ASP.NET Core app in a Docker container](./scripts/cli-linux-docker-aspnetcore.md) | Creates an App Service app on Linux and loads a Docker image from Docker Hub. |
+| [Create an app with a Private Endpoint](./scripts/cli-deploy-privateendpoint.md) | Creates an App Service app and a Private Endpoint |
|**Configure app**||
-| [Map a custom domain to an app](./scripts/cli-configure-custom-domain.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an App Service app and maps a custom domain name to it. |
-| [Bind a custom TLS/SSL certificate to an app](./scripts/cli-configure-ssl-certificate.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an App Service app and binds the TLS/SSL certificate of a custom domain name to it. |
+| [Map a custom domain to an app](./scripts/cli-configure-custom-domain.md)| Creates an App Service app and maps a custom domain name to it. |
+| [Bind a custom TLS/SSL certificate to an app](./scripts/cli-configure-ssl-certificate.md)| Creates an App Service app and binds the TLS/SSL certificate of a custom domain name to it. |
|**Scale app**||
-| [Scale an app manually](./scripts/cli-scale-manual.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and scales it across 2 instances. |
-| [Scale an app worldwide with a high-availability architecture](./scripts/cli-scale-high-availability.md?toc=%2fcli%2fazure%2ftoc.json) | Creates two App Service apps in two different geographical regions and makes them available through a single endpoint using Azure Traffic Manager. |
+| [Scale an app manually](./scripts/cli-scale-manual.md) | Creates an App Service app and scales it across 2 instances. |
+| [Scale an app worldwide with a high-availability architecture](./scripts/cli-scale-high-availability.md) | Creates two App Service apps in two different geographical regions and makes them available through a single endpoint using Azure Traffic Manager. |
|**Protect app**||
-| [Integrate with Azure Application Gateway](./scripts/cli-integrate-app-service-with-application-gateway.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and integrates it with Application Gateway using service endpoint and access restrictions. |
+| [Integrate with Azure Application Gateway](./scripts/cli-integrate-app-service-with-application-gateway.md) | Creates an App Service app and integrates it with Application Gateway using service endpoint and access restrictions. |
|**Connect app to resources**||
-| [Connect an app to a SQL Database](./scripts/cli-connect-to-sql.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an App Service app and a database in Azure SQL Database, then adds the database connection string to the app settings. |
-| [Connect an app to a storage account](./scripts/cli-connect-to-storage.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an App Service app and a storage account, then adds the storage connection string to the app settings. |
-| [Connect an app to an Azure Cache for Redis](./scripts/cli-connect-to-redis.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and an Azure Cache for Redis, then adds the redis connection details to the app settings.) |
-| [Connect an app to Cosmos DB](./scripts/cli-connect-to-documentdb.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and a Cosmos DB, then adds the Cosmos DB connection details to the app settings. |
+| [Connect an app to a SQL Database](./scripts/cli-connect-to-sql.md)| Creates an App Service app and a database in Azure SQL Database, then adds the database connection string to the app settings. |
+| [Connect an app to a storage account](./scripts/cli-connect-to-storage.md)| Creates an App Service app and a storage account, then adds the storage connection string to the app settings. |
+| [Connect an app to an Azure Cache for Redis](./scripts/cli-connect-to-redis.md) | Creates an App Service app and an Azure Cache for Redis, then adds the redis connection details to the app settings. |
+| [Connect an app to Cosmos DB](./scripts/cli-connect-to-documentdb.md) | Creates an App Service app and a Cosmos DB, then adds the Cosmos DB connection details to the app settings. |
|**Backup and restore app**||
-| [Backup an app](./scripts/cli-backup-onetime.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and creates a one-time backup for it. |
-| [Create a scheduled backup for an app](./scripts/cli-backup-scheduled.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and creates a scheduled backup for it. |
-| [Restores an app from a backup](./scripts/cli-backup-restore.md?toc=%2fcli%2fazure%2ftoc.json) | Restores an App Service app from a backup. |
+| [Backup and restore app](./scripts/cli-backup-schedule-restore.md) | Creates an App Service app and creates a one-time backup for it, creates a backup schedule for it, and then restores an App Service app from a backup. |
|**Monitor app**||
-| [Monitor an app with web server logs](./scripts/cli-monitor.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app, enables logging for it, and downloads the logs to your local machine. |
+| [Monitor an app with web server logs](./scripts/cli-monitor.md) | Creates an App Service app, enables logging for it, and downloads the logs to your local machine. |
| | |
app-service Cli Backup Onetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-backup-onetime.md
- Title: 'CLI: Backup an app'
-description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to back up an app.
-
-tags: azure-service-management
-- Previously updated : 12/07/2017-----
-# Back up an app using CLI
-
-This sample script creates an app in App Service with its related resources, and then creates a one-time backup for it.
---
-If you choose to install and use the CLI locally, you need Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
-
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/backup-onetime/backup-onetime.sh?highlight=3-7 "Back up an app")]
--
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [`az group create`](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [`az storage account create`](/cli/azure/storage/account#az-storage-account-create) | Creates a storage account. |
-| [`az storage container create`](/cli/azure/storage/container#az-storage-container-create) | Creates an Azure storage container. |
-| [`az storage container generate-sas`](/cli/azure/storage/container#az-storage-container-generate-sas) | Generates an SAS token for an Azure storage container. |
-| [`az appservice plan create`](/cli/azure/appservice/plan#az-appservice-plan-create) | Creates an App Service plan. |
-| [`az webapp create`](/cli/azure/webapp#az-webapp-create) | Creates an App Service app. |
-| [`az webapp config backup create`](/cli/azure/webapp/config/backup#az-webapp-config-backup-create) | Creates a backup for an App Service app. |
-| [`az webapp config backup list`](/cli/azure/webapp/config/backup#az-webapp-config-backup-list) | Gets a list of backups for an App Service app. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional App Service CLI script samples can be found in the [Azure App Service documentation](../samples-cli.md).
app-service Cli Backup Schedule Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-backup-schedule-restore.md
+
+ Title: 'CLI: Restore an app from a backup'
+description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to restore an app from a backup.
+
+tags: azure-service-management
+
+ms.devlang: azurecli
+ Last updated : 04/21/2022+++++
+# Back up and restore a web app using CLI
+
+This sample script creates a web app in App Service with its related resources. It then creates a one-time backup and a scheduled backup for the app. Finally, it restores the web app from a backup.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+| [`az webapp config backup list`](/cli/azure/webapp/config/backup#az-webapp-config-backup-list) | Gets a list of backups for a web app. |
+| [`az webapp config backup restore`](/cli/azure/webapp/config/backup#az-webapp-config-backup-restore) | Restores a web app from a backup. |
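+
+A sketch of these commands in use follows (the resource names and SAS URL are placeholders, not values from the sample script):
+
+```azurecli
+# List existing backups for the app.
+az webapp config backup list --resource-group myResourceGroup --webapp-name mywebapp
+
+# Restore the app from a named backup stored in a storage container.
+az webapp config backup restore --resource-group myResourceGroup --webapp-name mywebapp \
+    --backup-name mybackup --container-url "<storage-container-SAS-URL>" --overwrite
+```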
+
+## Next steps
+
+For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional App Service CLI script samples can be found in the [Azure App Service documentation](../samples-cli.md).
app-service Cli Backup Scheduled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-backup-scheduled.md
- Title: 'CLI: Create a scheduled backup'
-description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to create a scheduled backup for an app.
-
-tags: azure-service-management
-- Previously updated : 12/11/2017-----
-# Create a scheduled backup for an App Service app using CLI
-
-This sample script creates an app in App Service with its related resources, and then creates a scheduled backup for it.
---
-If you choose to install and use the CLI locally, you need Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
-
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/backup-scheduled/backup-scheduled.sh?highlight=3-7 "Create a scheduled backup for an app")]
--
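The scheduling step in the referenced script comes down to `az webapp config backup update`. A minimal sketch, assuming `$resourceGroup`, `$webapp`, and a container SAS URL in `$sasUrl` (illustrative names, not from the script):

```azurecli
# Schedule a daily backup, keep up to 10, and always retain at least one
az webapp config backup update --resource-group $resourceGroup --webapp-name $webapp \
    --container-url $sasUrl --frequency 1d --retain-one true --retention 10

# Verify the configured schedule
az webapp config backup show --resource-group $resourceGroup --webapp-name $webapp
```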
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [`az group create`](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [`az storage account create`](/cli/azure/storage/account#az-storage-account-create) | Creates a storage account. |
-| [`az storage container create`](/cli/azure/storage/container#az-storage-container-create) | Creates an Azure storage container. |
-| [`az storage container generate-sas`](/cli/azure/storage/container#az-storage-container-generate-sas) | Generates an SAS token for an Azure storage container. |
-| [`az appservice plan create`](/cli/azure/appservice/plan#az-appservice-plan-create) | Creates an App Service plan. |
-| [`az webapp create`](/cli/azure/webapp#az-webapp-create) | Creates an App Service app. |
-| [`az webapp config backup update`](/cli/azure/webapp/config/backup#az-webapp-config-backup-update) | Configures a new backup schedule for an App Service app. |
-| [`az webapp config backup show`](/cli/azure/webapp/config/backup#az-webapp-config-backup-show) | Shows the backup schedule for an App Service app. |
-| [`az webapp config backup list`](/cli/azure/webapp/config/backup#az-webapp-config-backup-list) | Gets a list of backups for an App Service app. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional App Service CLI script samples can be found in the [Azure App Service documentation](../samples-cli.md).
app-service Cli Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-configure-custom-domain.md
tags: azure-service-management
ms.assetid: 5ac4a680-cc73-4578-bcd6-8668c08802c2 ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/21/2022
This sample script creates an app in App Service with its related resources, and
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] -
-If you choose to install and use the CLI locally, you need Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/configure-custom-domain/configure-custom-domain.sh?highlight=3 "Map a custom domain to an app")]
+
+### To create the web app
++
+### Map your prepared custom domain name to the web app
+
+1. Create the following variable containing your fully qualified domain name.
+
+ ```azurecli
+ fqdn=<Replace with www.{yourdomain}>
+ ```
+
+1. Configure a CNAME record that maps your fully qualified domain name to your web app's default domain name ($webappname.azurewebsites.net). If your DNS zone is hosted in Azure DNS, you can create the record with the CLI, as shown in the sketch after these steps.
+
+1. Map your domain name to the web app.
+
+ ```azurecli
+ az webapp config hostname add --webapp-name $webappname --resource-group myResourceGroup --hostname $fqdn
+
+ echo "You can now browse to http://$fqdn"
+ ```
+
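If your domain's DNS zone is hosted in Azure DNS (an assumption; any DNS provider can host the CNAME), the record from step 2 can be created like this:

```azurecli
# Create or update the www CNAME record, pointing at the app's default hostname
az network dns record-set cname set-record --resource-group myResourceGroup \
    --zone-name <yourdomain> --record-set-name www \
    --cname $webappname.azurewebsites.net
```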
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
app-service Cli Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-configure-ssl-certificate.md
tags: azure-service-management
ms.assetid: eb95d350-81ea-4145-a1e2-6eea3b7469b2 ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/21/2022
This sample script creates an app in App Service with its related resources, the
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] -
-If you choose to install and use the CLI locally, you need Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/configure-ssl-certificate/configure-ssl-certificate.sh?highlight=3-5 "Bind a custom TLS/SSL certificate to an app")]
+
+### To create the web app
++
+### Map your prepared custom domain name to the web app
+
+1. Create the following variable containing your fully qualified domain name.
+
+ ```azurecli
+ fqdn=<Replace with www.{yourdomain}>
+ ```
+
+1. Configure a CNAME record that maps your fully qualified domain name to your web app's default domain name ($webappname.azurewebsites.net).
+
+1. Map your domain name to the web app.
+
+ ```azurecli
+ az webapp config hostname add --webapp-name $webappname --resource-group myResourceGroup --hostname $fqdn
+
+ echo "You can now browse to http://$fqdn"
+ ```
+
+### Upload and bind the SSL certificate
+
+1. Create the following variable containing your pfx path and password.
+
+ ```azurecli
+ pfxPath=<replace-with-path-to-your-.PFX-file>
+    pfxPassword=<replace-with-your-.PFX-password>
+ ```
+
+1. Upload the SSL certificate and get the thumbprint.
+
+ ```azurecli
+ thumbprint=$(az webapp config ssl upload --certificate-file $pfxPath --certificate-password $pfxPassword --name $webapp --resource-group $resourceGroup --query thumbprint --output tsv)
+ ```
+
+1. Bind the uploaded SSL certificate to the web app.
+
+ ```azurecli
+ az webapp config ssl bind --certificate-thumbprint $thumbprint --ssl-type SNI --name $webapp --resource-group $resourceGroup
+
+ echo "You can now browse to https://$fqdn"
+ ```
+
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
app-service Cli Connect To Documentdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-connect-to-documentdb.md
tags: azure-service-management
ms.assetid: bbbdbc42-efb5-4b4f-8ba6-c03c9d16a7ea ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/21/2022
This sample script creates an Azure Cosmos DB account using the Azure Cosmos DB'
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] -
-If you choose to install and use the CLI locally, you need Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/connect-to-documentdb/connect-to-documentdb.sh "Azure Cosmos DB")]
+
+### Run the script
++
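The script is included from a separate file; a minimal sketch of its core steps, assuming `$resourceGroup` and `$webapp` already exist (the account and setting names are illustrative):

```azurecli
# Create a Cosmos DB account that uses the API for MongoDB (name must be globally unique)
cosmosdb=<cosmosdb-account-name>
az cosmosdb create --name $cosmosdb --resource-group $resourceGroup --kind MongoDB

# Get the MongoDB connection string and save it as an app setting
connstring=$(az cosmosdb keys list --name $cosmosdb --resource-group $resourceGroup \
    --type connection-strings --query "connectionStrings[0].connectionString" --output tsv)
az webapp config appsettings set --name $webapp --resource-group $resourceGroup \
    --settings "MONGODB_URL=$connstring"
```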
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, App Service app, Cosmos DB, and all related resources. Each command in the table links to command specific documentation.
app-service Cli Connect To Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-connect-to-redis.md
tags: azure-service-management
ms.assetid: bc8345b2-8487-40c6-a91f-77414e8688e6 ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/21/2022
This sample script creates an Azure Cache for Redis and an App Service app. It then links the Azure Cache for Redis to the app using app settings. - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-If you choose to install and use the CLI locally, you need Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/connect-to-redis/connect-to-redis.sh "Azure Cache for Redis")]
+
+### Run the script
++
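A minimal sketch of the linking steps, assuming `$resourceGroup`, `$webapp`, and `$location` are already defined (cache and setting names are illustrative):

```azurecli
# Create a Basic C0 cache (name must be globally unique)
redis=<redis-cache-name>
az redis create --name $redis --resource-group $resourceGroup --location $location \
    --sku Basic --vm-size c0

# Store the cache hostname and primary key as app settings
rediskey=$(az redis list-keys --name $redis --resource-group $resourceGroup \
    --query primaryKey --output tsv)
az webapp config appsettings set --name $webapp --resource-group $resourceGroup \
    --settings "REDIS_HOST=$redis.redis.cache.windows.net" "REDIS_KEY=$rediskey"
```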
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, App Service app, Azure Cache for Redis, and all related resources. Each command in the table links to command specific documentation.
app-service Cli Connect To Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-connect-to-sql.md
tags: azure-service-management
ms.assetid: 7c2efdd0-f553-4038-a77a-e953021b3f77 ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/21/2022
This sample script creates a database in Azure SQL Database and an App Service a
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] -
-If you choose to install and use the CLI locally, you need Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/connect-to-sql/connect-to-sql.sh?highlight=9-10 "SQL Database")]
+
+### Run the script
++
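A minimal sketch of the linking steps, assuming `$resourceGroup`, `$webapp`, and `$location` are already defined (server, database, and credential placeholders are illustrative):

```azurecli
# Create a logical SQL server and a database
sqlserver=<sql-server-name>
az sql server create --name $sqlserver --resource-group $resourceGroup \
    --location $location --admin-user <admin-user> --admin-password <admin-password>
az sql db create --resource-group $resourceGroup --server $sqlserver --name MySampleDatabase

# Get an ADO.NET connection string template and store it on the app
# (replace the <username>/<password> placeholders it contains before use)
connstring=$(az sql db show-connection-string --client ado.net \
    --server $sqlserver --name MySampleDatabase --output tsv)
az webapp config connection-string set --name $webapp --resource-group $resourceGroup \
    --connection-string-type SQLAzure --settings "DefaultConnection=$connstring"
```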
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, App Service app, SQL Database, and all related resources. Each command in the table links to command specific documentation.
app-service Cli Connect To Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-connect-to-storage.md
tags: azure-service-management
ms.assetid: bc8345b2-8487-40c6-a91f-77414e8688e6 ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/21/2022
This sample script creates an Azure storage account and an App Service app. It t
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-If you choose to install and use the CLI locally, you need Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+## Sample script
-## Sample script
+### Run the script
++
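A minimal sketch of the linking steps, assuming `$resourceGroup` and `$webapp` already exist (the account and setting names are illustrative):

```azurecli
# Create a storage account (name must be globally unique, lowercase)
storage=<storage-account-name>
az storage account create --name $storage --resource-group $resourceGroup --sku Standard_LRS

# Save the storage connection string as an app setting
connstring=$(az storage account show-connection-string --name $storage \
    --resource-group $resourceGroup --query connectionString --output tsv)
az webapp config appsettings set --name $webapp --resource-group $resourceGroup \
    --settings "STORAGE_CONNSTR=$connstring"
```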
+## Clean up resources
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/connect-to-storage/connect-to-storage.sh "Azure Storage")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, App Service app, storage account, and all related resources. Each command in the table links to command specific documentation.
app-service Cli Continuous Deployment Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-github.md
tags: azure-service-management
ms.assetid: 0205c991-0989-4ca3-bb41-237dcc964460 ms.devlang: azurecli Previously updated : 09/02/2019 Last updated : 04/15/2022
This sample script creates an app in App Service with its related resources, and
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] -
-If you choose to install and use the CLI locally, you need Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/deploy-github-continuous/deploy-github-continuous.sh?highlight=3-4 "Create an app with continuous deployment from GitHub")]
+
+### To create the web app
++
+### To configure continuous deployment from GitHub
+
+1. Create the following variables containing your GitHub information.
+
+ ```azurecli
+ gitrepo=<replace-with-URL-of-your-own-GitHub-repo>
+ token=<replace-with-a-GitHub-access-token>
+ ```
+
+1. Configure continuous deployment from GitHub.
+
+ > [!TIP]
+    > The `--git-token` parameter is required only once per Azure account (Azure remembers the token).
+
+ ```azurecli
+ az webapp deployment source config --name $webapp --resource-group $resourceGroup --repo-url $gitrepo --branch master --git-token $token
+ ```
+
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
app-service Cli Continuous Deployment Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-vsts.md
tags: azure-service-management
ms.assetid: 389d3bd3-cd8e-4715-a3a1-031ec061d385 ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/15/2022
-# Create an App Service app with continuous deployment using Azure CLI
+# Create an App Service app with continuous deployment from an Azure DevOps repository using Azure CLI
This sample script creates an app in App Service with its related resources, and then sets up continuous deployment from an Azure DevOps repository. For this sample, you need:
This sample script creates an app in App Service with its related resources, and
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/deploy-vsts-continuous/deploy-vsts-continuous.sh?highlight=3-4 "Create an app with continuous deployment from Azure DevOps")]
+
+### To create the web app
++
+### To configure continuous deployment from Azure DevOps
+
+Create the following variables containing your Azure DevOps information.
+
+```azurecli
+gitrepo=<Replace with your Azure DevOps repo URL>
+token=<Replace with an Azure DevOps personal access token>
+```
+
+Configure continuous deployment from Azure DevOps. The `--git-token` parameter is required only once per Azure account (Azure remembers the token).
+
+```azurecli
+az webapp deployment source config --name $webapp --resource-group $resourceGroup \
+--repo-url $gitrepo --branch master --git-token $token
+```
+
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
app-service Cli Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-deploy-local-git.md
Title: 'CLI: Deploy from local Git repo'
-description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to deploy code from a local Git repository.
+description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to deploy code into a local Git repository.
tags: azure-service-management ms.assetid: 048f98aa-f708-44cb-9b9e-953f67dc6da8 ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/15/2022
-# Create an App Service app and deploy code from a local Git repository using Azure CLI
+# Create an App Service app and deploy code into a local Git repository using Azure CLI
This sample script creates an app in App Service with its related resources, and then deploys your app code in a local Git repository. - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/deploy-local-git/deploy-local-git.sh?highlight=3-5 "Create an app and deploy code from a local Git repository")]
+
+### To create the web app
++
+### To deploy to your local Git repository
+
+1. Create the following variables containing your deployment information.
+
+ ```azurecli
+ gitdirectory=<Replace with path to local Git repo>
+ username=<Replace with desired deployment username>
+ password=<Replace with desired deployment password>
+ ```
+
+1. Configure local Git and get deployment URL.
+
+ ```azurecli
+ url=$(az webapp deployment source config-local-git --name $webapp --resource-group $resourceGroup --query url --output tsv)
+ ```
+
+1. Add the Azure remote to your local Git repository and push your code. When prompted for the password, use the value of $password that you specified.
+
+ ```bash
+ cd $gitdirectory
+ git remote add azure $url
+ git push azure main
+ ```
+
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
app-service Cli Deploy Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-deploy-staging-environment.md
Title: 'CLI: Deploy to staging slot'
description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to deploy code to a staging slot. tags: azure-service-management- ms.assetid: 2b995dcd-e471-4355-9fda-00babcdb156e ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/25/2022
This sample script creates an app in App Service with an additional deployment s
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/deploy-deployment-slot/deploy-deployment-slot.sh "Create an app and deploy code to a staging environment")]
+
+### Run the script
++
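A minimal sketch of the slot workflow, assuming `$resourceGroup` and a `$webapp` on a Standard tier or higher plan (deployment slots aren't available on Free, Shared, or Basic):

```azurecli
# Create a staging slot
az webapp deployment slot create --name $webapp --resource-group $resourceGroup --slot staging

# After deploying and verifying the slot, swap it into production
az webapp deployment slot swap --name $webapp --resource-group $resourceGroup \
    --slot staging --target-slot production
```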
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
app-service Cli Integrate App Service With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-integrate-app-service-with-application-gateway.md
ms.devlang: azurecli
na Previously updated : 12/09/2019 Last updated : 04/15/2022
This sample script creates an Azure App Service web app, an Azure Virtual Networ
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/integrate-with-app-gateway/integrate-with-app-gateway.sh "Integrate with Application Gateway")]
+
+### Run the script
++
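The gateway-creation step might look roughly like the following sketch. It assumes a VNet `myVNet` with subnet `mySubnet`, a public IP `myPublicIP`, and the web app's default hostname in `$fqdn`; the flags follow the `az network application-gateway create` reference and are not taken from the exact script:

```azurecli
# Create a v2 gateway that fronts the web app's hostname as its backend
az network application-gateway create --name myAppGateway --resource-group $resourceGroup \
    --vnet-name myVNet --subnet mySubnet --public-ip-address myPublicIP \
    --sku Standard_v2 --capacity 2 --priority 1001 --servers $fqdn \
    --frontend-port 80 --http-settings-port 443 --http-settings-protocol Https
```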
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, App Service app, Cosmos DB, and all related resources. Each command in the table links to command specific documentation.
app-service Cli Linux Acr Aspnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-linux-acr-aspnetcore.md
tags: azure-service-management
ms.assetid: 3a2d1983-ff7b-476a-ac44-49ec2aabb31a ms.devlang: azurecli Previously updated : 12/13/2018 Last updated : 04/25/2022
This sample script creates a resource group, a Linux App Service plan, and an app. It then deploys an ASP.NET Core application using a Docker Container from the Azure Container Registry. - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-You need Azure CLI version 2.0.52 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/deploy-linux-acr/deploy-linux-acr.sh "Linux Azure Container Registry")]
+1. Create a resource group
+
+ ```azurecli
+ az group create --name myResourceGroup --location westus
+ ```
+
+1. Create an Azure Container Registry
+
+ ```azurecli
+ az acr create --name <registry_name> --resource-group myResourceGroup --location westus --sku basic --admin-enabled true --query loginServer --output tsv
+ ```
+
+1. Show ACR credentials
+
+ ```azurecli
+ az acr credential show --name <registry_name> --resource-group myResourceGroup --query [username,passwords[?name=='password'].value] --output tsv
+ ```
+
+1. Before continuing, save the ACR credentials and registry URL. You will need this information in the commands below.
+
+1. Pull from Docker
+
+ ```bash
+ docker login <acr_registry_name>.azurecr.io -u <registry_user>
+ docker pull <registry_user/container_name:version>
+ ```
+
+1. Tag Docker image
+
+ ```bash
+ docker tag <registry_user/container_name:version> <acr_registry_name>.azurecr.io/<container_name:version>
+ ```
+
+1. Push container image to Azure Container Registry
+
+ ```bash
+ docker push <acr_registry_name>.azurecr.io/<container_name:version>
+ ```
+
+1. Create an App Service plan
+
+    ```azurecli
+ az appservice plan create --name AppServiceLinuxDockerPlan --resource-group myResourceGroup --location westus --is-linux --sku S1
+ ```
+
+1. Create a web app
+
+    ```azurecli
+ az webapp create --name <app_name> --plan AppServiceLinuxDockerPlan --resource-group myResourceGroup --deployment-container-image-name <acr_registry_name>.azurecr.io/<container_name:version>
+ ```
+
+1. Configure web app with a custom Docker Container from Azure Container Registry.
+
+    ```azurecli
+    az webapp config container set --resource-group myResourceGroup --name <app_name> --docker-registry-server-url https://<acr_registry_name>.azurecr.io --docker-registry-server-user <registry_user> --docker-registry-server-password <registry_password>
+    ```
+
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, App Service app, and all related resources. Each command in the table links to command specific documentation.
app-service Cli Linux Docker Aspnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-linux-docker-aspnetcore.md
tags: azure-service-management
ms.assetid: 3a2d1983-ff7b-476a-ac44-49ec2aabb31a ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/21/2022
This sample script creates a resource group, a Linux App Service plan, and an ap
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/deploy-linux-docker/deploy-linux-docker.sh?highlight=6 "Linux Docker")]
+
+### To create the web app
++
+### Configure Web App with a Custom Docker Container from Docker Hub
+
+1. Create the following variable containing your Docker Hub container information.
+
+ ```azurecli
+ dockerHubContainerPath="<replace-with-docker-container-path>" #format: <username>/<container-or-image>:<tag>
+ ```
+
+1. Configure the web app with a custom docker container from Docker Hub.
+
+ ```azurecli
+    az webapp config container set --docker-custom-image-name $dockerHubContainerPath --name $webapp --resource-group $resourceGroup
+ ```
+
+1. Run the following commands to get the web app URL and verify that the app responds.
+
+ ```azurecli
+ site="http://$webapp.azurewebsites.net"
+ echo $site
+ curl "$site"
+ ```
+
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, App Service app, and all related resources. Each command in the table links to command specific documentation.
app-service Cli Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-monitor.md
tags: azure-service-management
ms.assetid: 0887656f-611c-4627-8247-b5cded7cef60 ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/15/2022
This sample script creates a resource group, App Service plan, and app, and conf
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/monitor-with-logs/monitor-with-logs.sh "Monitor Logs")]
+
+### Run the script
++
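A minimal sketch of the logging configuration the script sets up, assuming `$resourceGroup` and `$webapp` already exist:

```azurecli
# Enable web server logging to the app's file system
az webapp log config --name $webapp --resource-group $resourceGroup \
    --web-server-logging filesystem

# Stream the logs live, or download them as a zip archive
az webapp log tail --name $webapp --resource-group $resourceGroup
az webapp log download --name $webapp --resource-group $resourceGroup --log-file webapp_logs.zip
```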
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, App Service app, and all related resources. Each command in the table links to command specific documentation.
app-service Cli Scale High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-scale-high-availability.md
tags: azure-service-management
ms.assetid: e4033a50-0e05-4505-8ce8-c876204b2acc ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/15/2022
This sample script creates a resource group, two App Service plans, two apps, a
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/scale-geographic/scale-geographic.sh "Geographic Scale")]
+
+### Run the script
++
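A minimal sketch of the Traffic Manager portion, assuming the apps already exist and `$resourceGroup`/`$webapp` are defined (the profile and DNS names are illustrative):

```azurecli
# Create a performance-routed Traffic Manager profile
az network traffic-manager profile create --name myTmProfile --resource-group $resourceGroup \
    --routing-method Performance --unique-dns-name <unique-dns-name>

# Add a web app as an endpoint by its resource ID (repeat per app/region)
id=$(az webapp show --name $webapp --resource-group $resourceGroup --query id --output tsv)
az network traffic-manager endpoint create --name $webapp --profile-name myTmProfile \
    --resource-group $resourceGroup --type azureEndpoints --target-resource-id $id
```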
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, App Service app, traffic manager profile, and all related resources. Each command in the table links to command specific documentation.
app-service Cli Scale Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-scale-manual.md
tags: azure-service-management
ms.assetid: 251d9074-8fff-4121-ad16-9eca9556ac96 ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/15/2022
This sample script creates a resource group, an App Service plan, and an app. It
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/scale-manual/scale-manual.sh "Manual Scale")]
+
+### Run the script
++
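The scaling step itself is a single command; a minimal sketch, assuming an App Service plan named `$appServicePlan` in `$resourceGroup` (illustrative names):

```azurecli
# Scale the plan out to two instances
az appservice plan update --name $appServicePlan --resource-group $resourceGroup \
    --number-of-workers 2
```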
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, App Service app, and all related resources. Each command in the table links to command specific documentation.
application-gateway Application Gateway Configure Listener Specific Ssl Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-listener-specific-ssl-policy.md
First create a new Application Gateway as you would usually through the portal -
## Set up a listener-specific SSL policy
-To set up a listener-specific SSL policy, you'll need to first go to the **SSL settings** tab in the Portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **SSL Policy** tab is to configure a listener-specific SSL policy. The **Client Authentication** tab is where to upload a client certificate(s) for mutual authentication - for more information, check out [Configuring a mutual authentication](./mutual-authentication-portal.md).
+Before you proceed, here are some important points related to listener-specific SSL policy.
-> [!NOTE]
-> We recommend using TLS 1.2 as TLS 1.2 will be mandated in the future.
+- We recommend using TLS 1.2 as this version will be mandated in the future.
+- You don't have to configure client authentication on an SSL profile to associate it to a listener. You can have only client authentication or listener-specific SSL policy configured, or both configured in your SSL profile.
+- Using a new Predefined or Customv2 policy enhances SSL security and performance for the entire gateway (SSL Policy and SSL Profile). Therefore, you cannot have some listeners on old SSL (predefined or custom) policies and others on new ones. Consider this example:
+
+  You are currently using an SSL Policy and SSL Profile with "older" policies/ciphers. Selecting a "new" Predefined or Customv2 policy for either one automatically applies the same new policy to the other configuration too. However, you can customize either one later within the realm of the new policies, so that only the new predefined policies, the Customv2 policy, or a combination of these co-exist on the gateway.
+
+To set up a listener-specific SSL policy, you'll need to first go to the **SSL settings** tab in the Portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **SSL Policy** tab is to configure a listener-specific SSL policy. The **Client Authentication** tab is where to upload a client certificate(s) for mutual authentication - for more information, check out [Configuring a mutual authentication](./mutual-authentication-portal.md).
1. Search for **Application Gateway** in portal, select **Application gateways**, and click on your existing Application Gateway.
To set up a listener-specific SSL policy, you'll need to first go to the **SSL s
7. Select **Add** to save.
- > [!NOTE]
- > You don't have to configure client authentication on an SSL profile to associate it to a listener. You can have only client authentication configure, or only listener specific SSL policy configured, or both configured in your SSL profile.
![Add listener specific SSL policy to SSL profile](./media/application-gateway-configure-listener-specific-ssl-policy/listener-specific-ssl-policy-ssl-profile.png)

## Associate the SSL profile with a listener
application-gateway Application Gateway Configure Ssl Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-ssl-policy-powershell.md
The `Get-AzApplicationGatewayAvailableSslOptions` cmdlet provides a listing of a
```
DefaultPolicy: AppGwSslPolicy20150501
PredefinedPolicies:
- /subscriptions/147a22e9-2356-4e56-b3de-1f5842ae4a3b/resourceGroups//providers/Microsoft.Network/ApplicationGatewayAvailableSslOptions/default/Applic
+ /subscriptions/xxx-xxx/resourceGroups//providers/Microsoft.Network/ApplicationGatewayAvailableSslOptions/default/Applic
ationGatewaySslPredefinedPolicy/AppGwSslPolicy20150501
- /subscriptions/147a22e9-2356-4e56-b3de-1f5842ae4a3b/resourceGroups//providers/Microsoft.Network/ApplicationGatewayAvailableSslOptions/default/Applic
+ /subscriptions/xxx-xxx/resourceGroups//providers/Microsoft.Network/ApplicationGatewayAvailableSslOptions/default/Applic
ationGatewaySslPredefinedPolicy/AppGwSslPolicy20170401
- /subscriptions/147a22e9-2356-4e56-b3de-1f5842ae4a3b/resourceGroups//providers/Microsoft.Network/ApplicationGatewayAvailableSslOptions/default/Applic
+ /subscriptions/xxx-xxx/resourceGroups//providers/Microsoft.Network/ApplicationGatewayAvailableSslOptions/default/Applic
ationGatewaySslPredefinedPolicy/AppGwSslPolicy20170401S
+ /subscriptions/xxx-xxx/resourceGroups//providers/Microsoft.Network/ApplicationGatewayAvailableSslOptions/default/Applic
+ationGatewaySslPredefinedPolicy/AppGwSslPolicy20220101
+ /subscriptions/xxx-xxx/resourceGroups//providers/Microsoft.Network/ApplicationGatewayAvailableSslOptions/default/Applic
+ationGatewaySslPredefinedPolicy/AppGwSslPolicy20220101S
AvailableCipherSuites:
+ TLS_AES_128_GCM_SHA256
+ TLS_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
AvailableProtocols:
TLSv1_0
TLSv1_1
TLSv1_2
+ TLSv1_3
```

## List pre-defined TLS Policies
-Application gateway comes with three pre-defined policies that can be used. The `Get-AzApplicationGatewaySslPredefinedPolicy` cmdlet retrieves these policies. Each policy has different protocol versions and cipher suites enabled. These pre-defined policies can be used to quickly configure a TLS policy on your application gateway. By default **AppGwSslPolicy20150501** is selected if no specific TLS policy is defined.
+Application gateway comes with multiple pre-defined policies that can be used. The `Get-AzApplicationGatewaySslPredefinedPolicy` cmdlet retrieves these policies. Each policy has different protocol versions and cipher suites enabled. These pre-defined policies can be used to quickly configure a TLS policy on your application gateway. By default **AppGwSslPolicy20150501** is selected if no specific TLS policy is defined.
The following output is an example of running `Get-AzApplicationGatewaySslPredefinedPolicy`.
CipherSuites:
## Configure a custom TLS policy
-When configuring a custom TLS policy, you pass the following parameters: PolicyType, MinProtocolVersion, CipherSuite, and ApplicationGateway. If you attempt to pass other parameters, you get an error when creating or updating the Application Gateway.
+When configuring a custom TLS policy, you pass the following parameters: PolicyType, MinProtocolVersion, CipherSuite, and ApplicationGateway. If you attempt to pass other parameters, you get an error when creating or updating the Application Gateway.
+
+> [!IMPORTANT]
+> - If you are using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU. This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
+> - Cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" with TLSv1.3 are not customizable and included by default when setting a CustomV2 policy with a minimum TLS version of 1.2 or 1.3.
The following example sets a custom TLS policy on an application gateway. It sets the minimum protocol version to `TLSv1_1` and enables the following cipher suites:

* TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
* TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
-> [!IMPORTANT]
-> TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 must be selected when configuring a custom TLS policy. Application gateway uses this cipher suite for backend management. You can use this in combination with any other suites, but this one must be selected as well.
-
```powershell
# get an application gateway resource
$gw = Get-AzApplicationGateway -Name AdatumAppGateway -ResourceGroupName AdatumAppGatewayRG
```
$appgw = New-AzApplicationGateway -Name appgwtest -ResourceGroupName $rg.Resourc
To set a custom TLS policy, pass the following parameters: **PolicyType**, **MinProtocolVersion**, **CipherSuite**, and **ApplicationGateway**. To set a Predefined TLS policy, pass the following parameters: **PolicyType**, **PolicyName**, and **ApplicationGateway**. If you attempt to pass other parameters, you get an error when creating or updating the Application Gateway.
+> [!NOTE]
+> Using a new Predefined or Customv2 policy enhances the SSL security and performance posture of the entire gateway (SSL Policy and SSL Profile). Hence, old and new policies cannot co-exist. If any clients require older TLS versions or ciphers (for example, TLS v1.0), you must use one of the older predefined or custom policies across the entire gateway.
+ In the following example, there are code samples for both Custom Policy and Predefined Policy. Uncomment the policy you want to use. ```powershell
application-gateway Application Gateway Ssl Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ssl-policy-overview.md
You can use Azure Application Gateway to centralize TLS/SSL certificate manageme
The TLS policy includes control of the TLS protocol version as well as the cipher suites and the order in which ciphers are used during a TLS handshake. Application Gateway offers two mechanisms for controlling TLS policy. You can use either a predefined policy or a custom policy.
-## Predefined TLS policy
-
-Application Gateway has three predefined security policies. You can configure your gateway with any of these policies to get the appropriate level of security. The policy names are annotated by the year and month in which they were configured. Each policy offers different TLS protocol versions and cipher suites. We recommend that you use the newest TLS policies to ensure the best TLS security.
-
-## Known issue
-Application Gateway v2 does not support the following DHE ciphers and these won't be used for the TLS connections with clients even though they are mentioned in the predefined policies. Instead of DHE ciphers, secure and faster ECDHE ciphers are recommended.
-
-- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
-- TLS_DHE_RSA_WITH_AES_128_CBC_SHA
-- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
-- TLS_DHE_RSA_WITH_AES_256_CBC_SHA
-- TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
-- TLS_DHE_DSS_WITH_AES_128_CBC_SHA
-- TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
-- TLS_DHE_DSS_WITH_AES_256_CBC_SHA
+## Usage and version details
-### AppGwSslPolicy20150501
+- SSL 2.0 and 3.0 are disabled for all application gateways and are not configurable.
+- A custom TLS policy allows you to select any TLS protocol as the minimum protocol version for your gateway: TLSv1_0, TLSv1_1, TLSv1_2, or TLSv1_3.
+- If no TLS policy is defined, the minimum protocol version is set to TLSv1_0, and protocol versions v1.0, v1.1, and v1.2 are supported.
+- The new **Predefined and Customv2 policies** that support **TLS v1.3** are currently in **Preview** and only available with Application Gateway V2 SKUs (Standard_v2 or WAF_v2).
+- Using a new Predefined or Customv2 policy enhances the SSL security and performance posture of the entire gateway (for SSL Policy and [SSL Profile](application-gateway-configure-listener-specific-ssl-policy.md#set-up-a-listener-specific-ssl-policy)). Hence, old and new policies cannot co-exist on a gateway. You must use one of the older predefined or custom policies across the gateway if clients require older TLS versions or ciphers (for example, TLS v1.0).
+- TLS cipher suites used for the connection are also based on the type of the certificate being used. The cipher suites used in "client to application gateway connections" are based on the type of listener certificates on the application gateway. Whereas the cipher suites used in establishing "application gateway to backend pool connections" are based on the type of server certificates presented by the backend servers.
-|Property |Value |
-|||
-|Name | AppGwSslPolicy20150501 |
-|MinProtocolVersion | TLSv1_0 |
-|Default| True (if no predefined policy is specified) |
-|CipherSuites |TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384<br>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256<br>TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384<br>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256<br>TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA<br>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA<br>TLS_DHE_RSA_WITH_AES_256_GCM_SHA384<br>TLS_DHE_RSA_WITH_AES_128_GCM_SHA256<br>TLS_DHE_RSA_WITH_AES_256_CBC_SHA<br>TLS_DHE_RSA_WITH_AES_128_CBC_SHA<br>TLS_RSA_WITH_AES_256_GCM_SHA384<br>TLS_RSA_WITH_AES_128_GCM_SHA256<br>TLS_RSA_WITH_AES_256_CBC_SHA256<br>TLS_RSA_WITH_AES_128_CBC_SHA256<br>TLS_RSA_WITH_AES_256_CBC_SHA<br>TLS_RSA_WITH_AES_128_CBC_SHA<br>TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384<br>TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256<br>TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384<br>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256<br>TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA<br>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA<br>TLS_DHE_DSS_WITH_AES_256_CBC_SHA256<br>TLS_DHE_DSS_WITH_AES_128_CBC_SHA256<br>TLS_DHE_DSS_WITH_AES_256_CBC_SHA<br>TLS_DHE_DSS_WITH_AES_128_CBC_SHA<br>TLS_RSA_WITH_3DES_EDE_CBC_SHA<br>TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA |
-
-### AppGwSslPolicy20170401
-
-|Property |Value |
-| | |
-|Name | AppGwSslPolicy20170401 |
-|MinProtocolVersion | TLSv1_1 |
-|Default| False |
-|CipherSuites |TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256<br>TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384<br>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA<br>TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA<br>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256<br>TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384<br>TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384<br>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256<br>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA<br>TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA<br>TLS_RSA_WITH_AES_256_GCM_SHA384<br>TLS_RSA_WITH_AES_128_GCM_SHA256<br>TLS_RSA_WITH_AES_256_CBC_SHA256<br>TLS_RSA_WITH_AES_128_CBC_SHA256<br>TLS_RSA_WITH_AES_256_CBC_SHA<br>TLS_RSA_WITH_AES_128_CBC_SHA |
-
-### AppGwSslPolicy20170401S
+## Predefined TLS policy
-|Property |Value |
-|||
-|Name | AppGwSslPolicy20170401S |
-|MinProtocolVersion | TLSv1_2 |
-|Default| False |
-|CipherSuites |TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 <br> TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 <br> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA <br>TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA <br>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256<br>TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384<br>TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384<br>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256<br>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA<br>TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA<br>TLS_RSA_WITH_AES_256_GCM_SHA384<br>TLS_RSA_WITH_AES_128_GCM_SHA256<br>TLS_RSA_WITH_AES_256_CBC_SHA256<br>TLS_RSA_WITH_AES_128_CBC_SHA256<br>TLS_RSA_WITH_AES_256_CBC_SHA<br>TLS_RSA_WITH_AES_128_CBC_SHA<br> |
+Application Gateway offers several predefined security policies. You can configure your gateway with any of these policies to get the appropriate level of security. The policy names are annotated by the year and month in which they were configured (AppGwSslPolicy&lt;YYYYMMDD&gt;). Each policy offers different TLS protocol versions and/or cipher suites. These predefined policies are configured based on best practices and recommendations from the Microsoft Security team. We recommend that you use the newest TLS policies to ensure the best TLS security.
+
+The following table shows the list of cipher suites and minimum protocol version support for each predefined policy. The ordering of the cipher suites determines the priority order during TLS negotiation. To know the exact ordering of the cipher suites for these predefined policies, you can refer to PowerShell, the CLI, the REST API, or the Listeners blade in the portal.
+
+| Predefined policy names (AppGwSslPolicy&lt;YYYYMMDD&gt;) | 20150501 | 20170401 | 20170401S | 20220101 <br/> (Preview) | 20220101S <br/> (Preview) |
+| - | - | - | - | - | - |
+| **Minimum Protocol Version** | 1.0 | 1.1 | 1.2 | 1.2 | 1.2 |
+| **Enabled protocol versions** | 1.0<br/>1.1<br/>1.2 | 1.1<br/>1.2 | 1.2 | 1.2<br/>1.3 | 1.2<br/>1.3 |
+| **Default** | True | False | False | False | False |
+| TLS_AES_128_GCM_SHA256 | &cross; | &cross; | &cross; | &check; | &check; |
+| TLS_AES_256_GCM_SHA384 | &cross; | &cross; | &cross; | &check; | &check; |
+| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 | &check; | &check; | &check; | &check; | &check; |
+| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 | &check; | &check; | &check; | &check; | &check; |
+| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 | &check; | &cross; | &cross; | &check; | &cross; |
+| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 | &check; | &cross; | &cross; | &check; | &cross; |
+| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA | &check; | &check; | &check; | &cross; | &cross; |
+| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA | &check; | &check; | &check; | &cross; | &cross; |
+| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 | &check; | &cross; | &cross; | &cross; | &cross; |
+| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 | &check; | &cross; | &cross; | &cross; | &cross; |
+| TLS_DHE_RSA_WITH_AES_256_CBC_SHA | &check; | &cross; | &cross; | &cross; | &cross; |
+| TLS_DHE_RSA_WITH_AES_128_CBC_SHA | &check; | &cross; | &cross; | &cross; | &cross; |
+| TLS_RSA_WITH_AES_256_GCM_SHA384 | &check; | &check; | &check; | &cross; | &cross; |
+| TLS_RSA_WITH_AES_128_GCM_SHA256 | &check; | &check; | &check; | &cross; | &cross; |
+| TLS_RSA_WITH_AES_256_CBC_SHA256 | &check; | &check; | &check; | &cross; | &cross; |
+| TLS_RSA_WITH_AES_128_CBC_SHA256 | &check; | &check; | &check; | &cross; | &cross; |
+| TLS_RSA_WITH_AES_256_CBC_SHA | &check; | &check; | &check; | &cross; | &cross; |
+| TLS_RSA_WITH_AES_128_CBC_SHA | &check; | &check; | &check; | &cross; | &cross; |
+| TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 | &check; | &check; | &check; | &check; | &check; |
+| TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 | &check; | &check; | &check; | &check; | &check; |
+| TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 | &check; | &check; | &check; | &check; | &cross; |
+| TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 | &check; | &check; | &check; | &check; | &cross; |
+| TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA | &check; | &check; | &check; | &cross; | &cross; |
+| TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA | &check; | &check; | &check; | &cross; | &cross; |
+| TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 | &check; | &cross; | &cross; | &cross; | &cross; |
+| TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 | &check; | &cross; | &cross; | &cross; | &cross; |
+| TLS_DHE_DSS_WITH_AES_256_CBC_SHA | &check; | &cross; | &cross; | &cross; | &cross; |
+| TLS_DHE_DSS_WITH_AES_128_CBC_SHA | &check; | &cross; | &cross; | &cross; | &cross; |
+| TLS_RSA_WITH_3DES_EDE_CBC_SHA | &check; | &cross; | &cross; | &cross; | &cross; |
+| TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA | &check; | &cross; | &cross; | &cross; | &cross; |
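For reference, a predefined or custom policy can also be applied with the Azure CLI; a minimal sketch (the gateway and resource group names are assumptions):

```azurecli
# Apply the newest predefined policy to an existing gateway
az network application-gateway ssl-policy set --gateway-name myAppGateway \
    --resource-group myResourceGroup --policy-type Predefined --name AppGwSslPolicy20220101

# Or set a custom policy with an explicit minimum version and cipher order
az network application-gateway ssl-policy set --gateway-name myAppGateway \
    --resource-group myResourceGroup --policy-type Custom --min-protocol-version TLSv1_2 \
    --cipher-suites TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
```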
## Custom TLS policy
-If a predefined TLS policy needs to be configured for your requirements, you must define your own custom TLS policy. With a custom TLS policy, you have complete control over the minimum TLS protocol version to support, as well as the supported cipher suites and their priority order.
+If a TLS policy needs to be configured for your requirements, you can use a Custom TLS policy. With a custom TLS policy, you have complete control over the minimum TLS protocol version to support, as well as the supported cipher suites and their priority order.
+
+> [!NOTE]
+> The newer, stronger ciphers and TLSv1.3 support are only available with the **CustomV2 policy (Preview)**. It provides enhanced security and performance benefits.
> [!IMPORTANT]
-> If you are using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU.
+> - If you are using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU.
> This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
+> - The cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" with TLSv1.3 are not customizable. Hence, these are included by default when choosing a CustomV2 policy with minimum protocol version 1.2 or 1.3.
-### TLS/SSL protocol versions
-
-* SSL 2.0 and 3.0 are disabled by default for all application gateways. These protocol versions are not configurable.
-* A custom TLS policy gives you the option to select any one of the following three protocols as the minimum TLS protocol version for your gateway: TLSv1_0, TLSv1_1, and TLSv1_2.
-* If no TLS policy is defined, all three protocols (TLSv1_0, TLSv1_1, and TLSv1_2) are enabled.
### Cipher suites

Application Gateway supports the following cipher suites from which you can choose your custom policy. The ordering of the cipher suites determines the priority order during TLS negotiation.

-
+- TLS_AES_128_GCM_SHA256 (available only with Customv2)
+- TLS_AES_256_GCM_SHA384 (available only with Customv2)
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
Application Gateway supports the following cipher suites from which you can choo
- TLS_RSA_WITH_3DES_EDE_CBC_SHA - TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
-> [!NOTE]
-> TLS cipher suites used for the connection are also based on the type of the certificate being used. In client to application gateway connections, the cipher suites used are based on the type of server certificates on the application gateway listener. In application gateway to backend pool connections, the cipher suites used are based on the type of server certificates on the backend pool servers.
+## Known issue
+Application Gateway v2 does not support the following DHE ciphers. These won't be used for the TLS connections with clients even though they are mentioned in the predefined policies. Instead of DHE ciphers, secure and faster ECDHE ciphers are recommended.
+
+- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
+- TLS_DHE_RSA_WITH_AES_128_CBC_SHA
+- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
+- TLS_DHE_RSA_WITH_AES_256_CBC_SHA
+- TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
+- TLS_DHE_DSS_WITH_AES_128_CBC_SHA
+- TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
+- TLS_DHE_DSS_WITH_AES_256_CBC_SHA
## Next steps
application-gateway High Traffic Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/high-traffic-support.md
Enable bot protection to block known bad bots. This should reduce the amount of
Diagnostic logs allow you to view firewall logs, performance logs, and access logs. You can use these logs in Azure to manage and troubleshoot Application Gateways. For more information, see our [diagnostics documentation](./application-gateway-diagnostics.md#diagnostic-logging).

## Set up a TLS policy for extra security
-Ensure you're using the latest TLS policy version ([AppGwSslPolicy20170401S](./application-gateway-ssl-policy-overview.md#appgwsslpolicy20170401s)). This enforces TLS 1.2 and stronger ciphers. For more information, see [configuring TLS policy versions and cipher suites via PowerShell](./application-gateway-configure-ssl-policy-powershell.md).
+Ensure you're using the latest TLS policy version ([AppGwSslPolicy20220101](./application-gateway-ssl-policy-overview.md#predefined-tls-policy)) or later. These policies enforce a minimum TLS version of 1.2 with stronger ciphers. For more information, see [configuring TLS policy versions and cipher suites via PowerShell](./application-gateway-configure-ssl-policy-powershell.md).
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
In the case of a URL redirect, Application Gateway sends a redirect response to
- If a response has more than one header with the same name, then rewriting the value of one of those headers will result in dropping the other headers in the response. This can usually happen with Set-Cookie header since you can have more than one Set-Cookie header in a response. One such scenario is when you are using an app service with an application gateway and have configured cookie-based session affinity on the application gateway. In this case the response will contain two Set-Cookie headers: one used by the app service, for example: `Set-Cookie: ARRAffinity=ba127f1caf6ac822b2347cc18bba0364d699ca1ad44d20e0ec01ea80cda2a735;Path=/;HttpOnly;Domain=sitename.azurewebsites.net` and another for application gateway affinity, for example, `Set-Cookie: ApplicationGatewayAffinity=c1a2bd51lfd396387f96bl9cc3d2c516; Path=/`. Rewriting one of the Set-Cookie headers in this scenario can result in removing the other Set-Cookie header from the response. - Rewrites are not supported when the application gateway is configured to redirect the requests or to show a custom error page.-- Header names can contain any alphanumeric characters and specific symbols as defined in [RFC 7230](https://tools.ietf.org/html/rfc7230#page-27). We don't currently support the underscore (\_) special character in Header names.
+- Request header names can contain alphanumeric characters and hyphens. Header names containing other characters will be discarded when a request is sent to the backend target.
+- Response header names can contain any alphanumeric characters and specific symbols as defined in [RFC 7230](https://tools.ietf.org/html/rfc7230#page-27), with the exception of underscores (\_).
- Connection and upgrade headers cannot be rewritten
- Rewrites are not supported for 4xx and 5xx responses generated directly from Application Gateway
azure-maps Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-scope.md
Title: Azure Maps service geographic scope+ description: Learn about Azure Maps service's geographic mappings
The table below describes the mapping between geography and supported Azure geog
### URL example for geographic mapping
-The following is the [Search - Get Search Address](/rest/api/maps/search/get-search-address) command:
+The following is the [Search - Get Search Address](/rest/api/maps/search/get-search-address) request:
```http
GET https://{geography}.atlas.microsoft.com/search/address/{format}?api-version=1.0&query={query}
```
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
} }, "dataSources": {
- "logFiles ": [
+ "logFiles": [
{ "streams": [ "Custom-MyLogFileFormat"
The final step is to create a data collection association that associates the da
- Learn more about the [Azure Monitor agent](azure-monitor-agent-overview.md). - Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).-- Learn more about [data collection endpoints](../essentials/data-collection-endpoint-overview.md).
+- Learn more about [data collection endpoints](../essentials/data-collection-endpoint-overview.md).
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
Action groups provide a modular and reusable way to trigger actions for your Azu
### Define a template
-Certain work item types can use templates that you define in the ITSM tool. By using templates, you can define fields that will be automatically populated according to fixed values for an action group. You can define which template you want to use as a part of the definition of an action group. You can find in ServiceNow docs information about how to create templates - (here)[https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html].
+Certain work item types can use templates that you define in the ITSM tool. Using templates, you can define fields that will be automatically populated using fixed values for an action group. You can define which template you want to use as a part of the definition of an action group. Find information about how to create templates [here](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html).
To create an action group:
azure-monitor Itsmc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md
Title: IT Service Management Connector overview
-description: This article provides an overview of IT Service Management Connector (ITSMC).
+ Title: IT Service Management integration
+description: This article provides an overview of the ways you can integrate with an IT Service Management product.
Last updated 3/30/2022
-# IT Service Management Connector Overview
+# IT Service Management (ITSM) Integration
:::image type="icon" source="media/itsmc-overview/itsmc-symbol.png":::
-IT Service Management Connector allows you to connect Azure Monitor to supported IT Service Management (ITSM) products or services using either ITSM actions or Secure webhook actions.
+This article describes how you can integrate Azure Monitor with supported IT Service Management (ITSM) products.
-Azure services like Azure Log Analytics and Azure Monitor provide tools to detect, analyze, and troubleshoot problems with your Azure and non-Azure resources. But the work items related to an issue typically reside in an ITSM product or service. The ITSM Connector provides a bi-directional connection between Azure and ITSM tools to help you resolve issues faster. You can create work items in your ITSM tool, based on your Azure alerts (Metric Alerts, Activity Log Alerts, and Log Analytics alerts).
+Azure services like Azure Log Analytics and Azure Monitor provide tools to detect, analyze, and troubleshoot problems with your Azure and non-Azure resources. But the work items related to an issue typically reside in an ITSM product or service.
-The ITSM Connector supports connections with the following ITSM tools:
+Azure Monitor provides a bi-directional connection between Azure and ITSM tools to help you resolve issues faster. You can create work items in your ITSM tool, based on your Azure alerts (Metric Alerts, Activity Log Alerts, and Log Analytics alerts).
+
+Azure Monitor supports connections with the following ITSM tools:
- ServiceNow ITSM or ITOM-- System Center Service Manager (SCSM) - BMC >[!NOTE] > As of March 1, 2022, System Center ITSM integrations with Azure alerts are no longer enabled for new customers. New System Center ITSM connections are not supported. > Existing ITSM connections are supported. For information about legal terms and the privacy policy, see [Microsoft Privacy Statement](https://go.microsoft.com/fwLink/?LinkID=522330&clcid=0x9).
-## ITSM Connector Workflow
-Depending on your integration, start using the ITSM Connector with these steps:
+## ITSM Integration Workflow
+Depending on your integration, start connecting to your ITSM with these steps:
- For ServiceNow ITOM events and BMC Helix, use the Secure webhook action: 1. [Register your app with Azure AD.](./itsm-connector-secure-webhook-connections-azure-configuration.md#register-with-azure-active-directory) 1. [Define Service principal.](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-service-principal)
Depending on your integration, start using the ITSM Connector with these steps:
- [ServiceNow ITOM](./itsmc-secure-webhook-connections-servicenow.md) - [BMC Helix](./itsmc-secure-webhook-connections-bmc.md). -- For Service Now ITSM and SCSM use the ITSM action:
+- For ServiceNow ITSM, use the ITSM action:
1. Connect to your ITSM. - For ServiceNow ITSM, see [the ServiceNow connection instructions](./itsmc-connections-servicenow.md). - For SCSM, see [the System Center Service Manager connection instructions](./itsmc-connections-scsm.md).
- 1. (Optional) Set up the IP Ranges. In order to list the ITSM IP addresses in order to allow ITSM connections from partners ITSM tools, we recommend the to list the whole public IP range of Azure region where their LogAnalytics workspace belongs. [details here](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/EUS2/WUS2/US South Central the customer can list ActionGroup network tag only.)
- 1. [Configure Azure ITSM Solution](./itsmc-definition.md#add-it-service-management-connector)
- 1. [Configure Azure ITSM connector for your ITSM environment.](./itsmc-definition.md#create-an-itsm-connection)
+ 1. (Optional) Set up the IP ranges. To allow ITSM connections from partner ITSM tools, we recommend listing the whole public IP range of the Azure region where your Log Analytics workspace belongs ([details here](https://www.microsoft.com/en-us/download/details.aspx?id=56519)). For the EUS/WEU/EUS2/WUS2/US South Central regions, you can list the ActionGroup network tag only.
+ 1. [Configure your Azure ITSM Solution](./itsmc-definition.md#add-it-service-management-connector)
+ 1. [Configure the Azure ITSM connector for your ITSM environment.](./itsmc-definition.md#create-an-itsm-connection)
1. [Configure Action Group to leverage ITSM connector.](./itsmc-definition.md#define-a-template) ## Next steps
azure-monitor Data Model Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-context.md
Anonymous user ID. Represents the end user of the application. When telemetry is
[Sampling](./sampling.md) is one of the techniques to minimize the amount of collected telemetry. The sampling algorithm attempts to either sample in or sample out all correlated telemetry. The anonymous user ID is used for sampling score generation, so it should be a sufficiently random value.
+> [!NOTE]
+> The count of anonymous user IDs is not the same as the number of unique application users. The count of anonymous user IDs is typically higher because each time the user opens your app on a different device or browser, or clears browser cookies, a new unique anonymous user ID is allocated. This can result in counting the same physical user multiple times.
+
+User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration.
+ Using anonymous user ID to store user name is a misuse of the field. Use Authenticated user ID. Max length: 128
Max length: 128
Authenticated user ID. The opposite of anonymous user ID, this field represents the user with a friendly name. This is only collected by default with the ASP.NET Framework SDK's [`AuthenticatedUserIdTelemetryInitializer`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/Web/Web/AuthenticatedUserIdTelemetryInitializer.cs).
+When users authenticate in your app, you can use the Application Insights SDK to initialize the authenticated user ID with a value that identifies the user persistently across browsers and devices. All telemetry items are then attributed to that unique ID. This enables querying for all telemetry collected for a specific user (subject to [sampling configurations](./sampling.md) and [telemetry filtering](./api-filtering-sampling.md)).
+
+User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration.
+ Max length: 1024
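As an illustrative sketch (an assumption based on the public SDK API, not the documented initializer linked above), the authenticated user ID can be set on the telemetry context after sign-in; the user ID shown is a placeholder:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Minimal sketch: attribute telemetry to a signed-in user.
// "user@contoso.com" is a placeholder; use any ID that stays stable
// for the same user across browsers and devices.
var telemetryClient = new TelemetryClient(TelemetryConfiguration.CreateDefault());
telemetryClient.Context.User.AuthenticatedUserId = "user@contoso.com";

// Telemetry sent through this client is now attributed to that user.
telemetryClient.TrackEvent("CheckoutCompleted");
```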
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
OpenCensus doesn't have an extension for FastAPI. To write your own FastAPI midd
HTTP_URL = COMMON_ATTRIBUTES['HTTP_URL'] HTTP_STATUS_CODE = COMMON_ATTRIBUTES['HTTP_STATUS_CODE']
- tracer = Tracer(exporter=AzureExporter(connection_string=f'InstrumentationKey={APPINSIGHTS_INSTRUMENTATIONKEY}'),sampler=ProbabilitySampler(1.0))
+ APPINSIGHTS_CONNECTION_STRING='<your-appinsights-connection-string-here>'
+ exporter=AzureExporter(connection_string=f'{APPINSIGHTS_CONNECTION_STRING}')
+ sampler=ProbabilitySampler(1.0)
# fastapi middleware for opencensus @app.middleware("http")
- async def middlewareOpencensus(request: Request, call_next):
+ async def middlewareOpencensus(request: Request, call_next):
+ tracer = Tracer(exporter=exporter, sampler=sampler)
with tracer.span("main") as span: span.span_kind = SpanKind.SERVER
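Pieced together from the fragments above, a minimal sketch of the fixed middleware might look like the following (the connection string is a placeholder, and the response handling is an assumption based on the fragments):

```python
from fastapi import FastAPI, Request
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.attributes_helper import COMMON_ATTRIBUTES
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.span import SpanKind
from opencensus.trace.tracer import Tracer

HTTP_URL = COMMON_ATTRIBUTES['HTTP_URL']
HTTP_STATUS_CODE = COMMON_ATTRIBUTES['HTTP_STATUS_CODE']

APPINSIGHTS_CONNECTION_STRING = '<your-appinsights-connection-string-here>'  # placeholder
exporter = AzureExporter(connection_string=APPINSIGHTS_CONNECTION_STRING)
sampler = ProbabilitySampler(1.0)

app = FastAPI()

# fastapi middleware for opencensus
@app.middleware("http")
async def middleware_opencensus(request: Request, call_next):
    # Create a new Tracer per request so spans aren't shared across requests.
    tracer = Tracer(exporter=exporter, sampler=sampler)
    with tracer.span("main") as span:
        span.span_kind = SpanKind.SERVER
        response = await call_next(request)
        tracer.add_attribute_to_current_span(HTTP_URL, str(request.url))
        tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, response.status_code)
    return response
```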
azure-monitor Profiler Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-containers.md
+
+ Title: Profile Azure Containers with Application Insights Profiler
+description: Enable Application Insights Profiler for Azure Containers.
++
+ms.contributor: charles.weininger
+ Last updated : 04/25/2022++
+# Profile live Azure containers with Application Insights
+
+You can enable the Application Insights Profiler for ASP.NET Core applications running in your container almost without code changes. To enable the Application Insights Profiler on your container instance, you'll need to:
+
+* Add the reference to the NuGet package.
+* Set the environment variables to enable it.
+
+In this article, you'll learn how to:
+- Install the NuGet package in the project.
+- Set the environment variable via the orchestrator (like Kubernetes).
+- Address security considerations around production deployment, like protecting your Application Insights instrumentation key.
+
+## Prerequisites
+
+- [An Application Insights resource](./create-new-resource.md). Make note of the instrumentation key.
+- [Docker Desktop](https://www.docker.com/products/docker-desktop/) to build docker images.
+- [.NET 6 SDK](https://dotnet.microsoft.com/download/dotnet/6.0) installed.
+
+## Set up the environment
+
+1. Clone and use the following [sample project](https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore/tree/main/examples/EnableServiceProfilerForContainerAppNet6):
+
+ ```bash
+ git clone https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore.git
+ ```
+
+1. Navigate to the Container App example:
+
+ ```bash
+ cd examples/EnableServiceProfilerForContainerAppNet6
+ ```
+
+1. This example is a bare-bones project created by calling the following CLI command:
+
+ ```powershell
+ dotnet new mvc -n EnableServiceProfilerForContainerApp
+ ```
+
+ Note that we've added a delay to the `Controllers/WeatherForecastController.cs` project to simulate a bottleneck.
+
+ ```CSharp
+ [HttpGet(Name = "GetWeatherForecast")]
+ public IEnumerable<WeatherForecast> Get()
+ {
+ SimulateDelay();
+ ...
+ // Other existing code.
+ }
+ private void SimulateDelay()
+ {
+ // Delay for 500ms to 2s to simulate a bottleneck.
+ Thread.Sleep((new Random()).Next(500, 2000));
+ }
+ ```
+
+## Pull the latest ASP.NET Core build/runtime images
+
+1. Navigate to the .NET Core 6.0 example directory.
+
+ ```bash
+ cd examples/EnableServiceProfilerForContainerAppNet6
+ ```
+
+1. Pull the latest ASP.NET Core images:
+
+ ```shell
+ docker pull mcr.microsoft.com/dotnet/sdk:6.0
+ docker pull mcr.microsoft.com/dotnet/aspnet:6.0
+ ```
+
+> [!TIP]
+> Find the official images for Docker [SDK](https://hub.docker.com/_/microsoft-dotnet-sdk) and [runtime](https://hub.docker.com/_/microsoft-dotnet-aspnet).
+
+## Add your Application Insights key
+
+1. In your Application Insights resource in the Azure portal, take note of your instrumentation key.
+
+ :::image type="content" source="./media/profiler-containerinstances/application-insights-key.png" alt-text="Find instrumentation key in Azure portal":::
+
+1. Open `appsettings.json` and add your Application Insights instrumentation key to this code section:
+
+ ```json
+ {
+ "ApplicationInsights":
+ {
+ "InstrumentationKey": "Your instrumentation key"
+ }
+ }
+ ```
+
+## Build and run the Docker image
+
+1. Review the `Dockerfile`.
+
+1. Build the example image:
+
+ ```bash
+ docker build -t profilerapp .
+ ```
+
+1. Run the container:
+
+ ```bash
+ docker run -d -p 8080:80 --name testapp profilerapp
+ ```
+
+## View the container via your browser
+
+To hit the endpoint, either:
+
+- Visit [http://localhost:8080/weatherforecast](http://localhost:8080/weatherforecast) in your browser, or
+- Use curl:
+
+ ```bash
+ curl http://localhost:8080/weatherforecast
+ ```
++
+## Inspect the logs
+
+Optionally, inspect the local log to see if a session of profiling finished:
+
+```bash
+docker logs testapp
+```
+
+In the local logs, note the following events:
+
+```output
+Starting application insights profiler with instrumentation key: your-instrumentation key # Double check the instrumentation key
+Service Profiler session started. # Profiler started.
+Finished calling trace uploader. Exit code: 0 # Uploader is called with exit code 0.
+Service Profiler session finished. # A profiling session is completed.
+```
+
+## View the Service Profiler traces
+
+1. Wait for 2-5 minutes so the events can be aggregated to Application Insights.
+1. Open the **Performance** blade in your Application Insights resource.
+1. Once the trace process is complete, you'll see the **Profiler Traces** button, as shown below:
+
+ :::image type="content" source="./media/profiler-containerinstances/profiler_traces.png" alt-text="Profile traces in the performance blade":::
+++
+## Clean up resources
+
+Run the following command to stop the example project:
+
+```bash
+docker rm -f testapp
+```
+
+## Next steps
+
+- Learn more about [Application Insights Profiler](./profiler-overview.md).
+- Learn how to enable Profiler in your [ASP.NET Core applications run on Linux](./profiler-aspnetcore-linux.md).
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-overview.md
Title: Profile production apps in Azure with Application Insights Profiler
-description: Identify the hot path in your web server code with a low-footprint profiler.
+description: Identify the hot path in your web server code with a low-footprint profiler
++
+ms.contributor: charles.weininger
Previously updated : 08/06/2018 Last updated : 04/25/2022+ # Profile production applications in Azure with Application Insights+
+Azure Application Insights Profiler provides performance traces for applications running in production in Azure. Profiler:
+- Captures the data automatically at scale without negatively affecting your users.
+- Helps you identify the "hot" code path spending the most time handling a particular web request.
+ ## Enable Application Insights Profiler for your application
-Azure Application Insights Profiler provides performance traces for applications that are running in production in Azure. Profiler captures the data automatically at scale without negatively affecting your users. Profiler helps you identify the "hot" code path that takes the longest time when it's handling a particular web request.
+### Supported in Profiler
-Profiler works with .NET applications that are deployed on the following Azure services. Specific instructions for enabling Profiler for each service type are in the links below.
+Profiler works with .NET applications deployed on the following Azure services. View specific instructions for enabling Profiler for each service type in the links below.
-* [Azure App Service](profiler.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Cloud Services](profiler-cloudservice.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Service Fabric](profiler-servicefabric.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Virtual Machines and virtual machine scale sets](profiler-vm.md?toc=/azure/azure-monitor/toc.json)
-* [**PREVIEW** ASP.NET Core Azure Linux Web Apps](profiler-aspnetcore-linux.md?toc=/azure/azure-monitor/toc.json)
+| Compute platform | .NET (>= 4.6) | .NET Core | Java |
+| - | - | - | - |
+| [Azure App Service](profiler.md) | Yes | Yes | No |
+| [Azure Virtual Machines and virtual machine scale sets for Windows](profiler-vm.md) | Yes | Yes | No |
+| [Azure Virtual Machines and virtual machine scale sets for Linux](profiler-aspnetcore-linux.md) | No | Yes | No |
+| [Azure Cloud Services](profiler-cloudservice.md) | Yes | Yes | N/A |
+| [Azure Container Instances for Windows](profiler-containers.md) | No | Yes | No |
+| [Azure Container Instances for Linux](profiler-containers.md) | No | Yes | No |
+| Kubernetes | No | Yes | No |
+| Azure Functions | No | No | No |
+| Azure Spring Cloud | N/A | No | No |
+| [Azure Service Fabric](profiler-servicefabric.md) | Yes | Yes | No |
If you've enabled Profiler but aren't seeing traces, check our [Troubleshooting guide](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json). ## View Profiler data
-For Profiler to upload traces, your application must be actively handling requests. If you're doing an experiment, you can generate requests to your web app by using [Application Insights performance testing](/vsts/load-test/app-service-web-app-performance-test). If you've newly enabled Profiler, you can run a short load test. While the load test is running, select the **Profile Now** button on the [**Profiler Settings** pane](profiler-settings.md). When Profiler is running, it profiles randomly about once per hour and for a duration of two minutes. If your application is handling a steady stream of requests, Profiler uploads traces every hour.
+For Profiler to upload traces, your application must be actively handling requests. To generate requests:
+- **If you're doing an experiment,** use [Application Insights performance testing](/vsts/load-test/app-service-web-app-performance-test).
+- **If you've newly enabled Profiler,** simply run a short load test.
-After your application receives some traffic and Profiler has had time to upload the traces, you should have traces to view. This process can take 5 to 10 minutes. To view traces, in the **Performance** pane, select **Take Actions**, and then select the **Profiler Traces** button.
+While the load test is running, select the **Profile Now** button on the [**Profiler Settings** pane](profiler-settings.md). Once Profiler starts running, it profiles randomly about once per hour, for a duration of two minutes. If your application is handling a steady stream of requests, Profiler uploads traces every hour.
-![Application Insights Performance pane preview Profiler traces][performance-blade]
+After your application receives some traffic and Profiler has had time to upload the traces, you should be able to view traces within 5 to 10 minutes. To view traces:
-Select a sample to display a code-level breakdown of time spent executing the request.
+1. Select **Take Actions** in the **Performance** pane.
+1. Select the **Profiler Traces** button.
-![Application Insights trace explorer][trace-explorer]
+ ![Application Insights Performance pane preview Profiler traces][performance-blade]
+
+1. Select a sample to display a code-level breakdown of time spent executing the request.
+
+ ![Application Insights trace explorer][trace-explorer]
The trace explorer displays the following information:
-* **Show Hot Path**: Opens the biggest leaf node, or at least something close. In most cases, this node is near a performance bottleneck.
-* **Label**: The name of the function or event. The tree displays a mix of code and events that occurred, such as SQL and HTTP events. The top event represents the overall request duration.
-* **Elapsed**: The time interval between the start of the operation and the end of the operation.
-* **When**: The time when the function or event was running in relation to other functions.
+| Category | Description |
+| -- | -- |
+| **Show Hot Path** | Opens the biggest leaf node, or at least something close. In most cases, this node is near a performance bottleneck. |
+| **Label** | The name of the function or event. The tree displays a mix of code and events that occurred, such as SQL and HTTP events. The top event represents the overall request duration. |
+| **Elapsed** | The time interval between the start of the operation and the end of the operation. |
+| **When** | The time when the function or event was running in relation to other functions. |
-## How to read performance data
-The Microsoft service profiler uses a combination of sampling methods and instrumentation to analyze the performance of your application. When detailed collection is in progress, the service profiler samples the instruction pointer of each machine CPU every millisecond. Each sample captures the complete call stack of the thread that's currently executing. It gives detailed information about what that thread was doing, at both a high level and a low level of abstraction. The service profiler also collects other events to track activity correlation and causality, including context switching events, Task Parallel Library (TPL) events, and thread pool events.
+### Other options for viewing profiler data
+
+Besides viewing the profiles in the Azure portal, you can download the profiles and open them in other tools. There are three options for viewing the downloaded profiles. The downloaded file is a `.diagsession` file that can be opened natively by Visual Studio; use the profiling tools in Visual Studio to examine the details of the file.
+
+If you rename the file by adding `.zip` to the end of the file name, you can also open it in:
-The call stack that's displayed in the timeline view is the result of the sampling and instrumentation. Because each sample captures the complete call stack of the thread, it includes code from Microsoft .NET Framework and from other frameworks that you reference.
+- Windows Performance Analyzer
+ - [Download](https://www.microsoft.com/p/windows-performance-analyzer/9n0w1b2bxgnz)
+ - [Documentation](https://docs.microsoft.com/windows-hardware/test/wpt/windows-performance-analyzer)
+- PerfView
+ - [Download](https://github.com/microsoft/perfview/blob/main/documentation/Downloading.md)
+ - [How-to videos](https://docs.microsoft.com/shows/PerfView-Tutorial/)
+
+## How to read performance data
+
+The Microsoft service Profiler uses a combination of sampling methods and instrumentation to analyze the performance of your application. During detailed collection the service Profiler:
+- Samples the instruction pointer of each machine CPU every millisecond. Each sample:
+ - Captures the complete call stack of the thread that's currently executing (the result of sampling and instrumentation).
+ - Includes code from Microsoft .NET Framework and from other frameworks that you reference.
+ - Gives detailed information about the thread actions, at both a high level and a low level of abstraction.
+- Collects other events to track activity correlation and causality, including:
+ - Context switching events
+ - Task Parallel Library (TPL) events
+ - Thread pool events
### <a id="jitnewobj"></a>Object allocation (clr!JIT\_New or clr!JIT\_Newarr1)
-**clr!JIT\_New** and **clr!JIT\_Newarr1** are helper functions in .NET Framework that allocate memory from a managed heap. **clr!JIT\_New** is invoked when an object is allocated. **clr!JIT\_Newarr1** is invoked when an object array is allocated. These two functions are usually fast and take relatively small amounts of time. If **clr!JIT\_New** or **clr!JIT\_Newarr1** takes a lot of time in your timeline, the code might be allocating many objects and consuming significant amounts of memory.
+**clr!JIT\_New** and **clr!JIT\_Newarr1** are helper functions in .NET Framework that allocate memory from a managed heap.
+- **clr!JIT\_New** is invoked when an object is allocated.
+- **clr!JIT\_Newarr1** is invoked when an object array is allocated.
+
+These two functions usually work quickly. If **clr!JIT\_New** or **clr!JIT\_Newarr1** take up time in your timeline, the code might be allocating many objects and consuming significant amounts of memory.
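As a hypothetical illustration, an allocation-heavy loop like this sketch would surface as time under **clr!JIT\_New** and **clr!JIT\_Newarr1**:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical hot path: every iteration allocates a new array
// (clr!JIT_Newarr1) plus new string objects (clr!JIT_New).
var results = new List<string>();
for (int i = 0; i < 1_000_000; i++)
{
    var buffer = new byte[1024];                 // array allocation
    results.Add(Convert.ToBase64String(buffer)); // object allocation
}
Console.WriteLine(results.Count);
```

Reusing buffers (for example, with `System.Buffers.ArrayPool<byte>`) is one common way to reduce this kind of allocation cost.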
### <a id="theprestub"></a>Loading code (clr!ThePreStub)
-**clr!ThePreStub** is a helper function in .NET Framework that prepares the code to execute for the first time. This execution usually includes, but isn't limited to, just-in-time (JIT) compilation. For each C# method, **clr!ThePreStub** should be invoked at most once during a process.
+**clr!ThePreStub** is a helper function in .NET Framework that prepares the code for initial execution, which usually includes just-in-time (JIT) compilation. For each C# method, **clr!ThePreStub** should be invoked, at most, once during a process.
-If **clr!ThePreStub** takes a long time for a request, the request is the first one to execute that method. The time for .NET Framework runtime to load the first method is significant. You might consider using a warmup process that executes that portion of the code before your users access it, or consider running Native Image Generator (ngen.exe) on your assemblies.
+If **clr!ThePreStub** takes extra time for a request, it's the first request to execute that method. The .NET Framework runtime takes a significant amount of time to load the first method. Consider:
+- Using a warmup process that executes that portion of the code before your users access it.
+- Running Native Image Generator (ngen.exe) on your assemblies.
### <a id="lockcontention"></a>Lock contention (clr!JITutil\_MonContention or clr!JITutil\_MonEnterWorker)
-**clr!JITutil\_MonContention** or **clr!JITutil\_MonEnterWorker** indicates that the current thread is waiting for a lock to be released. This text is often displayed when you execute a C# **LOCK** statement, invoke the **Monitor.Enter** method, or invoke a method with the **MethodImplOptions.Synchronized** attribute. Lock contention usually occurs when thread _A_ acquires a lock and thread _B_ tries to acquire the same lock before thread _A_ releases it.
+**clr!JITutil\_MonContention** or **clr!JITutil\_MonEnterWorker** indicate that the current thread is waiting for a lock to be released. This text is often displayed when you:
+- Execute a C# **LOCK** statement,
+- Invoke the **Monitor.Enter** method, or
+- Invoke a method with the **MethodImplOptions.Synchronized** attribute.
+
+Lock contention usually occurs when thread _A_ acquires a lock and thread _B_ tries to acquire the same lock before thread _A_ releases it.
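For illustration, a minimal sketch of that contention pattern (names and timings are hypothetical):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

var gate = new object();

// Thread A acquires the lock and holds it for a while.
var threadA = Task.Run(() =>
{
    lock (gate) { Thread.Sleep(1000); }
});

Thread.Sleep(100); // give thread A a head start (illustrative only)

// Thread B contends for the same lock; its wait is reported as
// clr!JITutil_MonContention / clr!JITutil_MonEnterWorker time.
var threadB = Task.Run(() =>
{
    lock (gate) { Console.WriteLine("B acquired the lock"); }
});

Task.WaitAll(threadA, threadB);
```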
### <a id="ngencold"></a>Loading code ([COLD])
-If the method name contains **[COLD]**, such as **mscorlib.ni![COLD]System.Reflection.CustomAttribute.IsDefined**, .NET Framework runtime is executing code for the first time that isn't optimized by [profile-guided optimization](/cpp/build/profile-guided-optimizations). For each method, it should be displayed at most once during the process.
+If the .NET Framework runtime is executing [unoptimized code](/cpp/build/profile-guided-optimizations) for the first time, the method name will contain **[COLD]**:
-If loading code takes a substantial amount of time for a request, the request is the first one to execute the unoptimized portion of the method. Consider using a warmup process that executes that portion of the code before your users access it.
+`mscorlib.ni![COLD]System.Reflection.CustomAttribute.IsDefined`
+
+For each method, it should be displayed once during the process, at most.
+
+If loading code takes a substantial amount of time for a request, it's the first request to execute the unoptimized portion of the method. Consider using a warmup process that executes that portion of the code before your users access it.
### <a id="httpclientsend"></a>Send HTTP request
Methods such as **SqlCommand.Execute** indicate that the code is waiting for a d
### <a id="await"></a>Waiting (AWAIT\_TIME)
-**AWAIT\_TIME** indicates that the code is waiting for another task to finish. This delay usually happens with the C# **AWAIT** statement. When the code does a C# **AWAIT**, the thread unwinds and returns control to the thread pool, and there's no thread that is blocked waiting for the **AWAIT** to finish. However, logically, the thread that did the **AWAIT** is "blocked," and it's waiting for the operation to finish. The **AWAIT\_TIME** statement indicates the blocked time waiting for the task to finish.
+**AWAIT\_TIME** indicates that the code is waiting for another task to finish. This delay occurs with the C# **AWAIT** statement. When the code does a C# **AWAIT**:
+- The thread unwinds and returns control to the thread pool.
+- There's no blocked thread waiting for the **AWAIT** to finish.
+
+However, logically, the thread that did the **AWAIT** is "blocked", waiting for the operation to finish. The **AWAIT\_TIME** statement indicates the blocked time, waiting for the task to finish.
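A minimal sketch of code whose wait would be reported as **AWAIT\_TIME** (the URL is a placeholder):

```csharp
using System;
using System.Net.Http;

using var client = new HttpClient();

// While the request is in flight, no thread is blocked, but the logical
// operation is waiting; Profiler reports that wait as AWAIT_TIME.
string body = await client.GetStringAsync("https://example.com/");
Console.WriteLine(body.Length);
```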
### <a id="block"></a>Blocked time
-**BLOCKED_TIME** indicates that the code is waiting for another resource to be available. For example, it might be waiting for a synchronization object, for a thread to be available, or for a request to finish.
+**BLOCKED_TIME** indicates that the code is waiting for another resource to be available. For example, it might be waiting for:
+- A synchronization object
+- A thread to be available
+- A request to finish
### Unmanaged Async
-.NET framework emits ETW events and passes activity ids between threads so that async calls can be tracked across threads. Unmanaged code (native code) and some older styles of asynchronous code are missing these events and activity ids, so the profiler cannot tell what thread and what functions are running on the thread. This is labeled 'Unmanaged Async' in the call stack. If you download the ETW file, you may be able to use [PerfView](https://github.com/Microsoft/perfview/blob/master/documentation/Downloading.md) to get more insight into what is happening.
+In order for async calls to be tracked across threads, .NET Framework emits ETW events and passes activity ids between threads. Since unmanaged (native) code and some older styles of asynchronous code lack these events and activity ids, the Profiler can't track the thread and functions running on the thread. This is labeled **Unmanaged Async** in the call stack. Download the ETW file to use [PerfView](https://github.com/Microsoft/perfview/blob/master/documentation/Downloading.md) for more insight.
### <a id="cpu"></a>CPU time
The application is performing network operations.
### <a id="when"></a>When column
-The **When** column is a visualization of how the INCLUSIVE samples collected for a node vary over time. The total range of the request is divided into 32 time buckets. The inclusive samples for that node are accumulated in those 32 buckets. Each bucket is represented as a bar. The height of the bar represents a scaled value. For nodes that are marked **CPU_TIME** or **BLOCKED_TIME**, or where there is an obvious relationship to consuming a resource (for example, a CPU, disk, or thread), the bar represents the consumption of one of the resources during the bucket. For these metrics, it's possible to get a value of greater than 100 percent by consuming multiple resources. For example, if you use, on average, two CPUs during an interval, you get 200 percent.
+The **When** column is a visualization of how the _inclusive_ samples collected for a node vary over time. The total range of the request is divided into 32 time buckets, where the node's inclusive samples accumulate. Each bucket is represented as a bar. The height of the bar represents a scaled value. For the following nodes, the bar represents the consumption of one of the resources during the bucket:
+- Nodes marked **CPU_TIME** or **BLOCKED_TIME**.
+- Nodes with an obvious relationship to consuming a resource (for example, a CPU, disk, or thread).
+
+For these metrics, you can get a value of greater than 100% by consuming multiple resources. For example, if you use two CPUs during an interval on average, you get 200%.
## Limitations
-The default data retention period is five days. The maximum data that's ingested per day is 10 GB.
+The default data retention period is five days. The maximum data ingested per day is 10 GB.
-There are no charges for using the Profiler service. For you to use it, your web app must be hosted in at least the basic tier of the Web Apps feature of Azure App Service.
+There are no charges for using the Profiler service. To use it, your web app must be hosted in the basic tier of the Web Apps feature of Azure App Service, at minimum.
## Overhead and sampling algorithm
-Profiler randomly runs two minutes every hour on each virtual machine that hosts the application that has Profiler enabled for capturing traces. When Profiler is running, it adds from 5 to 15 percent CPU overhead to the server.
+Profiler randomly runs for two minutes each hour on each virtual machine that hosts an application with Profiler enabled, to capture traces. While Profiler is running, it adds 5% to 15% CPU overhead to the server.
## Next steps Enable Application Insights Profiler for your Azure application. Also see:
azure-monitor Usage Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-segmentation.md
Three of the usage blades use the same tool to slice and dice telemetry from you
* **Events tool**: How often certain pages and features of your app are used. A page view is counted when a browser loads a page from your app, provided you've [instrumented it](./javascript.md). A custom event represents one occurrence of something happening in your app, often a user interaction like a button select or the completion of some task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent).
+
+> [!NOTE]
+> For details on an alternative to using [anonymous IDs](./data-model-context.md#anonymous-user-id) and ensuring an accurate count, reference the documentation for [authenticated IDs](./data-model-context.md#authenticated-user-id).
## Querying for certain users
The **Meet your users** section shows information about five sample users matche
- [Retention](usage-retention.md) - [User Flows](usage-flows.md) - [Workbooks](../visualize/workbooks-overview.md)
- - [Add user context](./usage-overview.md)
+ - [Add user context](./usage-overview.md)
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
Title: Data Collection Rules in Azure Monitor description: Overview of data collection rules (DCRs) in Azure Monitor including their contents and structure and how you can create and work with them. Previously updated : 03/31/2022 Last updated : 04/26/2022
There are currently two types of data collection rule in Azure Monitor:
- **Standard DCR**. Used with different workflows that send data to Azure Monitor. Workflows currently supported are [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and [custom logs (preview)](../logs/custom-logs-overview.md). -- **Workspace transformation DCR)**. Used with a Log Analytics workspace to apply [ingestion-time transformations (preview)](../logs/ingestion-time-transformations.md) to workflows that don't currently support DCRs.
+- **Workspace transformation DCR**. Used with a Log Analytics workspace to apply [ingestion-time transformations (preview)](../logs/ingestion-time-transformations.md) to workflows that don't currently support DCRs.
## Structure of a data collection rule Data collection rules are formatted in JSON. While you may not need to interact with them directly, there are scenarios where you may need to directly edit a data collection rule. See [Data collection rule structure](data-collection-rule-structure.md) for a description of this structure and different elements.
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md
This zone covers the global endpoints used by Azure Monitor, meaning endpoints t
* **live** - Application Insights live metrics endpoint * **profiler** - Application Insights profiler endpoint * **snapshot** - Application Insights snapshots endpoint
+* **diagservices-query** - Application Insights Profiler and Snapshot Debugger (used when accessing profiler/debugger results in the Azure portal)
This zone also covers the resource specific endpoints for [Data Collection Endpoints](../essentials/data-collection-endpoint-overview.md): * `<unique-dce-identifier>.<regionname>.handler.control` - Private configuration endpoint, part of a Data Collection Endpoint (DCE) resource
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
Several other features don't have a direct cost, but you instead pay for the ing
| Web tests | There is a cost for [multi-step web tests](app/availability-multistep.md) in Application Insights, but this feature has been deprecated. ## Data transfer charges
-Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, this charge is typically very small compared to the costs for data ingestion and retention. Controlling costs for Log Analytics should focus on your ingested data volume.
+Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate, although data sent to a different region via [Diagnostic Settings](essentials/diagnostic-settings.md) does not incur data transfer charges. Inbound data transfer is free. Data transfer charges are typically very small compared to the costs for data ingestion and retention. Controlling costs for Log Analytics should focus on your ingested data volume.
+ ## Estimate Azure Monitor usage and costs If you're new to Azure Monitor, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate your costs. In the **Search** box, enter *Azure Monitor*, and then select the **Azure Monitor** tile. The pricing calculator will help you estimate your likely costs based on your expected utilization.
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 04/11/2022 Last updated : 04/26/2022 # Solution architectures using Azure NetApp Files
This section provides solutions for Azure platform services.
* [Azure NetApp Files + Trident = Dynamic and Persistent Storage for Kubernetes](https://anfcommunity.com/2021/02/16/azure-netapp-files-trident-dynamic-and-persistent-storage-for-kubernetes/) * [Trident - Storage Orchestrator for Containers](https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/anf.html) * [Magento e-commerce platform in Azure Kubernetes Service (AKS)](/azure/architecture/example-scenario/magento/magento-azure)
+* [Protecting Magento e-commerce platform in AKS against disasters with Astra Control Service](https://techcommunity.microsoft.com/t5/azure-architecture-blog/protecting-magento-e-commerce-platform-in-aks-against-disasters/ba-p/3285525)
+* [Protecting applications on private Azure Kubernetes Service clusters with Astra Control Service](https://techcommunity.microsoft.com/t5/azure-architecture-blog/protecting-applications-on-private-azure-kubernetes-service/ba-p/3289422)
### Azure Red Hat Openshift
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file description: Describes the configuration file for your Bicep deployments Previously updated : 11/16/2021 Last updated : 04/26/2022 # Configure your Bicep environment
Bicep supports a configuration file named `bicepconfig.json`. Within this file,
To customize values, create this file in the directory where you store Bicep files. You can add `bicepconfig.json` files in multiple directories. The configuration file closest to the Bicep file in the directory hierarchy is used.
+To create a `bicepconfig.json` file in Visual Studio Code, see [Visual Studio Code](./visual-studio-code.md#create-bicep-configuration-file).
+ ## Available settings When working with [modules](modules.md), you can add aliases for module paths. These aliases simplify your Bicep file because you don't have to repeat complicated paths. For more information, see [Add module settings to Bicep config](bicep-config-modules.md). The [Bicep linter](linter.md) checks Bicep files for syntax errors and best practice violations. You can override the default settings for the Bicep file validation by modifying `bicepconfig.json`. For more information, see [Add linter settings to Bicep config](bicep-config-linter.md).
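For illustration, here's a minimal `bicepconfig.json` sketch that combines a module alias with a linter rule override (the registry name and module path are placeholders):

```json
{
  "moduleAliases": {
    "br": {
      "ContosoModules": {
        "registry": "contosoregistry.azurecr.io",
        "modulePath": "bicep/modules"
      }
    }
  },
  "analyzers": {
    "core": {
      "enabled": true,
      "rules": {
        "no-hardcoded-env-urls": {
          "level": "warning"
        }
      }
    }
  }
}
```

With that alias, a module reference such as `br/ContosoModules:storage:v1` resolves to the full registry path.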
-You can also configure the credential precedence for authenticating to Azure from Bicep CLI and Visual Studio Code. The credentials are used to publish modules to registries and to restore external modules to the local cache when using the insert resource function.
+You can also configure the credential precedence for authenticating to Azure from Bicep CLI and Visual Studio Code. The credentials are used to publish modules to registries and to restore external modules to the local cache when using the insert resource function.
## Credential precedence
azure-resource-manager Compare Template Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/compare-template-syntax.md
description: Compares Azure Resource Manager templates developed with JSON and B
Previously updated : 03/01/2022 Last updated : 04/26/2022 # Comparing JSON and Bicep for templates
To iterate over items in an array or count:
For Bicep, you can set an explicit dependency but this approach isn't recommended. Instead, rely on implicit dependencies. An implicit dependency is created when one resource declaration references the identifier of another resource.
-The following shows a network interface with an implicit dependency on a network security group. It references the network security group with `nsg.id`.
+The following shows a network interface with an implicit dependency on a network security group. It references the network security group with `netSecurityGroup.id`.
```bicep resource netSecurityGroup 'Microsoft.Network/networkSecurityGroups@2020-06-01' = {
resource nic1 'Microsoft.Network/networkInterfaces@2020-06-01' = {
properties: { ... networkSecurityGroup: {
- id: nsg.id
+ id: netSecurityGroup.id
} } }
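Condensed into one self-contained sketch (the subnet ID is supplied as a parameter, and the names are illustrative):

```bicep
param subnetId string
param location string = resourceGroup().location

resource netSecurityGroup 'Microsoft.Network/networkSecurityGroups@2020-06-01' = {
  name: 'examplensg'
  location: location
}

resource nic1 'Microsoft.Network/networkInterfaces@2020-06-01' = {
  name: 'examplenic'
  location: location
  properties: {
    ipConfigurations: [
      {
        name: 'ipconfig1'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: {
            id: subnetId
          }
        }
      }
    ]
    // Referencing netSecurityGroup.id creates the implicit dependency:
    // the NSG is deployed before the network interface.
    networkSecurityGroup: {
      id: netSecurityGroup.id
    }
  }
}
```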
azure-resource-manager Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md
Title: Create Bicep files by using Visual Studio Code description: Describes how to create Bicep files by using Visual Studio Code Previously updated : 04/13/2022 Last updated : 04/26/2022 # Create Bicep files by using Visual Studio Code
-This article shows you how to use Visual Studio Code to create Bicep files
+This article shows you how to use Visual Studio Code to create Bicep files.
## Install VS Code
To set up your environment for Bicep development, see [Install Bicep tools](inst
Visual Studio Code comes with several Bicep commands.
-Open or create a Bicep file in VS Code, select the **View** menu and then select **Command Palette**. You can also use the key combination **[CTRL]+[SHIFT]+P** to bring up the command palette.
+Open or create a Bicep file in VS Code, select the **View** menu and then select **Command Palette**. You can also use the key combination **[CTRL]+[SHIFT]+P** to bring up the command palette. Type **Bicep** to list the Bicep commands.
![Visual Studio Code Bicep commands](./media/visual-studio-code/visual-studio-code-bicep-commands.png)
-### Build
+These commands include:
+
+- [Build Bicep File](#build-bicep-file)
+- [Create Bicep Configuration File](#create-bicep-configuration-file)
+- [Deploy Bicep File](#deploy-bicep-file)
+- [Insert Resource](#insert-resource)
+- [Open Bicep Visualizer](#open-bicep-visualizer)
+
+### Build Bicep File
The `build` command converts a Bicep file to an Azure Resource Manager template (ARM template). The new JSON template is stored in the same folder with the same file name. If a file with the same file name exists, it overwrites the old file. For more information, see [Bicep CLI commands](./bicep-cli.md#bicep-cli-commands).
+### Create Bicep configuration file
+
+The [Bicep configuration file (bicepconfig.json)](./bicep-config.md) can be used to customize your Bicep development experience. You can add `bicepconfig.json` in multiple directories. The configuration file closest to the Bicep file in the directory hierarchy is used. When you select this command, the extension opens a dialog for you to select a folder. The default folder is where you store the Bicep file. If a `bicepconfig.json` file already exists in the folder, you have the option to overwrite the existing file.
+
+### Deploy Bicep File
+
+> [!NOTE]
+> Deploy Bicep File is an experimental function. To enable the feature, select **Manage**, type **bicep**, and then select **Enable Deploy**.
+> ![Bicep Visual Studio Code enable deploy](./media/visual-studio-code/visual-studio-code-bicep-enable-deploy.png)
+
+You can deploy Bicep files directly from Visual Studio Code. Select **Deploy Bicep file** from the command palette. The extension prompts you to sign in to Azure, select a subscription, and create or select a resource group.
+ ### Insert Resource The `insert resource` command adds a resource declaration in the Bicep file by providing the resource ID of an existing resource. After you select **Insert Resource**, enter the resource ID in the command palette. It takes a few moments to insert the resource.
Similar to exporting templates, the process tries to create a usable resource. H
For more information, see [Decompiling ARM template JSON to Bicep](./decompile.md).
-### Open Visualizer
+### Open Bicep Visualizer
The visualizer shows the resources defined in the Bicep file with the resource dependency information. The diagram is the visualization of a [Linux virtual machine Bicep file](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-simple-linux/main.bicep).
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 03/17/2022 Last updated : 04/26/2022 # Azure subscription and service limits, quotas, and constraints
azure-resource-manager Create Private Link Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-portal.md
Title: Create private link for managing resources - Azure portal description: Use Azure portal to create private link for managing resources. Previously updated : 03/24/2022 Last updated : 04/26/2022 # Use portal to create private link for managing Azure resources (preview)
azure-resource-manager Create Private Link Access Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-rest.md
Title: Manage resources through private link description: Restrict management access for resource to private link Previously updated : 03/24/2022 Last updated : 04/26/2022 # Use REST API to create private link for managing Azure resources (preview)
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | | | | | > | workspaces | global | 1-50 | Lowercase letters, hyphens, and numbers.<br><br>Start and end with letter or number.<br><br>Can't contain `-ondemand` | > | workspaces / bigDataPools | workspace | 1-15 | Letters and numbers.<br><br>Start with letter. End with letter or number.<br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
-> | workspaces / sqlPools | workspace | 1-15 | Can contain only letters, numbers, or underscore.<br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
+> | workspaces / sqlPools | workspace | 1-60 | Can't contain `<>*%&:\/?@-` or control characters. <br><br>Can't end with `.` or space. <br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
## Microsoft.TimeSeriesInsights
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/data-types.md
The following example shows two secure parameters.
} } ```
+> [!NOTE]
+> Using secure strings and objects as an output type isn't recommended, because the output values aren't stored in the deployment history.
## Next steps
backup Backup Azure Vms Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-automation.md
Title: Back up and recover Azure VMs with PowerShell description: Describes how to back up and recover Azure VMs using Azure Backup with PowerShell Previously updated : 01/04/2022 Last updated : 04/25/2022
To restore backup data, identify the backed-up item and the recovery point that
The basic steps to restore an Azure VM are:
-* Select the VM.
-* Choose a recovery point.
-* Restore the disks.
-* Create the VM from stored disks.
+> [!div class="checklist"]
+> * Select the VM.
+> * Choose a recovery point.
+> * Restore the disks.
+> * Create the VM from stored disks.
+
+Now, you can also use PowerShell to directly restore the backup content to a VM (original or new), without performing the above steps separately. For more information, see [Restore data to virtual machine using PowerShell](#restore-data-to-virtual-machine-using-powershell).
### Select the VM (when restoring files)
After the required files are copied, use [Disable-AzRecoveryServicesBackupRPMoun
Disable-AzRecoveryServicesBackupRPMountScript -RecoveryPoint $rp[0] -VaultId $targetVault.ID ```
+## Restore data to virtual machine using PowerShell
+
+You can now directly restore data to the original or an alternate VM without performing multiple steps.
+
+### Restore data to original VM
+
+```powershell-interactive
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName "resourceGroup" -Name "vaultName"
+$BackupItem = Get-AzRecoveryServicesBackupItem -BackupManagementType "AzureVM" -WorkloadType "AzureVM" -Name "V2VM" -VaultId $vault.ID
+$StartDate = (Get-Date).AddDays(-7)
+$EndDate = Get-Date
+$RP = Get-AzRecoveryServicesBackupRecoveryPoint -Item $BackupItem -StartDate $StartDate.ToUniversalTime() -EndDate $EndDate.ToUniversalTime() -VaultId $vault.ID
+$OriginalLocationRestoreJob = Restore-AzRecoveryServicesBackupItem -RecoveryPoint $RP[0] -StorageAccountName "DestStorageAccount" -StorageAccountResourceGroupName "DestStorageAccRG" -VaultId $vault.ID -VaultLocation $vault.Location
+```
+
+```output
+WorkloadName Operation Status StartTime EndTime
+------------ --------- ------ --------- -------
+V2VM Restore InProgress 26-Apr-16 1:14:01 PM 01-Jan-01 12:00:00 AM
+```
+
+The last command triggers an original location restore operation to restore the data in-place in the existing VM.
+
+### Restore data to a newly created VM
+
+```powershell-interactive
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName "resourceGroup" -Name "vaultName"
+$BackupItem = Get-AzRecoveryServicesBackupItem -BackupManagementType "AzureVM" -WorkloadType "AzureVM" -Name "V2VM" -VaultId $vault.ID
+$StartDate = (Get-Date).AddDays(-7)
+$EndDate = Get-Date
+$RP = Get-AzRecoveryServicesBackupRecoveryPoint -Item $BackupItem -StartDate $StartDate.ToUniversalTime() -EndDate $EndDate.ToUniversalTime() -VaultId $vault.ID
+$AlternateLocationRestoreJob = Restore-AzRecoveryServicesBackupItem -RecoveryPoint $RP[0] -TargetResourceGroupName "Target_RG" -StorageAccountName "DestStorageAccount" -StorageAccountResourceGroupName "DestStorageAccRG" -TargetVMName "TargetVirtualMachineName" -TargetVNetName "Target_VNet" -TargetVNetResourceGroup "Target_VNet_RG" -TargetSubnetName "subnetName" -VaultId $vault.ID -VaultLocation $vault.Location
+```
+
+```output
+WorkloadName Operation Status StartTime EndTime
+------------ --------- ------ --------- -------
+V2VM Restore InProgress 26-Apr-16 1:14:01 PM 01-Jan-01 12:00:00 AM
+```
+
+The last command triggers an alternate location restore operation to create a new VM in the *Target_RG* resource group, per the inputs specified by the parameters *TargetVMName*, *TargetVNetName*, *TargetVNetResourceGroup*, and *TargetSubnetName*. This ensures that the data is restored in the required VM, virtual network, and subnet.
+ ## Next steps If you prefer to use PowerShell to engage with your Azure resources, see the PowerShell article, [Deploy and Manage Backup for Windows Server](backup-client-automation.md). If you manage DPM backups, see the article, [Deploy and Manage Backup for DPM](backup-dpm-automation.md).
backup Tutorial Restore Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-restore-disk.md
Title: Tutorial - Restore a VM with Azure CLI description: Learn how to restore a disk and create a recover a VM in Azure with Backup and Recovery Services. Previously updated : 01/05/2022 Last updated : 04/25/2022
Azure Backup creates recovery points that are stored in geo-redundant recovery v
For information on using PowerShell to restore a disk and create a recovered VM, see [Back up and restore Azure VMs with PowerShell](backup-azure-vms-automation.md#restore-an-azure-vm).
+Now, you can also use the CLI to directly restore the backup content to a VM (original or new), without performing the above steps separately. For more information, see [Restore data to virtual machine using CLI](#restore-data-to-virtual-machine-using-cli).
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] - This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
To confirm that your VM has been created from your recovered disk, list the VMs
az vm list --resource-group myResourceGroup --output table ```
+## Restore data to virtual machine using CLI
+
+You can now directly restore data to the original or an alternate VM without performing multiple steps.
+
+### Restore data to original VM
+
+```azurecli-interactive
+az backup restore restore-disks \
+ --resource-group myResourceGroup \
+ --vault-name myRecoveryServicesVault \
+ --container-name myVM \
+ --item-name myVM \
+ --restore-mode OriginalLocation \
+ --storage-account mystorageaccount \
+ --rp-name myRecoveryPointName
+```
+
+```output
+Name Operation Status Item Name Start Time UTC Duration
+------  ---------  ------  ---------  ---------------  --------
+7f2ad916 Restore InProgress myVM 2017-09-19T19:39:52 0:00:34.520850
+```
+
+The last command triggers an original location restore operation to restore the data in-place in the existing VM.
+
+
+### Restore data to a newly created VM
+
+```azurecli-interactive
+az backup restore restore-disks \
+ --resource-group myResourceGroup \
+ --vault-name myRecoveryServicesVault \
+ --container-name myVM \
+ --item-name myVM \
+ --restore-mode AlternateLocation \
+ --storage-account mystorageaccount \
+ --rp-name myRecoveryPointName \
+ --target-resource-group "Target_RG" \
+ --target-vm-name "TargetVirtualMachineName" \
+ --target-vnet-name "Target_VNet" \
+ --target-vnet-resource-group "Target_VNet_RG" \
+ --target-subnet-name "targetSubnetName"
+```
+
+```output
+Name Operation Status Item Name Start Time UTC Duration
+------  ---------  ------  ---------  ---------------  --------
+7f2ad916 Restore InProgress myVM 2017-09-19T19:39:52 0:00:34.520850
+```
+
+The last command triggers an alternate location restore operation to create a new VM in the *Target_RG* resource group, per the inputs specified by the *--target-vm-name*, *--target-vnet-name*, *--target-vnet-resource-group*, and *--target-subnet-name* parameters. This ensures the data is restored in the required VM, virtual network, and subnet.
+ ## Next steps In this tutorial, you restored a disk from a recovery point and then created a VM from the disk. You learned how to:
cognitive-services Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
The following is an example of a JSON input for the SPACEANALYTICS_CONFIG parame
"type": "count", "config": { "trigger": "event",
- "threshold": 16.00,
+ "threshold": 13.00,
"focus": "footprint" } }
The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` para
"type": "linecrossing", "config": { "trigger": "event",
- "threshold": 16.00,
+ "threshold": 13.00,
"focus": "footprint" } }
The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` para
| `line` | list| The definition of the line. This is a directional line allowing you to understand "entry" vs. "exit".| | `start` | value pair| x, y coordinates for line's starting point. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size. | | `end` | value pair| x, y coordinates for line's ending point. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size. |
-| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. The default value is 16. This is the recommended value to achieve maximum accuracy. |
+| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. The default value is 13. This is the recommended value to achieve maximum accuracy. |
| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingline**, this should be `linecrossing`.| |`trigger`|string|The type of trigger for sending an event.<br>Supported Values: "event": fire when someone crosses the line.| | `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box). The default value is footprint.|
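Assembled from the fields above, a minimal `SPACEANALYTICS_CONFIG` sketch for a single line (the coordinates and zone name are illustrative):

```json
{
  "lines": [
    {
      "name": "doorcamera",
      "line": {
        "start": { "x": 0.3, "y": 0.1 },
        "end": { "x": 0.3, "y": 0.9 }
      },
      "events": [
        {
          "type": "linecrossing",
          "config": {
            "trigger": "event",
            "threshold": 13.00,
            "focus": "footprint"
          }
        }
      ]
    }
  ]
}
```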
This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter tha
"type": "zonecrossing", "config":{ "trigger": "event",
- "threshold": 48.00,
+ "threshold": 38.00,
"focus": "footprint" } }]
This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter tha
"type": "zonedwelltime", "config":{ "trigger": "event",
- "threshold": 16.00,
+ "threshold": 13.00,
"focus": "footprint" } }]
This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter tha
| `name` | string| Friendly name for this zone.| | `polygon` | list| Each value pair represents the x,y for vertices of polygon. The polygon represents the areas in which people are tracked or counted. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size. | `target_side` | int| Specifies a side of the zone defined by `polygon` to measure how long people face that side while in the zone. 'dwellTimeForTargetSide' will output that estimated time. Each side is a numbered edge between the two vertices of the polygon that represents your zone. For example, the edge between the first two vertices of the polygon represents the first side, 'side'=1. The value of `target_side` is between `[0,N-1]` where `N` is the number of sides of the `polygon`. This is an optional field. |
-| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. The default value is 48 when the type is `zonecrossing` and 16 when time is `DwellTime`. These are the recommended values to achieve maximum accuracy. |
+| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. The default value is 38 when the type is `zonecrossing` and 13 when the type is `zonedwelltime`. These are the recommended values to achieve maximum accuracy. |
| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingpolygon** this should be `zonecrossing` or `zonedwelltime`.| | `trigger`|string|The type of trigger for sending an event<br>Supported Values: "event": fire when someone enters or exits the zone.| | `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box). The default value is footprint.|
This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter tha
"minimum_distance_threshold":6.0, "maximum_distance_threshold":35.0, "aggregation_method": "average"
- "threshold": 16.00,
+ "threshold": 13.00,
"focus": "footprint" } }]
The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` para
"type": "linecrossing", "config": { "trigger": "event",
- "threshold": 16.00,
+ "threshold": 13.00,
"focus": "footprint" } }
The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` para
"output_frequency": 1, "minimum_distance_threshold": 6.0, "maximum_distance_threshold": 35.0,
- "threshold": 16.00,
+ "threshold": 13.00,
"focus": "footprint" } },
The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` para
"config": { "trigger": "event", "output_frequency": 1,
- "threshold": 16.00,
+ "threshold": 13.00,
"focus": "footprint" } }, { "type": "zonecrossing", "config": {
- "threshold": 48.00,
+ "threshold": 38.00,
"focus": "footprint" } }, { "type": "zonedwelltime", "config": {
- "threshold": 16.00,
+ "threshold": 13.00,
"focus": "footprint" } }
cognitive-services Get Speech Recognition Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-speech-recognition-results.md
keywords: speech to text, speech to text software
* [Try the speech to text quickstart](get-started-speech-to-text.md) * [Improve recognition accuracy with custom speech](custom-speech-overview.md)
-* [Transcribe audio in batches](batch-transcription.md)
+* [Use batch transcription](batch-transcription.md)
cognitive-services How To Recognize Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-speech.md
Previously updated : 02/17/2022 Last updated : 04/24/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python zone_pivot_groups: programming-languages-speech-services
keywords: speech to text, speech to text software
* [Try the speech to text quickstart](get-started-speech-to-text.md) * [Improve recognition accuracy with custom speech](custom-speech-overview.md)
-* [Transcribe audio in batches](batch-transcription.md)
+* [Use batch transcription](batch-transcription.md)
cognitive-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-overview.md
Previously updated : 11/16/2021 Last updated : 04/25/2022 keywords: translator, text translation, machine translation, translation service, custom translator
-# What is Translator?
+# What is Azure Cognitive Services Translator?
-Translator is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Translator can be used with any operating system and powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
+Azure Cognitive Services Translator is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Translator can be used with any operating system and powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
Translator documentation contains the following article types:
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/whats-new.md
Previously updated : 06/22/2021 Last updated : 04/25/2022 <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
-# What's new in Translator
+# What's new in Azure Cognitive Services Translator
-Review the latest updates to the text Translator service. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+* Translator is a language service that enables users to translate text and documents, helps businesses expand their global outreach, and supports at-risk and endangered language preservation.
+
+* Translator supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
++
+## April 2022
+
+### [Text and document translation support for Faroese](https://www.microsoft.com/translator/blog/2022/04/25/introducing-faroese-translation-for-faroese-flag-day/), [Basque and Galician](https://www.microsoft.com/translator/blog/2022/04/12/break-the-language-barrier-with-translator-now-with-two-new-languages/)
+
+* Translator service has [text and document translation language support](language-support.md) for Faroese, a Germanic language originating on the Faroe Islands. The Faroe Islands are a self-governing country within the Kingdom of Denmark located between Norway and Iceland. Faroese is descended from Old West Norse spoken by Vikings in the Middle Ages.
+
+* Translator service has [text and document translation language support](language-support.md) for Basque and Galician. Basque is a language isolate, meaning it isn't related to any other modern language. It's spoken in parts of northern Spain and southern France. Galician is spoken in northern Portugal and western Spain. Both Basque and Galician are co-official languages of Spain.
+
+## March 2022
+
+### [Text and document translation support for Somali and Zulu languages](https://www.microsoft.com/translator/blog/2022/03/29/translator-welcomes-two-new-languages-somali-and-zulu/)
+
+* Translator service has [text and document translation language support](language-support.md) for Somali and Zulu.
+
+* The Somali language is spoken throughout Africa by more than 21 million people. It's in the Cushitic branch of the Afroasiatic language family.
+
+* The Zulu language is spoken by 12 million people and is recognized as one of South Africa's 11 official languages.
+
+## February 2022
+
+### [Text and document translation support for Upper Sorbian](https://www.microsoft.com/translator/blog/2022/02/21/translator-celebrates-international-mother-language-day-by-adding-upper-sorbian/), [Inuinnaqtun, and Romanized Inuktitut](https://www.microsoft.com/translator/blog/2022/02/01/introducing-inuinnaqtun-and-romanized-inuktitut/)
+
+* Translator service has [text and document translation language support](language-support.md) for Upper Sorbian. The Translator team has worked tirelessly to preserve indigenous and endangered languages around the world. Language data provided by the Upper Sorbian language community was instrumental in introducing this language to Translator.
+
+* Translator service has [text and document translation language support](language-support.md) for Inuinnaqtun and Romanized Inuktitut. Both are indigenous languages that are essential and treasured foundations of Canadian culture and society.
+
+## January 2022
+
+### Custom Translator portal (v2.0) public preview
+
+* The [Custom Translator portal (v2.0)](https://portal.customtranslator.azure.ai/) is now in public preview and includes significant changes that make it easier to create your custom translation systems.
+
+* To learn more, see our Custom Translator [documentation](custom-translator/overview.md) and try our [quickstart](custom-translator/v2-preview/quickstart.md) for step-by-step instructions.
+
+## October 2021
+
+### [Text and document support for more than 100 languages](https://www.microsoft.com/translator/blog/2021/10/11/translator-now-translates-more-than-100-languages/)
+
+* Translator service has added **Bashkir**, **Dhivehi**, **Georgian**, **Kyrgyz**, **Macedonian (Cyrillic)**, **Mongolian (Traditional)**, **Tatar**, **Tibetan**, **Turkmen**, **Uyghur**, and **Uzbek (Latin)**. This addition brings the total number of languages supported in Translator to 103.
+
+## August 2021
+
+### [Text and document translation support for literary Chinese](https://www.microsoft.com/translator/blog/2021/08/25/microsoft-translator-releases-literary-chinese-translation/)
+
+* Azure Cognitive Services Translator has [text and document language support](language-support.md) for literary Chinese, a traditional style of written Chinese used by classical Chinese poets and in ancient Chinese poetry.
## June 2021
-### [Document Translation client libraries for C#/.NET and Python](document-translation/client-sdks.md) - now available in prerelease.
+### [Document Translation client libraries for C#/.NET and Python](document-translation/client-sdks.md) - now available in prerelease
## May 2021
-### [Document Translation - now in general availability](https://www.microsoft.com/translator/blog/2021/05/25/translate-full-documents-with-document-translation-%e2%80%95-now-in-general-availability/)
+### [Document Translation - now generally available](https://www.microsoft.com/translator/blog/2021/05/25/translate-full-documents-with-document-translation-%e2%80%95-now-in-general-availability/)
* **Feature release**: Translator's [Document Translation](document-translation/overview.md) feature is generally available. Document Translation is designed to translate large files and batch documents with rich content while preserving original structure and format. You can also use custom glossaries and custom models built with [Custom Translator](custom-translator/overview.md) to ensure your documents are translated quickly and accurately.
Review the latest updates to the text Translator service. Bookmark this page to
* **New release**: [Document Translation](document-translation/overview.md) is available as a preview feature of the Translator Service. Preview features are still in development and aren't meant for production use. They're made available on a "preview" basis so customers can get early access and provide feedback. Document Translation enables you to translate large documents and process batch files while still preserving the original structure and format. _See_ [Microsoft Translator blog: Introducing Document Translation](https://www.microsoft.com/translator/blog/2021/02/17/introducing-document-translation/)
-### [Text translation support for nine added languages](https://www.microsoft.com/translator/blog/2021/02/22/microsoft-translator-releases-nine-new-languages-for-international-mother-language-day-2021/)
+### [Text and document translation support for nine added languages](https://www.microsoft.com/translator/blog/2021/02/22/microsoft-translator-releases-nine-new-languages-for-international-mother-language-day-2021/)
-* Translator service has [text translation language support](language-support.md) for the following languages:
+* Translator service has [text and document translation language support](language-support.md) for the following languages:
* **Albanian**. An isolate language unrelated to any other and spoken by nearly 8 million people. * **Amharic**. An official language of Ethiopia spoken by approximately 32 million people. It's also the liturgical language of the Ethiopian Orthodox church.
Review the latest updates to the text Translator service. Bookmark this page to
## January 2021
-### [Text translation support for Inuktitut](https://www.microsoft.com/translator/blog/2021/01/27/inuktitut-is-now-available-in-microsoft-translator/)
+### [Text and document translation support for Inuktitut](https://www.microsoft.com/translator/blog/2021/01/27/inuktitut-is-now-available-in-microsoft-translator/)
-* Translator service has [text translation language support](language-support.md) for **Inuktitut**, one of the principal Inuit languages of Canada. Inuktitut is one of eight official aboriginal languages in the Northwest Territories.
+* Translator service has [text and document translation language support](language-support.md) for **Inuktitut**, one of the principal Inuit languages of Canada. Inuktitut is one of eight official aboriginal languages in the Northwest Territories.
## November 2020
Review the latest updates to the text Translator service. Bookmark this page to
## October 2020
-### [Text translation support for Canadian French](https://www.microsoft.com/translator/blog/2020/10/20/cest-tiguidou-ca-translator-adds-canadian-french/)
+### [Text and document translation support for Canadian French](https://www.microsoft.com/translator/blog/2020/10/20/cest-tiguidou-ca-translator-adds-canadian-french/)
-* Translator service has [text translation language support](language-support.md) for **Canadian French**. Canadian French and European French are similar to one another and are mutually understandable. However, there can be significant differences in vocabulary, grammar, writing, and pronunciation. Over 7 million Canadians (20 percent of the population) speak French as their first language.
+* Translator service has [text and document translation language support](language-support.md) for **Canadian French**. Canadian French and European French are similar to one another and are mutually understandable. However, there can be significant differences in vocabulary, grammar, writing, and pronunciation. Over 7 million Canadians (20 percent of the population) speak French as their first language.
## September 2020
-### [Text translation support for Assamese and Axomiya](https://www.microsoft.com/translator/blog/2020/09/29/assamese-text-translation-is-here/)
+### [Text and document translation support for Assamese and Axomiya](https://www.microsoft.com/translator/blog/2020/09/29/assamese-text-translation-is-here/)
-* Translator service has [text translation language support](language-support.md) for **Assamese** also knows as **Axomiya**. Assamese / Axomiya is primarily spoken in Eastern India by approximately 14 million people.
+* Translator service has [text and document translation language support](language-support.md) for **Assamese**, also known as **Axomiya**. Assamese / Axomiya is primarily spoken in Eastern India by approximately 14 million people.
## August 2020
Review the latest updates to the text Translator service. Bookmark this page to
* **New release**: Custom Translator V2 phase 1 is available. The newest version of Custom Translator will roll out in two phases to provide quicker translation and quality improvements, and allow you to keep your training data in the region of your choice. *See* [Microsoft Translator blog: Custom Translator: Introducing higher quality translations and regional data residency](https://www.microsoft.com/translator/blog/2020/08/05/custom-translator-v2-is-now-available/)
-### [Text translation support for two Kurdish dialects](https://www.microsoft.com/translator/blog/2020/08/20/translator-adds-two-kurdish-dialects-for-text-translation/)
+### [Text and document translation support for two Kurdish dialects](https://www.microsoft.com/translator/blog/2020/08/20/translator-adds-two-kurdish-dialects-for-text-translation/)
* **Northern (Kurmanji) Kurdish** (15 million native speakers) and **Central (Sorani) Kurdish** (7 million native speakers). Most Kurdish texts are written in Kurmanji and Sorani.
-### [Text translation support for two Afghan languages](https://www.microsoft.com/translator/blog/2020/08/17/translator-adds-dari-and-pashto-text-translation/)
+### [Text and document translation support for two Afghan languages](https://www.microsoft.com/translator/blog/2020/08/17/translator-adds-dari-and-pashto-text-translation/)
* **Dari** (20 million native speakers) and **Pashto** (40 - 60 million speakers). The two official languages of Afghanistan.
-### [Text translation support for Odia](https://www.microsoft.com/translator/blog/2020/08/13/odia-language-text-translation-is-now-available-in-microsoft-translator/)
+### [Text and document translation support for Odia](https://www.microsoft.com/translator/blog/2020/08/13/odia-language-text-translation-is-now-available-in-microsoft-translator/)
* **Odia** is a classical language spoken by 35 million people in India and across the world. It joins **Bangla**, **Gujarati**, **Hindi**, **Kannada**, **Malayalam**, **Marathi**, **Punjabi**, **Tamil**, **Telugu**, **Urdu**, and **English** as the 12th most used language of India supported by Microsoft Translator.
communication-services Network Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md
+
+ Title: Developer tools - Network Diagnostics Tool for Azure Communication Services
+description: Conceptual documentation outlining the capabilities provided by the Network Test Tool.
+++++ Last updated : 03/29/2022++++
+# Network Diagnostics Tool
++
+The Network Diagnostics Tool enables Azure Communication Services developers to ensure that their device and network conditions are optimal for connecting to the service, which helps ensure a great call experience. The tool can be found at [aka.ms/acsdiagnostics](https://acs-network-diagnostic-tool.azurewebsites.net/). Users can quickly run a test by pressing the start test button. The tool performs diagnostics on the network, devices, and call quality, and provides the results directly through the tool's UI. No sign-in is required to use the tool.
+
+![Network Diagnostic Tool home screen](../media/network-diagnostic-tool.png)
+
+As part of the diagnostics performed, the user is asked to enable permissions for the tool to access their devices. Next, the user is asked to record their voice, which is then played back using an echo bot to ensure that the microphone is working. Finally, the tool performs a video test that uses the camera to detect video and measure the quality of sent and received frames.
+
+## Performed tests
+
+ The tool performs the following tests on behalf of the users and provides results for them:
+
+ | Test | Description |
+ |--|--|
+ | Browser Diagnostic | Checks for browser compatibility. Azure Communication Services supports specific browsers for [calling](../voice-video-calling/calling-sdk-features.md#javascript-calling-sdk-support-by-os-and-browser) and [chat](../chat/sdk-features.md#javascript-chat-sdk-support-by-os-and-browser). |
+ | Media Device Diagnostic | Checks for availability of device (camera, microphone and speaker) and enabled permissions for those devices on the browser. |
+ | Service Connectivity | Checks whether it can connect to Azure Communication Services |
+ | Audio Test | Performs an echo bot call. Here the user can talk to echo bot and hear themselves back. The test records media quality statistics for audio including jitter, bitrate, packet loss and RTT with thresholds for optimal conditions. |
+ | Video Test | Performs a loop back video test, where video captured by the camera is sent back and forth to check for network quality conditions. The test records media quality statistics for video including jitter, bitrate, packet loss and RTT with thresholds for optimal conditions. |
+
+## Privacy
+
+When a user runs a network diagnostic, the tool collects and stores service and client telemetry data to verify your network conditions and ensure that they're compatible with Azure Communication Services. The telemetry collected doesn't contain personally identifiable information. The test uses both audio and video collected through your device for this verification. The audio and video used for the test aren't stored.
+
+## Next Steps
+
+- [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md)
+- [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md)
+- [Add Real-Time Inspection tool to your application](./real-time-inspection.md)
+- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
communication-services Real Time Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/real-time-inspection.md
+
+ Title: Developer Tools - Real-Time Inspection for Azure Communication Services
+description: Conceptual documentation outlining the capabilities provided by the Real-Time Inspection tool.
+++++ Last updated : 03/29/2022++++
+# Real-time Inspection Tool for Azure Communication Services
++
+The Real-time Inspection Tool enables Azure Communication Services developers to inspect the state of a `Call` to debug or monitor their solution. Developers building an Azure Communication Services solution might need visibility, for debugging, into general call information such as the `Call ID`, or into advanced states, such as whether a user facing diagnostic fired. The Real-time Inspection Tool provides developers this information and more. It can be easily added to any JavaScript (Web) solution by installing the npm package `@azure/communication-inspection`.
+
+>[!NOTE]
+>Find the open-source repository for the tool [here](https://github.com/Azure/communication-inspection).
+
+## Capabilities
+
+The Real-time Inspection Tool provides developers with three categories of information that can be used for debugging purposes:
+
+| Category | Descriptions |
+|--|--|
+| General Call Information | Includes call id, participants, devices and user agent information (browser, version, etc.) |
+| Media Quality Stats | Metrics and statistics provided by [Media Quality APIs](../voice-video-calling/media-quality-sdk.md). Metrics are clickable for time series view.|
+| User Facing Diagnostics | List of [user facing diagnostics](../voice-video-calling/user-facing-diagnostics.md).|
+
+Data collected by the tool is only kept locally and temporarily. It can be downloaded from within the interface.
+
+The Real-time Inspection Tool is compatible with the same browsers as the [Calling SDK](../voice-video-calling/calling-sdk-features.md#javascript-calling-sdk-support-by-os-and-browser).
+
+## Get started with Real-time Inspection Tool
+
+The tool can be accessed through the npm package `@azure/communication-inspection`. The package contains the `InspectionTool` object that can be attached to a `Call`. The `InspectionTool` constructor requires an `HTMLDivElement` in which the tool is rendered; the size of that `HTMLDivElement` dictates the size of the tool's UI.
+
+### Installing Real-time Inspection Tool
+
+```bash
+npm i @azure/communication-inspection
+```
+
+### Initialize Real-time Inspection Tool
+
+```javascript
+import { CallClient, CallAgent } from "@azure/communication-calling";
+import { InspectionTool } from "@azure/communication-inspection";
+
+const callClient = new CallClient();
+const callAgent = await callClient.createCallAgent({INSERT TOKEN CREDENTIAL});
+const call = callAgent.startCall({INSERT CALL INFORMATION});
+
+const inspectionTool = new InspectionTool(call, {HTMLDivElement});
+
+```
+## Usage
+
+`start`: enable the `InspectionTool` to start reading data from the call object and storing it locally for visualization.
+
+```javascript
+
+inspectionTool.start()
+
+```
+
+`stop`: disable the `InspectionTool` from reading data from the call object.
+
+```javascript
+
+inspectionTool.stop()
+
+```
+
+`open`: Open the `InspectionTool` in the UI.
+
+```javascript
+
+inspectionTool.open()
+
+```
+
+`close`: Dismiss the `InspectionTool` in the UI.
+
+```javascript
+
+inspectionTool.close()
+
+```
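+
+Putting the lifecycle together, here's a minimal sketch that assumes an established `call` object and a container element with id `inspector` in your page (the element id and overall wiring are illustrative, not part of the package documentation):
+
+```javascript
+import { InspectionTool } from "@azure/communication-inspection";
+
+// Render the tool into an existing <div id="inspector"> element;
+// the div's dimensions determine the size of the tool's UI.
+const container = document.getElementById("inspector");
+const inspectionTool = new InspectionTool(call, container);
+
+inspectionTool.start(); // begin reading call data and storing it locally
+inspectionTool.open();  // show the inspection UI
+
+// ...later, when you're done debugging:
+inspectionTool.close(); // dismiss the UI
+inspectionTool.stop();  // stop reading data from the call
+```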
+
+## Next Steps
+
+- [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md)
+- [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md)
+- [Leverage Network Diagnostic Tool](./network-diagnostic.md)
+- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
cosmos-db Create Sql Api Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-python.md
Now go back to the Azure portal to get your connection string information and co
This step is optional. Learn about the database resources created in code, or skip ahead to [Update your connection string](#update-your-connection-string).
-The following snippets are all taken from the *cosmos_get_started.py* file.
+The following snippets are all taken from the [cosmos_get_started.py](https://github.com/Azure-Samples/azure-cosmos-db-python-getting-started/blob/main/cosmos_get_started.py) file.
* The CosmosClient is initialized. Make sure to update the "endpoint" and "key" values as described in the [Update your connection string](#update-your-connection-string) section.
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
If you want to delete a credit card, see [Delete an Azure billing payment method
The supported payment methods for Microsoft Azure are credit cards, debit cards, and check wire transfer. To get approved to pay by check wire transfer, see [Pay for your Azure subscription by check or wire transfer](pay-by-invoice.md).
+>[!NOTE]
+> Credit cards and debit cards are accepted in most countries or regions. However, Hong Kong and Brazil only support credit cards.
+ With a Microsoft Customer Agreement, your payment methods are associated with billing profiles. Learn how to [check access to a Microsoft Customer Agreement](#check-the-type-of-your-account). When you create a new subscription, you can specify a new credit card. When you do so, no other subscriptions get associated with the new credit card. However, if you later make any of the following changes, *all subscriptions* will use the payment method you select.
cost-management-billing Troubleshoot Declined Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-declined-card.md
When you choose a card, Azure displays the card options that are valid in the co
> [!Note] > - American Express credit cards are not currently supported as a payment instrument in India. We have no time frame as to when it may be an accepted form of payment.
-> - Debit cards are not currently accepted in Hong Kong and Brazil.
+> - Credit cards and debit cards are accepted in most countries or regions. However, debit cards are not currently accepted in Hong Kong and Brazil.
## You're using a virtual or prepaid card
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
If you have a Microsoft Online Services Program (pay-as-you-go) account and you hav
If you want to learn how to change your default payment method to check or wire transfer, see [How to pay by invoice](../manage/pay-by-invoice.md).
-There are a few countries that don't allow the use of debit cards, however in general, you can use them to pay your Azure bill. Virtual and prepaid debit cards can't be used to pay your Azure bill.
+There are a few countries that don't allow the use of debit cards, however in general, you can use them to pay your Azure bill. Virtual and prepaid debit cards can't be used to pay your Azure bill.
+
+Hong Kong and Brazil only support credit cards.
### Check or wire transfer
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-templates.md
Title: Deploy VMs on your Azure Stack Edge Pro device via templates
-description: Describes how to create and manage virtual machines (VMs) on a Azure Stack Edge Pro device using templates.
+description: Describes how to create and manage virtual machines (VMs) on an Azure Stack Edge Pro device using templates.
Previously updated : 02/22/2021 Last updated : 04/22/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
In this tutorial, we'll use pre-written sample templates for creating resource
## VM deployment workflow
-To deploy Azure Stack Edge Pro VMs across many device, you can use a single sysprepped VHD for your full fleet, the same template for deployment, and just make minor changes to the parameters to that template for each deployment location (these changes could be by hand as weΓÇÖre doing here, or programmatic.)
+To deploy Azure Stack Edge Pro VMs across many devices, you can use a single sysprepped VHD for your full fleet, the same template for deployment, and just make minor changes to the parameters of that template for each deployment location (these changes could be made by hand, as we're doing here, or programmatically).
The high level summary of the deployment workflow using templates is as follows:
Configure these prerequisites on your Azure Stack Edge Pro device.
Configure these prerequisites on your client that will be used to access the Azure Stack Edge Pro device.
-1. [Download Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) if you are using it to upload a VHD. Alternatively, you can download AzCopy to upload a VHD. You may need to configure TLS 1.2 on your client machine if running older versions of AzCopy.
+1. [Download Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) if you're using it to upload a VHD. Alternatively, you can download AzCopy to upload a VHD. You may need to configure TLS 1.2 on your client machine if running older versions of AzCopy.
+1. [Download the VM templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip the file into a directory you'll use as a working directory, as in the sketch below.
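+
+   A minimal PowerShell sketch of this step (the destination paths are illustrative):
+
+   ```powershell
+   # Download the template bundle and extract it into a working directory (paths are illustrative)
+   Invoke-WebRequest -Uri "https://aka.ms/ase-vm-templates" -OutFile "$env:TEMP\ase-vm-templates.zip" -UseBasicParsing
+   Expand-Archive -Path "$env:TEMP\ase-vm-templates.zip" -DestinationPath "C:\ase-templates"
+   ```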
Configure these prerequisites to create the resources needed for VM creation.
### Create a resource group
+### [Az](#tab/az)
+
+Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which the Azure resources such as storage account, disk, managed disk are deployed and managed.
+
+> [!IMPORTANT]
+> All the resources are created in the same location as that of the device and the location is set to **DBELocal**.
+
+```powershell
+New-AzResourceGroup -Name <Resource group name> -Location DBELocal
+```
+
+Here's a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> New-AzResourceGroup -Name myaserg1 -Location DBELocal
+
+ResourceGroupName : myaserg1
+Location : dbelocal
+ProvisioningState : Succeeded
+Tags :
+ResourceId : /subscriptions/04a485ed-7a09-44ab-6671-66db7f111122/resourceGroups/myaserg1
+
+PS C:\WINDOWS\system32>
+```
+
+### [AzureRM](#tab/azure-rm)
+ Create an Azure resource group with [New-AzureRmResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which the Azure resources such as storage account, disk, managed disk are deployed and managed. > [!IMPORTANT]
ResourceId : /subscriptions/DDF9FC44-E990-42F8-9A91-5A6A5CC472DB/resource
PS C:\windows\system32> ``` ++ ### Create a storage account
+### [Az](#tab/az)
+
+Create a new storage account using the resource group created in the previous step. This account is a **local storage account** that will be used to upload the virtual disk image for the VM.
+
+```powershell
+New-AzStorageAccount -Name <Storage account name> -ResourceGroupName <Resource group name> -Location DBELocal -SkuName Standard_LRS
+```
+
+> [!NOTE]
+> Only the local storage accounts such as Locally redundant storage (Standard_LRS or Premium_LRS) can be created via Azure Resource Manager. To create tiered storage accounts, see the steps in [Add, connect to storage accounts on your Azure Stack Edge Pro](./azure-stack-edge-gpu-deploy-add-storage-accounts.md).
+
+Here's a sample output:
+
+```powershell
+PS C:\WINDOWS\system32>New-AzStorageAccount -Name myasesa1 -ResourceGroupName myaserg1 -Location DBELocal -SkuName Standard_LRS
+
+StorageAccountName ResourceGroupName PrimaryLocation SkuName      Kind    AccessTier CreationTime         ProvisioningState EnableHttpsTrafficOnly
+------------------ ----------------- --------------- -------      ----    ---------- ------------         ----------------- ----------------------
+myasesa1           myaserg1          DBELocal        Standard_LRS Storage            4/18/2022 8:35:09 PM Succeeded         False
+
+PS C:\WINDOWS\system32>
+```
+
+To get the storage account key, run the `Get-AzStorageAccountKey` command. Here's a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> Get-AzStorageAccountKey
+
+cmdlet Get-AzStorageAccountKey at command pipeline position 1
+Supply values for the following parameters:
+(Type !? for Help.)
+ResourceGroupName: myaserg1
+Name: myasesa1
+
+KeyName Value                                                                                    Permissions
+------- -----                                                                                    -----------
+key1    7a707uIh43qADXvuhwqtw39mwq3M97r1BflhoF2yZ6W9FNkGOCblxb7nDSiYVGQprpkKk0Au2AjmgUXUT6yCog== Full
+key2    2v1VQ6qH1CJ9bOjB15p4jg9Ejn7iazU95Qe8hAGE22MTL21Ac5skA6kZnE3nbe+rdiXiORBeVh9OpJcMOfoaZg== Full
+
+PS C:\WINDOWS\system32>
+```
+
+### [AzureRM](#tab/azure-rm)
+ Create a new storage account using the resource group created in the previous step. This account is a **local storage account** that will be used to upload the virtual disk image for the VM. ```powershell
New-AzureRmStorageAccount -Name <Storage account name> -ResourceGroupName <Resou
> [!NOTE] > Only the local storage accounts such as Locally redundant storage (Standard_LRS or Premium_LRS) can be created via Azure Resource Manager. To create tiered storage accounts, see the steps in [Add, connect to storage accounts on your Azure Stack Edge Pro](./azure-stack-edge-gpu-deploy-add-storage-accounts.md).
-A sample output is shown below.
+Here's a sample output:
```powershell PS C:\windows\system32> New-AzureRmStorageAccount -Name myasegpusavm -ResourceGroupName myasegpurgvm -Location DBELocal -SkuName Standard_LRS
Supply values for the following parameters:
ResourceGroupName: myasegpurgvm Name: myasegpusavm
-KeyName Value Permissions
-- -- --
+KeyName Value Permissions
+- -- --
key1 GsCm7QriXurqfqx211oKdfQ1C9Hyu5ZutP6Xl0dqlNNhxLxDesDej591M8y7ykSPN4fY9vmVpgc4ftkwAO7KQ== 11 key2 7vnVMJUwJXlxkXXOyVO4NfqbW5e/5hZ+VOs+C/h/ReeoszeV+qoyuBitgnWjiDPNdH4+lSm1/ZjvoBWsQ1klqQ== ll ``` ++ ### Add blob URI to hosts file
-Make sure that you have already added the blob URI in hosts file for the client that you are using to connect to Blob storage. **Run Notepad as administrator** and add the following entry for the blob URI in the `C:\windows\system32\drivers\etc\hosts`:
+Make sure that you've already added the blob URI to the hosts file on the client that you're using to connect to Blob storage. **Run Notepad as administrator** and add the following entry for the blob URI in `C:\windows\system32\drivers\etc\hosts`:
`<Device IP> <storage account name>.blob.<Device name>.<DNS domain>`
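Equivalently, a minimal sketch of appending the entry from an elevated PowerShell session (the IP address, storage account, device name, and DNS domain values are illustrative):

```powershell
# Run from an elevated PowerShell session; substitute your own device IP,
# storage account name, device name, and DNS domain
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.126.68.22 myasesa1.blob.myasegpu.wdshcsso.com"
```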
In a typical environment, you would have your DNS configured so that all storage
### (Optional) Install certificates
-Skip this step if you will connect via Storage Explorer using *http*. If you are using *https*, then you need to install appropriate certificates in Storage Explorer. In this case, install the blob endpoint certificate. For more information, see how to create and upload certificates in [Manage certificates](azure-stack-edge-gpu-manage-certificates.md).
+Skip this step if you'll connect via Storage Explorer using *http*. If you're using *https*, then you need to install appropriate certificates in Storage Explorer. In this case, install the blob endpoint certificate. For more information, see how to create and upload certificates in [Manage certificates](azure-stack-edge-gpu-manage-certificates.md).
### Create and upload a VHD
Copy any disk images to be used into page blobs in the local storage account tha
![Import blob storage endpoint certificate](media/azure-stack-edge-gpu-deploy-virtual-machine-templates/import-blob-storage-endpoint-certificate-1.png)
- - If you are using device generated certificates, download and convert the blob storage endpoint `.cer` certificate to a `.pem` format. Run the following command.
+ - If you're using device generated certificates, download and convert the blob storage endpoint `.cer` certificate to a `.pem` format. Run the following command.
```powershell PS C:\windows\system32> Certutil -encode 'C:\myasegpu1_Blob storage (1).cer' .\blobstoragecert.pem
Copy any disk images to be used into page blobs in the local storage account tha
Output Length = 1954 CertUtil: -encode command completed successfully. ```
- - If you are bringing your own certificate, use the signing chain root certificate in `.pem` format.
+ - If you're bringing your own certificate, use the signing chain root certificate in `.pem` format.
-3. After you have imported the certificate, restart Storage Explorer for the changes to take effect.
+3. After you've imported the certificate, restart Storage Explorer for the changes to take effect.
![Restart Storage Explorer](media/azure-stack-edge-gpu-deploy-virtual-machine-templates/restart-storage-explorer-1.png)
Copy any disk images to be used into page blobs in the local storage account tha
8. The storage account appears in the left-pane. Select and expand the storage account. Select **Blob containers**, right-click, and select **Create Blob Container**. Provide a name for your blob container.
-9. Select the container you just created and in the right-pane, select **Upload > Upload files**.
+9. Select the container you just created, and then in the right-pane, select **Upload > Upload files**.
![Upload VHD file 1](media/azure-stack-edge-gpu-deploy-virtual-machine-templates/upload-vhd-file-1.png)
Copy any disk images to be used into page blobs in the local storage account tha
![Upload VHD file 3](media/azure-stack-edge-gpu-deploy-virtual-machine-templates/upload-vhd-file-3.png)
-12. Copy and save the **Uri**, which you will use in later steps.
+12. Copy and save the **Uri**, which you'll use in later steps.
![Copy URI](media/azure-stack-edge-gpu-deploy-virtual-machine-templates/copy-uri-1.png)
Copy any disk images to be used into page blobs in the local storage account tha
To create image for your VM, edit the `CreateImage.parameters.json` parameters file and then deploy the template `CreateImage.json` that uses this parameter file. - ### Edit parameters file The file `CreateImage.parameters.json` takes the following parameters:
The file `CreateImage.parameters.json` takes the following parameters:
Edit the file `CreateImage.parameters.json` to include the following values for your Azure Stack Edge Pro device:
-1. Provide the OS type corresponding to the VHD you will upload. The OS type can be Windows or Linux.
+1. Provide the OS type corresponding to the VHD you'll upload. The OS type can be Windows or Linux.
```json "parameters": {
Edit the file `CreateImage.parameters.json` to include the following values for
3. Provide a unique image name. This image is used to create VM in the later steps.
- Here is a sample json that is used in this article.
+ Here's a sample json that is used in this article.
```json {
Edit the file `CreateImage.parameters.json` to include the following values for
5. Save the parameters file.
+### Deploy template
-### Deploy template
+### [Az](#tab/az)
+
+Deploy the template `CreateImage.json`. This template deploys the image resources that will be used to create VMs in the later step.
+
+> [!NOTE]
+> If you get an authentication error when you deploy the template, your Azure credentials for this session may have expired. Rerun the `Connect-AzAccount` command to connect to Azure Resource Manager on your Azure Stack Edge Pro device again.
+
+1. Run the following command:
+
+ ```powershell
+ $templateFile = "Path to CreateImage.json"
+ $templateParameterFile = "Path to CreateImage.parameters.json"
+ $RGName = "<Name of your resource group>"
+ New-AzResourceGroupDeployment `
+ -ResourceGroupName $RGName `
+ -TemplateFile $templateFile `
+ -TemplateParameterFile $templateParameterFile `
+ -Name "<Name for your deployment>"
+ ```
+
+ This command deploys an image resource.
+
+1. To query the resource, run the following command:
+
+ ```powershell
+ Get-AzImage -ResourceGroupName <Resource Group Name> -name <Image Name>
+ ```
+
+ Here's a sample output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\CreateImage\CreateImage.json"
+ PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\CreateImage\CreateImage.parameters.json"
+ PS C:\WINDOWS\system32> $RGName = "myaserg1"
+ PS C:\WINDOWS\system32> New-AzResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "deployment1"
+
+ DeploymentName : deployment1
+ ResourceGroupName : myaserg1
+ ProvisioningState : Succeeded
+ Timestamp : 4/18/2022 9:24:26 PM
+ Mode : Incremental
+ TemplateLink :
+ Parameters :
+ Name Type Value
+ =============== ========================= ==========
+ osType String Linux
+ imageName String myaselinuximg1
+ imageUri String
+ https://myasepro2stor.blob.dm1176047910p.wdshcsso.com/myasepro2cont1/ubuntu13.vhd
+
+ Outputs :
+ DeploymentDebugLogLevel :
+
+ PS C:\WINDOWS\system32>
+ ```
+
+### [AzureRM](#tab/azure-rm)
Deploy the template `CreateImage.json`. This template deploys the image resources that will be used to create VMs in the later step.
Deploy the template `CreateImage.json`. This template deploys the image resource
$templateParameterFile = "Path to CreateImage.parameters.json" $RGName = "<Name of your resource group>" New-AzureRmResourceGroupDeployment `
- -ResourceGroupName $RGName `
+ -ResourceGroupName $RGName `
-TemplateFile $templateFile ` -TemplateParameterFile $templateParameterFile ` -Name "<Name for your deployment>" ```
- This command deploys an image resource. To query the resource, run the following command:
+ This command deploys an image resource.
+
+1. To query the resource, run the following command:
```powershell Get-AzureRmImage -ResourceGroupName <Resource Group Name> -name <Image Name> ```
- Here is a sample output of a successfully created image.
+ Here's a sample output of a successfully created image.
```powershell PS C:\WINDOWS\system32> login-AzureRMAccount -EnvironmentName aztest -TenantId c0257de7-538f-415c-993a-1b87a031879d
Deploy the template `CreateImage.json`. This template deploys the image resource
DeploymentDebugLogLevel : PS C:\WINDOWS\system32> ```+ ## Create VM ### Edit parameters file to create VM+
+### [Az](#tab/az)
+To create a VM, use the `CreateVM.parameters.json` parameter file. It takes the following parameters.
+
+```json
+"vmName": {
+ "value": "<Name for your VM>"
+ },
+ "adminUsername": {
+ "value": "<Username to log into the VM>"
+ },
+ "Password": {
+ "value": "<Password to log into the VM>"
+ },
+ "imageName": {
+ "value": "<Name for your image>"
+ },
+ "vmSize": {
+ "value": "<A supported size for your VM>"
+ },
+ "vnetName": {
+ "value": "<Name for the virtual network, use ASEVNET>"
+ },
+ "subnetName": {
+ "value": "<Name for the subnet, use ASEVNETsubNet>"
+ },
+ "vnetRG": {
+ "value": "<Resource group for Vnet, use ASERG>"
+ },
+ "nicName": {
+ "value": "<Name for the network interface>"
+ },
+ "privateIPAddress": {
+ "value": "<Private IP address, enter a static IP in the subnet created earlier or leave empty to assign DHCP>"
+ },
+ "IPConfigName": {
+ "value": "<Name for the ipconfig associated with the network interface>"
+ }
+```
+
+Assign appropriate parameters in `CreateVM.parameters.json` for your Azure Stack Edge Pro device.
+
+1. Provide a unique name, network interface name, and ipconfig name.
+1. Enter a username, password, and a supported VM size.
+1. When you enabled the network interface for compute, a virtual switch and a virtual network were automatically created on that network interface. You can query the existing virtual network to get the Vnet name, Subnet name, and Vnet resource group name.
+
+ Run the following command:
+
+ ```powershell
+ Get-AzVirtualNetwork
+ ```
+ Here's the sample output:
+
+ ```powershell
+
+ PS C:\WINDOWS\system32> Get-AzVirtualNetwork
+
+ Name : ASEVNET
+ ResourceGroupName : ASERG
+ Location : dbelocal
+ Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/providers/Microsoft
+ .Network/virtualNetworks/ASEVNET
+ Etag : W/"990b306d-18b6-41ea-a456-b275efe21105"
+ ResourceGuid : f8309d81-19e9-42fc-b4ed-d573f00e61ed
+ ProvisioningState : Succeeded
+ Tags :
+ AddressSpace : {
+ "AddressPrefixes": [
+ "10.57.48.0/21"
+ ]
+ }
+ DhcpOptions : null
+ Subnets : [
+ {
+ "Name": "ASEVNETsubNet",
+ "Etag": "W/\"990b306d-18b6-41ea-a456-b275efe21105\"",
+ "Id": "/subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/provider
+ s/Microsoft.Network/virtualNetworks/ASEVNET/subnets/ASEVNETsubNet",
+ "AddressPrefix": "10.57.48.0/21",
+ "IpConfigurations": [],
+ "ResourceNavigationLinks": [],
+ "ServiceEndpoints": [],
+ "ProvisioningState": "Succeeded"
+ }
+ ]
+ VirtualNetworkPeerings : []
+ EnableDDoSProtection : false
+ EnableVmProtection : false
+
+ PS C:\WINDOWS\system32>
+ ```
+
+ Use ASEVNET for Vnet name, ASEVNETsubNet for Subnet name, and ASERG for Vnet resource group name.
+
+1. Now you'll need a static IP address to assign to the VM that is in the subnet network defined above. Replace **PrivateIPAddress** with this address in the parameter file. To have the VM get an IP address from your local DHCP server, leave the `privateIPAddress` value blank.
+
+ ```json
+ "privateIPAddress": {
+ "value": "5.5.153.200"
+ },
+ ```
+
+1. Save the parameters file.
+
+ Here is a sample json used in this article.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "value": "vm1"
+ },
+ "adminUsername": {
+ "value": "Administrator"
+ },
+ "Password": {
+ "value": "Password1"
+ },
+ "imageName": {
+ "value": "myaselinuximg1"
+ },
+ "vmSize": {
+ "value": "Standard_NC4as_T4_v3"
+ },
+ "vnetName": {
+ "value": "vswitch1"
+ },
+ "subnetName": {
+ "value": "vswitch1subNet"
+ },
+ "vnetRG": {
+ "value": "myaserg1"
+ },
+ "nicName": {
+ "value": "nic1"
+ },
+ "privateIPAddress": {
+ "value": ""
+ },
+ "IPConfigName": {
+ "value": "ipconfig1"
+ }
+ }
+ }
+ ```
+
+### [AzureRM](#tab/azure-rm)
+ To create a VM, use the `CreateVM.parameters.json` parameter file. It takes the following parameters. ```json
Assign appropriate parameters in `CreateVM.parameters.json` for your Azure Stack
1. Provide a unique name, network interface name, and ipconfig name. 1. Enter a username, password, and a supported VM size.
-1. When you enabled the network interface for compute, a virtual switch and a virtual network was automatically created on that network interface. You can query the existing virtual network to get the Vnet name, Subnet name, and Vnet resource group name.
+1. When you enabled the network interface for compute, a virtual switch and a virtual network were automatically created on that network interface. You can query the existing virtual network to get the Vnet name, Subnet name, and Vnet resource group name.
Run the following command: ```powershell Get-AzureRmVirtualNetwork ```
- Here is the sample output:
+ Here's the sample output:
```powershell
Assign appropriate parameters in `CreateVM.parameters.json` for your Azure Stack
}, ```
-4. Save the parameters file.
+1. Save the parameters file.
- Here is a sample json that is used in this article.
+ Here's a sample json that is used in this article.
```json {
Assign appropriate parameters in `CreateVM.parameters.json` for your Azure Stack
} } }
- ```
+ ```
+ ### Deploy template to create VM Deploy the VM creation template `CreateVM.json`. This template creates a network interface from the existing VNet and creates VM from the deployed image.
+### [Az](#tab/az)
+
+1. Run the following command:
+
+ ```powershell
+ $templateFile = "<Path to CreateVM.json>"
+ $templateParameterFile = "<Path to CreateVM.parameters.json>"
+ $RGName = "<Resource group name>"
+
+ New-AzResourceGroupDeployment `
+ -ResourceGroupName $RGName `
+ -TemplateFile $templateFile `
+ -TemplateParameterFile $templateParameterFile `
+ -Name "<DeploymentName>"
+ ```
+ The VM creation will take 15-20 minutes. Here's a sample output of a successfully created VM:
+
+ ```powershell
+ PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\CreateVM\CreateVM.json"
+ PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\CreateVM\CreateVM.parameters.json"
+ PS C:\WINDOWS\system32> $RGName = "myaserg1"
+ PS C:\WINDOWS\system32> New-AzResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "Deployment2"
+
+ DeploymentName : Deployment2
+ ResourceGroupName : myaserg1
+ ProvisioningState : Succeeded
+ Timestamp : 04/18/2022 1:51:28 PM
+ Mode : Incremental
+ TemplateLink :
+ Parameters :
+ Name Type Value
+ =============== ========================= ==========
+ vmName String vm1
+ adminUsername String Administrator
+ password String Password1
+ imageName String myaselinuximg
+ vmSize String Standard_NC4as_T4_v3
+ vnetName String vswitch1
+ vnetRG String myaserg1
+ subnetName String vswitch1subNet
+ nicName String nic1
+ ipConfigName String ipconfig1
+ privateIPAddress String
+
+ Outputs :
+ DeploymentDebugLogLevel :
+
+ PS C:\WINDOWS\system32
+ ```
+
+ You can also run the `New-AzResourceGroupDeployment` command asynchronously with the `-AsJob` parameter. Here's a sample output when the cmdlet runs in the background. You can then query the status of the job that is created using the `Get-Job` cmdlet.
+
+ ```powershell
+ PS C:\WINDOWS\system32> New-AzResourceGroupDeployment `
+ >> -ResourceGroupName $RGName `
+ >> -TemplateFile $templateFile `
+ >> -TemplateParameterFile $templateParameterFile `
+ >> -Name "Deployment4" `
+ >> -AsJob
+
+ Id  Name             PSJobTypeName    State    HasMoreData  Location   Command
+ --  ----             -------------    -----    -----------  --------   -------
+ 4   Long Running...  AzureLongRun...  Running  True         localhost  New-AzResourceGro...
+
+ PS C:\WINDOWS\system32> Get-Job -Id 4
+
+ Id  Name  PSJobTypeName  State  HasMoreData  Location  Command
+ --  ----  -------------  -----  -----------  --------  -------
+ ```
+
+1. Check if the VM is successfully provisioned. Run the following command:
+
+ `Get-AzVm`
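+
+ For example, a quick check using the resource group and VM names from this article's sample parameters file (values are illustrative):
+
+ ```powershell
+ # Query the specific VM created from the template (names taken from the sample parameters file)
+ Get-AzVM -ResourceGroupName myaserg1 -Name vm1
+ ```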
+
+### [AzureRM](#tab/azure-rm)
+ 1. Run the following command: ```powershell
Deploy the VM creation template `CreateVM.json`. This template creates a network
-Name "<DeploymentName>" ```
- The VM creation will take 15-20 minutes. Here is a sample output of a successfully created VM.
+ The VM creation will take 15-20 minutes. Here's a sample output of a successfully created VM.
```powershell PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\CreateVM\CreateVM.json"
Deploy the VM creation template `CreateVM.json`. This template creates a network
PS C:\WINDOWS\system32 ```
+ You can also run the `New-AzureRmResourceGroupDeployment` command asynchronously with the `-AsJob` parameter. Here's a sample output when the cmdlet runs in the background. You can then query the status of the job that is created using the `Get-Job` cmdlet.
+ You can also run the `New-AzureRmResourceGroupDeployment` command asynchronously with `ΓÇôAsJob` parameter. Here's a sample output when the cmdlet runs in the background. You can then query the status of job that is created using the `Get-Job` cmdlet.
```powershell PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment `
Deploy the VM creation template `CreateVM.json`. This template creates a network
-- - - -- -- -- - ```
-7. Check if the VM is successfully provisioned. Run the following command:
+1. Check if the VM is successfully provisioned. Run the following command:
`Get-AzureRmVm` + ## Connect to a VM
Follow these steps to connect to a Linux VM.
[!INCLUDE [azure-stack-edge-gateway-connect-vm](../../includes/azure-stack-edge-gateway-connect-virtual-machine-linux.md)] - ## Next steps [Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0&preserve-view=true)
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Last updated 03/30/2022
This article lists the security alerts you might get from Microsoft Defender for Cloud and any Microsoft Defender plans you've enabled. The alerts shown in your environment depend on the resources and services you're protecting, as well as your customized configuration.
-At the bottom of this page, there's a table describing the Microsoft Defender for Cloud kill chain aligned with version 7 of the [MITRE ATT&CK matrix](https://attack.mitre.org/versions/v7/).
+At the bottom of this page, there's a table describing the Microsoft Defender for Cloud kill chain aligned with version 9 of the [MITRE ATT&CK matrix](https://attack.mitre.org/versions/v9/).
[Learn how to respond to these alerts](managing-and-responding-alerts.md).
Understanding the intention of an attack can help you investigate and report the
The series of steps that describe the progression of a cyberattack from reconnaissance to data exfiltration is often referred to as a "kill chain".
-Defender for Cloud's supported kill chain intents are based on [version 7 of the MITRE ATT&CK matrix](https://attack.mitre.org/versions/v7/) and described in the table below.
+Defender for Cloud's supported kill chain intents are based on [version 9 of the MITRE ATT&CK matrix](https://attack.mitre.org/versions/v9/) and described in the table below.
| Tactic | Description | |--|-|
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
To help you understand how important each recommendation is to your overall secu
Defender for Cloud provides: -- **Security alerts** - When Defender for Cloud detects a threat in any area of your environment, it generates a security alert. These alerts describe details of the affected resources, suggested remediation steps, and in some cases an option to trigger a logic app in response. Whether an alert is generated by Defender for Cloud, or received by Defender for Cloud from an integrated security product, you can export it. To export your alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md). Defender for Cloud's threat protection includes fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign, where it started and what kind of impact it had on your resources. [Defender for Cloud's supported kill chain intents are based on version 7 of the MITRE ATT&CK matrix](alerts-reference.md#intentions).
+- **Security alerts** - When Defender for Cloud detects a threat in any area of your environment, it generates a security alert. These alerts describe details of the affected resources, suggested remediation steps, and in some cases an option to trigger a logic app in response. Whether an alert is generated by Defender for Cloud, or received by Defender for Cloud from an integrated security product, you can export it. To export your alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md). Defender for Cloud's threat protection includes fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign, where it started and what kind of impact it had on your resources. [Defender for Cloud's supported kill chain intents are based on version 9 of the MITRE ATT&CK matrix](alerts-reference.md#intentions).
- **Advanced threat protection features** for virtual machines, SQL databases, containers, web applications, your network, and more - Protections include securing the management ports of your VMs with [just-in-time access](just-in-time-access-overview.md), and [adaptive application controls](adaptive-application-controls.md) to create allowlists for what apps should and shouldn't run on your machines.
defender-for-cloud Managing And Responding Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/managing-and-responding-alerts.md
For an overview of how Defender for Cloud generates alerts, see [How Microsoft D
The right pane includes the **Alert details** tab containing further details of the alert to help you investigate the issue: IP addresses, files, processes, and more.
- ![Suggestions for what to do about security alerts.](./media/managing-and-responding-alerts/security-center-alert-remediate.png)
+ :::image type="content" source="./media/managing-and-responding-alerts/security-center-alert-remediate.png" alt-text="Suggestions for what to do about security alerts.":::
Also in the right pane is the **Take action** tab. Use this tab to take further actions regarding the security alert. Actions such as: - *Inspect resource context* - sends you to the resource's activity logs that support the security alert
For an overview of how Defender for Cloud generates alerts, see [How Microsoft D
- *Trigger automated response* - provides the option to trigger a logic app as a response to this security alert - *Suppress similar alerts* - provides the option to suppress future alerts with similar characteristics if the alert isn't relevant for your organization
- ![Take action tab.](./media/managing-and-responding-alerts/alert-take-action.png)
+ :::image type="content" source="./media/managing-and-responding-alerts/alert-take-action.png" alt-text="Take action tab.":::
## Change the status of multiple security alerts at once
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
Title: 'Tutorial: Regulatory compliance checks - Microsoft Defender for Cloud' description: 'Tutorial: Learn how to Improve your regulatory compliance using Microsoft Defender for Cloud.'++ Previously updated : 11/09/2021 Last updated : 04/26/2022 # Tutorial: Improve your regulatory compliance [!INCLUDE [Banner for top of topics](./includes/banner.md)]
-Microsoft Defender for Cloud helps streamline the process for meeting regulatory compliance requirements, using the **regulatory compliance dashboard**.
-
-Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards that you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards.
+Microsoft Defender for Cloud helps streamline the process for meeting regulatory compliance requirements, using the **regulatory compliance dashboard**. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards that you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards.
When you enable Defender for Cloud on an Azure subscription, the [Azure Security Benchmark](/security/benchmark/azure/introduction) is automatically assigned to that subscription. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
If you don't have an Azure subscription, create a [free account](https://azure
To step through the features covered in this tutorial: - [Enable enhanced security features](defender-for-cloud-introduction.md). You can enable these for free for 30 days.-- You must be signed in with an account that has reader access to the policy compliance data (**Security Reader** is insufficient). The role of **Global reader** for the subscription will work. At a minimum, you'll need to have **Resource Policy Contributor** and **Security Admin** roles assigned.
+- You must be signed in with an account that has reader access to the policy compliance data. The **Global reader** role for the subscription has access to the policy compliance data, but the **Security Reader** role doesn't. At a minimum, you'll need to have the **Resource Policy Contributor** and **Security Admin** roles assigned.
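+
+ As a sketch, assigning one of these roles with Az PowerShell might look like the following (the user and subscription values are placeholders):
+
+ ```azurepowershell
+ # Assign the Resource Policy Contributor role at subscription scope
+ New-AzRoleAssignment -SignInName "user@contoso.com" `
+   -RoleDefinitionName "Resource Policy Contributor" `
+   -Scope "/subscriptions/<subscription-id>"
+ ```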
## Assess your regulatory compliance
For example, you might want Defender for Cloud to email a specific user when a c
## FAQ - Regulatory compliance dashboard -- [What standards are supported in the compliance dashboard?](#what-standards-are-supported-in-the-compliance-dashboard)-- [Why do some controls appear grayed out?](#why-do-some-controls-appear-grayed-out)-- [How can I remove a built-in standard, like PCI-DSS, ISO 27001, or SOC2 TSP from the dashboard?](#how-can-i-remove-a-built-in-standard-like-pci-dss-iso-27001-or-soc2-tsp-from-the-dashboard)-- [I made the suggested changed based on the recommendation, yet it isn't being reflected in the dashboard](#i-made-the-suggested-changed-based-on-the-recommendation-yet-it-isnt-being-reflected-in-the-dashboard)-- [What permissions do I need to access the compliance dashboard?](#what-permissions-do-i-need-to-access-the-compliance-dashboard)-- [The regulatory compliance dashboard isn't loading for me](#the-regulatory-compliance-dashboard-isnt-loading-for-me)-- [How can I view a report of passing and failing controls per standard in my dashboard?](#how-can-i-view-a-report-of-passing-and-failing-controls-per-standard-in-my-dashboard)-- [How can I download a report with compliance data in a format other than PDF?](#how-can-i-download-a-report-with-compliance-data-in-a-format-other-than-pdf)-- [How can I create exceptions for some of the policies in the regulatory compliance dashboard?](#how-can-i-create-exceptions-for-some-of-the-policies-in-the-regulatory-compliance-dashboard)-- [What Microsoft Defender plans or licenses do I need to use the regulatory compliance dashboard?](#what-microsoft-defender-plans-or-licenses-do-i-need-to-use-the-regulatory-compliance-dashboard)-- [How do I know which benchmark or standard to use?](#how-do-i-know-which-benchmark-or-standard-to-use)
+- [Tutorial: Improve your regulatory compliance](#tutorial-improve-your-regulatory-compliance)
+ - [Prerequisites](#prerequisites)
+ - [Assess your regulatory compliance](#assess-your-regulatory-compliance)
+ - [Improve your compliance posture](#improve-your-compliance-posture)
+ - [Generate compliance status reports and certificates](#generate-compliance-status-reports-and-certificates)
+ - [Configure frequent exports of your compliance status data](#configure-frequent-exports-of-your-compliance-status-data)
+ - [Run workflow automations when there are changes to your compliance](#run-workflow-automations-when-there-are-changes-to-your-compliance)
+ - [FAQ - Regulatory compliance dashboard](#faq---regulatory-compliance-dashboard)
+ - [What standards are supported in the compliance dashboard?](#what-standards-are-supported-in-the-compliance-dashboard)
+ - [Why do some controls appear grayed out?](#why-do-some-controls-appear-grayed-out)
+ - [How can I remove a built-in standard, like PCI-DSS, ISO 27001, or SOC2 TSP from the dashboard?](#how-can-i-remove-a-built-in-standard-like-pci-dss-iso-27001-or-soc2-tsp-from-the-dashboard)
+ - [I made the suggested changes based on the recommendation, but it isn't being reflected in the dashboard?](#i-made-the-suggested-changes-based-on-the-recommendation-but-it-isnt-being-reflected-in-the-dashboard)
+ - [What permissions do I need to access the compliance dashboard?](#what-permissions-do-i-need-to-access-the-compliance-dashboard)
+ - [The regulatory compliance dashboard isn't loading for me](#the-regulatory-compliance-dashboard-isnt-loading-for-me)
+ - [How can I view a report of passing and failing controls per standard in my dashboard?](#how-can-i-view-a-report-of-passing-and-failing-controls-per-standard-in-my-dashboard)
+ - [How can I download a report with compliance data in a format other than PDF?](#how-can-i-download-a-report-with-compliance-data-in-a-format-other-than-pdf)
+ - [How can I create exceptions for some of the policies in the regulatory compliance dashboard?](#how-can-i-create-exceptions-for-some-of-the-policies-in-the-regulatory-compliance-dashboard)
+ - [What Microsoft Defender plans or licenses do I need to use the regulatory compliance dashboard?](#what-microsoft-defender-plans-or-licenses-do-i-need-to-use-the-regulatory-compliance-dashboard)
+ - [How do I know which benchmark or standard to use?](#how-do-i-know-which-benchmark-or-standard-to-use)
+ - [Next steps](#next-steps)
### What standards are supported in the compliance dashboard? By default, the regulatory compliance dashboard shows you the Azure Security Benchmark. The Azure Security Benchmark is the Microsoft-authored, Azure-specific guidelines for security, and compliance best practices based on common compliance frameworks. Learn more in the [Azure Security Benchmark introduction](../security/benchmarks/introduction.md).
Some controls are grayed out. These controls don't have any Defender for Cloud a
### How can I remove a built-in standard, like PCI-DSS, ISO 27001, or SOC2 TSP from the dashboard? To customize the regulatory compliance dashboard, and focus only on the standards that are applicable to you, you can remove any of the displayed regulatory standards that aren't relevant to your organization. To remove a standard, follow the instructions in [Remove a standard from your dashboard](update-regulatory-compliance-packages.md#remove-a-standard-from-your-dashboard).
-### I made the suggested changed based on the recommendation, yet it isn't being reflected in the dashboard
+### I made the suggested changes based on the recommendation, but it isn't being reflected in the dashboard?
After you take action to resolve recommendations, wait 12 hours to see the changes to your compliance data. Assessments are run approximately every 12 hours, so you'll see the effect on your compliance data only after the assessments run. ### What permissions do I need to access the compliance dashboard?
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/13/2022 Last updated : 04/26/2022 # What's new in Microsoft Defender for Cloud?
Updates in April include:
### New Defender for Servers plans
-Microsoft Defender for Servers is now offered in two incremental plans.
+Microsoft Defender for Servers is now offered in two incremental plans:
-- Microsoft Defender for Servers Plan 2, formerly Defender for Servers-- Microsoft Defender for Servers Plan 1, including support for Defender for Endpoint only
+- Defender for Servers Plan 2, formerly Defender for Servers
+- Defender for Servers Plan 1, which provides support for Microsoft Defender for Endpoint only
-While Microsoft Defender for Servers Plan 2 continues to provide, complete protections from threats and vulnerabilities to your cloud and on-premises workloads, Microsoft Defender for Servers Plan 1 provides endpoint protection only, powered by Microsoft Defender for Endpoint and natively integrated with Defender for Cloud. Read more about the [Microsoft Defender for Servers plans](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).
+While Defender for Servers Plan 2 continues to provide protections from threats and vulnerabilities to your cloud and on-premises workloads, Defender for Servers Plan 1 provides endpoint protection only, powered by the natively integrated Defender for Endpoint. Read more about the [Defender for Servers plans](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).
-If you have been using Defender for Servers until now ΓÇô no action is required.
-
-In addition, Defender for Cloud also begins gradual support for the [Defender for Endpoint unified agent for Windows Server 2012 R2 and 2016](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292). Defender for Servers Plan 1 deploys the new unified agent to Windows Server 2012 R2 and 2016 workloads. Defender for Servers Plan 2 deploys the legacy agent to Windows Server 2012 R2 and 2016 workloads, and will start deploying the unified agent soon.
+If you have been using Defender for Servers until now, no action is required.
+
+In addition, Defender for Cloud also begins gradual support for the [Defender for Endpoint unified agent for Windows Server 2012 R2 and 2016](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292). Defender for Servers Plan 1 deploys the new unified agent to Windows Server 2012 R2 and 2016 workloads. Defender for Servers Plan 2 deploys the legacy agent to Windows Server 2012 R2 and 2016 workloads and will start deploying the unified agent soon.
### Relocation of custom recommendations
All of the alerts for Microsoft Defender for Storage will continue to include th
### See the activity logs that relate to a security alert
-As part of the actions you can take to [triage a security alert](managing-and-responding-alerts.md#respond-to-security-alerts), you can find the related platform logs in **Inspect resource context** to gain context about the affected resource.
+As part of the actions you can take to [evaluate a security alert](managing-and-responding-alerts.md#respond-to-security-alerts), you can find the related platform logs in **Inspect resource context** to gain context about the affected resource.
Microsoft Defender for Cloud identifies platform logs that are within one day of the alert.
-The platform logs can help you evaluate the security threat and identify steps that you can take to mitigate risk.
+The platform logs can help you evaluate the security threat and identify steps that you can take to mitigate the identified risk.
## March 2022
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
Previously updated : 03/28/2022 Last updated : 04/26/2022 # Migrate databases at scale using automation (Preview)
-The [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to assess, get Azure recommendations and migrate your SQL Server databases to Azure. Using automation with [Azure PowerShell](/powershell/module/az.datamigration) or [Azure CLI](/cli/azure/datamigration), you can leverage the capabilities of the extension with Azure Database Migration Service to migrate one or more databases at scale (including databases across multiple SQL Server instances).
+The [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to assess, get Azure recommendations, and migrate your SQL Server databases to Azure. Using automation with [Azure PowerShell](/powershell/module/az.datamigration) or [Azure CLI](/cli/azure/datamigration), you can use the capabilities of the extension with Azure Database Migration Service to migrate one or more databases at scale (including databases across multiple SQL Server instances).
The following sample scripts can be referenced to suit your migration scenario using Azure PowerShell or Azure CLI:

|Scripting language |Migration scenario |Azure Samples link |
||||
|PowerShell |SQL Server assessment |[Azure-Samples/data-migration-sql/PowerShell/sql-server-assessment](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-assessment.md) |
-|PowerShell |SQL Server to Azure SQL Managed Instance (using file share) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-mi-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-mi-fileshare.md) |
-|PowerShell |SQL Server to Azure SQL Managed Instance (using Azure storage) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-mi-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-mi-blob.md) |
-|PowerShell |SQL Server to SQL Server on Azure Virtual Machines (using file share) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-vm-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-vm-fileshare.md) |
-|PowerShell |SQL Server to SQL Server on Azure Virtual Machines (using Azure Storage) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-vm-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-vm-blob.md) |
+|PowerShell |SQL Server to **Azure SQL Managed Instance** (using file share) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-mi-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-mi-fileshare.md) |
+|PowerShell |SQL Server to **Azure SQL Managed Instance** (using Azure storage) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-mi-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-mi-blob.md) |
+|PowerShell |SQL Server to **SQL Server on Azure Virtual Machines** (using file share) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-vm-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-vm-fileshare.md) |
+|PowerShell |SQL Server to **SQL Server on Azure Virtual Machines** (using Azure Storage) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-vm-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-vm-blob.md) |
+|PowerShell |SQL Server to **Azure SQL Database** |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-db](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-db) |
|PowerShell |Sample: End-to-End migration automation |[Azure-Samples/data-migration-sql/PowerShell/scripts/](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/scripts/) |
|PowerShell |Sample: End-to-End migration automation for multiple databases |[Azure-Samples/data-migration-sql/PowerShell/scripts/multiple%20databases/](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/scripts/multiple%20databases/) |
|CLI |SQL Server assessment |[Azure-Samples/data-migration-sql/CLI/sql-server-assessment](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-assessment.md) |
-|CLI |SQL Server to Azure SQL Managed Instance (using file share) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-mi-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-mi-fileshare.md) |
-|CLI |SQL Server to Azure SQL Managed Instance (using Azure storage) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-mi-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-mi-blob.md) |
-|CLI |SQL Server to SQL Server on Azure Virtual Machines (using file share) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-vm-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-vm-fileshare.md) |
-|CLI |SQL Server to SQL Server on Azure Virtual Machines (using Azure Storage) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-vm-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-vm-blob.md) |
+|CLI |SQL Server to **Azure SQL Managed Instance** (using file share) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-mi-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-mi-fileshare.md) |
+|CLI |SQL Server to **Azure SQL Managed Instance** (using Azure storage) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-mi-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-mi-blob.md) |
+|CLI |SQL Server to **SQL Server on Azure Virtual Machines** (using file share) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-vm-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-vm-fileshare.md) |
+|CLI |SQL Server to **SQL Server on Azure Virtual Machines** (using Azure Storage) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-vm-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-vm-blob.md) |
+|CLI |SQL Server to **Azure SQL Database** |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-db](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-db) |
|CLI |Sample: End-to-End migration automation |[Azure-Samples/data-migration-sql/CLI/scripts/](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/scripts/) |
|CLI |Sample: End-to-End migration automation for multiple databases |[Azure-Samples/data-migration-sql/CLI/scripts/multiple%20databases/](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/scripts/multiple%20databases/) |
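As an illustration of the assessment scenario listed above, a minimal Az PowerShell sketch (the connection string and output folder are placeholders, and the cmdlet is assumed from the Az.DataMigration module; see the linked samples for complete scripts):

```azurepowershell
# Run a SQL Server assessment and write the results to a local folder
Get-AzDataMigrationAssessment `
  -ConnectionString "Data Source=SQLServer1;Initial Catalog=master;Integrated Security=True;" `
  -OutputFolder "C:\AssessmentOutput" `
  -Overwrite
```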
The following sample scripts can be referenced to suit your migration scenario u
Pre-requisites that are common across all supported migration scenarios using Azure PowerShell or Azure CLI are:
* Have an Azure account that is assigned to one of the built-in roles listed below:
- - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Contributor for the target Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database (and Storage Account to upload your database backup files from SMB network share).
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database.
- Owner or Contributor role for the Azure subscription.
> [!IMPORTANT]
> An Azure account is only required when running the migration steps; it is not required for the assessment or Azure recommendation steps.
-* Create a target [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/create-configure-managed-instance-powershell-quickstart) or [SQL Server on Azure Virtual Machine](/azure/azure-sql/virtual-machines/windows/sql-vm-create-powershell-quickstart)
+* Create a target [Azure SQL Managed Instance](../azure-sql/managed-instance/create-configure-managed-instance-powershell-quickstart.md), [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-vm-create-powershell-quickstart.md), or [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md).
> [!IMPORTANT]
- > If you have an existing Azure Virtual Machine, it should be registered with [SQL IaaS Agent extension in Full management mode](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management#management-modes).
+ > If you have an existing Azure Virtual Machine, it should be registered with [SQL IaaS Agent extension in Full management mode](../azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md#management-modes).
* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
* Use one of the following storage options for the full database and transaction log backup files:
  - SMB network share
Pre-requisites that are common across all supported migration scenarios using Az
> - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (that is, full and t-log) into a single backup media is not supported.
> - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
-* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine before migrating data. To learn more, see [Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance](/azure/azure-sql/managed-instance/tde-certificate-migrate) and [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
+* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine before migrating data. To learn more, see [Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance](../azure-sql/managed-instance/tde-certificate-migrate.md) and [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
> [!TIP] > If your database contains sensitive data that is protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), migration process using Azure Data Studio with DMS will automatically migrate your Always Encrypted keys to your target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine.
$migOpId = az datamigration sql-managed-instance show --managed-instance-name "m
az datamigration sql-managed-instance cutover --managed-instance-name "mySQLMI" --resource-group "myRG" --target-db-name "AdventureWorks2008" --migration-operation-id $migOpId
```
-> [!TIP]
-> If you receive the error "The subscription is not registered to use namespace 'Microsoft.DataMigration'. See https://aka.ms/rps-not-found for how to register subscriptions.", run
-> ```azurepowershell Register-AzResourceProvider -ProviderNamespace "Microsoft.DataMigration". ```
+If you receive the error "The subscription is not registered to use namespace 'Microsoft.DataMigration'. See https://aka.ms/rps-not-found for how to register subscriptions.", run this command:
+```azurepowershell
+ Register-AzResourceProvider -ProviderNamespace "Microsoft.DataMigration"
+```
## Next steps
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
-+
event-hubs Transport Layer Security Audit Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-audit-minimum-version.md
+
+ Title: Use Azure Policy to audit for compliance of minimum TLS version for an Azure Event Hubs namespace
+
+description: Configure Azure Policy to audit compliance of Azure Event Hubs for using a minimum version of Transport Layer Security (TLS).
+++++ Last updated : 04/25/2022+++
+# Use Azure Policy to audit for compliance of minimum TLS version for an Azure Event Hubs namespace (Preview)
+
+If you have a large number of Microsoft Azure Event Hubs namespaces, you may want to perform an audit to make sure that all namespaces are configured for the minimum version of TLS that your organization requires. To audit a set of Event Hubs namespaces for their compliance, use Azure Policy. Azure Policy is a service that you can use to create, assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep those resources compliant with your corporate standards and service level agreements. For more information, see [Overview of Azure Policy](../governance/policy/overview.md).
+
+## Create a policy with an audit effect
+
+Azure Policy supports effects that determine what happens when a policy rule is evaluated against a resource. The audit effect creates a warning when a resource is not in compliance, but does not stop the request. For more information about effects, see [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
+
+To create a policy with an audit effect for the minimum TLS version with the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the Azure Policy service.
+2. Under the **Authoring** section, select **Definitions**.
+3. Select **Add policy definition** to create a new policy definition.
+4. For the **Definition location** field, select the **More** button to specify where the audit policy resource is located.
+5. Specify a name for the policy. You can optionally specify a description and category.
+6. Under **Policy rule**, add the following policy definition to the **policyRule** section.
+
+    ```json
+    {
+      "policyRule": {
+        "if": {
+          "allOf": [
+            {
+              "field": "type",
+              "equals": "Microsoft.EventHub/namespaces"
+            },
+            {
+              "not": {
+                "field": "Microsoft.EventHub/namespaces/minimumTlsVersion",
+                "equals": "1.2"
+              }
+            }
+          ]
+        },
+        "then": {
+          "effect": "audit"
+        }
+      }
+    }
+    ```
+
+7. Save the policy.
+
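+As an alternative to the portal steps above, a sketch of creating the same definition with Az PowerShell (the definition name and the local `audit-policy.json` file containing the policyRule above are assumptions):
+
+```azurepowershell
+# Create the audit policy definition from a local rule file
+New-AzPolicyDefinition -Name 'audit-eventhub-min-tls' `
+  -DisplayName 'Audit minimum TLS version for Event Hubs namespaces' `
+  -Policy 'audit-policy.json'
+```
+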
+### Assign the policy
+
+Next, assign the policy to a resource. The scope of the policy corresponds to that resource and any resources beneath it. For more information on policy assignment, see [Azure Policy assignment structure](../governance/policy/concepts/assignment-structure.md).
+
+To assign the policy with the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the Azure Policy service.
+2. Under the **Authoring** section, select **Assignments**.
+3. Select **Assign policy** to create a new policy assignment.
+4. For the **Scope** field, select the scope of the policy assignment.
+5. For the **Policy definition** field, select the **More** button, then select the policy you defined in the previous section from the list.
+6. Provide a name for the policy assignment. The description is optional.
+7. Leave **Policy enforcement** set to _Enabled_. This setting has no effect on the audit policy.
+8. Select **Review + create** to create the assignment.
+
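+A sketch of the same assignment with Az PowerShell (the names and scope are placeholders; the definition name follows the earlier sketch):
+
+```azurepowershell
+# Look up the definition created earlier and assign it at subscription scope
+$definition = Get-AzPolicyDefinition -Name 'audit-eventhub-min-tls'
+New-AzPolicyAssignment -Name 'audit-eventhub-min-tls-assignment' `
+  -PolicyDefinition $definition `
+  -Scope '/subscriptions/<subscription-id>'
+```
+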
+### View compliance report
+
+After you have assigned the policy, you can view the compliance report. The compliance report for an audit policy provides information on which Event Hubs namespaces are not in compliance with the policy. For more information, see [Get policy compliance data](../governance/policy/how-to/get-compliance-data.md).
+
+It may take several minutes for the compliance report to become available after the policy assignment is created.
+
+To view the compliance report in the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the Azure Policy service.
+2. Select **Compliance**.
+3. Filter the results for the name of the policy assignment that you created in the previous step. The report shows how many resources are not in compliance with the policy.
+4. You can drill down into the report for additional details, including a list of Event Hubs namespaces that are not in compliance.
+
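+The compliance results can also be summarized from Az PowerShell, for example (the assignment name follows the earlier sketch):
+
+```azurepowershell
+# Summarize compliance state for the policy assignment
+Get-AzPolicyStateSummary -PolicyAssignmentName 'audit-eventhub-min-tls-assignment'
+```
+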
+## Use Azure Policy to enforce the minimum TLS version
+
+Azure Policy supports cloud governance by ensuring that Azure resources adhere to requirements and standards. To enforce a minimum TLS version requirement for the Event Hubs namespaces in your organization, you can create a policy that prevents the creation of a new Event Hubs namespace whose minimum TLS requirement is set to an older version of TLS than the policy dictates. This policy also prevents all configuration changes to an existing namespace if its minimum TLS version setting is not compliant with the policy.
+
+The enforcement policy uses the deny effect to prevent a request that would create or modify an Event Hubs namespace so that the minimum TLS version no longer adheres to your organization's standards. For more information about effects, see [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
+
+To create a policy with a deny effect for a minimum TLS version that is less than TLS 1.2, provide the following JSON in the **policyRule** section of the policy definition:
+
+```json
+{
+  "policyRule": {
+    "if": {
+      "allOf": [
+        {
+          "field": "type",
+          "equals": "Microsoft.EventHub/namespaces"
+        },
+        {
+          "not": {
+            "field": "Microsoft.EventHub/namespaces/minimumTlsVersion",
+            "equals": "1.2"
+          }
+        }
+      ]
+    },
+    "then": {
+      "effect": "deny"
+    }
+  }
+}
+```
+
+After you create the policy with the deny effect and assign it to a scope, a user cannot create an Event Hubs namespace with a minimum TLS version that is older than 1.2. Nor can a user make any configuration changes to an existing Event Hubs namespace that currently requires a minimum TLS version that is older than 1.2. Attempting to do so results in an error. The required minimum TLS version for the Event Hubs namespace must be set to 1.2 to proceed with namespace creation or configuration.
+
+An error will be shown if you try to create an Event Hubs namespace with the minimum TLS version set to TLS 1.0 when a policy with a deny effect requires that the minimum TLS version be set to TLS 1.2.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md)
+- [Configure the minimum TLS version for an Event Hubs namespace](transport-layer-security-configure-minimum-version.md)
+- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
event-hubs Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-client-version.md
+
+ Title: Configure Transport Layer Security (TLS) for an Event Hubs client application
+
+description: Configure a client application to communicate with Azure Event Hubs using a minimum version of Transport Layer Security (TLS).
+++++ Last updated : 04/25/2022+++
+# Configure Transport Layer Security (TLS) for an Event Hubs client application (Preview)
+
+For security purposes, an Azure Event Hubs namespace may require that clients use a minimum version of Transport Layer Security (TLS) to send requests. Calls to Azure Event Hubs will fail if the client is using a version of TLS that is lower than the minimum required version. For example, if a namespace requires TLS 1.2, then a request sent by a client that is using TLS 1.1 will fail.
+
+This article describes how to configure a client application to use a particular version of TLS. For information about how to configure a minimum required version of TLS for an Azure Event Hubs namespace, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-configure-minimum-version.md).
+
+## Configure the client TLS version
+
+In order for a client to send a request with a particular version of TLS, the operating system must support that version.
+
+The following example shows how to set the client's TLS version to 1.2 from .NET. The .NET Framework used by the client must support TLS 1.2. For more information, see [Support for TLS 1.2](/dotnet/framework/network-programming/tls#support-for-tls-12).
+
+# [.NET](#tab/dotnet)
+
+The following sample shows how to enable TLS 1.2 in a .NET client using the Azure.Messaging.EventHubs client library:
+
+```csharp
+using System;
+using System.Net;
+using Azure.Messaging.EventHubs;
+using Azure.Messaging.EventHubs.Producer;
+
+// Enable TLS 1.2 before connecting to Event Hubs
+ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
+
+// Connection string to your Event Hubs namespace
+string connectionString = "<NAMESPACE CONNECTION STRING>";
+
+// Name of your event hub
+string eventHubName = "<EVENT HUB NAME>";
+
+// The producer client used to publish events to the event hub
+await using var producer = new EventHubProducerClient(connectionString, eventHubName);
+
+// Use the producer client to send a batch of events to the event hub
+using EventDataBatch eventBatch = await producer.CreateBatchAsync();
+var eventData = new EventData("This is an event body");
+
+if (!eventBatch.TryAdd(eventData))
+{
+    throw new Exception("The event could not be added.");
+}
+
+// Send the batch of events to the event hub
+await producer.SendAsync(eventBatch);
+```
+++
+## Verify the TLS version used by a client
+
+To verify that the specified version of TLS was used by the client to send a request, you can use [Fiddler](https://www.telerik.com/fiddler) or a similar tool. Open Fiddler to start capturing client network traffic, then execute one of the examples in the previous section. Look at the Fiddler trace to confirm that the correct version of TLS was used to send the request.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md)
+- [Configure the minimum TLS version for an Event Hubs namespace](transport-layer-security-configure-minimum-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
event-hubs Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-minimum-version.md
+
+ Title: Configure the minimum TLS version for an Event Hubs namespace using ARM
+
+description: Configure an Azure Event Hubs namespace to use a minimum version of Transport Layer Security (TLS).
+++++ Last updated : 04/25/2022+++
+# Configure the minimum TLS version for an Event Hubs namespace using ARM (Preview)
+
+To configure the minimum TLS version for an Event Hubs namespace, set the `MinimumTlsVersion` property. When you create an Event Hubs namespace with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version.
+
+> [!NOTE]
+> Namespaces created using an api-version prior to 2022-01-01-preview will have 1.0 as the value for `MinimumTlsVersion`. This was the previous default behavior and is retained for backward compatibility.
+
+## Create a template to configure the minimum TLS version
+
+To configure the minimum TLS version for an Event Hubs namespace with a template, create a template with the `MinimumTlsVersion` property set to 1.0, 1.1, or 1.2. The following steps describe how to create a template in the Azure portal.
+
+1. In the Azure portal, choose **Create a resource**.
+2. In **Search the Marketplace**, type **custom deployment**, and then press **ENTER**.
+3. Choose **Custom deployment (deploy using custom templates) (preview)**, choose **Create**, and then choose **Build your own template in the editor**.
+4. In the template editor, paste in the following JSON to create a new namespace and set the minimum TLS version to TLS 1.2. Remember to replace the placeholders in angle brackets with your own values.
+
+    ```json
+    {
+      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+      "contentVersion": "1.0.0.0",
+      "parameters": {},
+      "variables": {
+        "eventHubsNamespaceName": "[concat(uniqueString(subscription().subscriptionId), 'tls')]"
+      },
+      "resources": [
+        {
+          "name": "[variables('eventHubsNamespaceName')]",
+          "type": "Microsoft.EventHub/namespaces",
+          "apiVersion": "2022-01-01-preview",
+          "location": "westeurope",
+          "properties": {
+            "minimumTlsVersion": "1.2"
+          },
+          "dependsOn": [],
+          "tags": {}
+        }
+      ]
+    }
+    ```
+
+5. Save the template.
+6. Specify the resource group parameter, and then choose the **Review + create** button to deploy the template and create a namespace with the `MinimumTlsVersion` property configured.
+
+> [!NOTE]
+> After you update the minimum TLS version for the Event Hubs namespace, it may take up to 30 seconds before the change is fully propagated.
+
+Configuring the minimum TLS version requires api-version 2022-01-01-preview or later of the Azure Event Hubs resource provider.
+
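+As a sketch, the saved template can also be deployed with Az PowerShell (the resource group and file names are placeholders):
+
+```azurepowershell
+# Deploy the template that sets minimumTlsVersion to 1.2
+New-AzResourceGroupDeployment -ResourceGroupName '<resource-group>' `
+  -TemplateFile 'eventhubs-min-tls.json'
+```
+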
+## Check the minimum required TLS version for multiple namespaces
+
+To check the minimum required TLS version across a set of Event Hubs namespaces with optimal performance, you can use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph Explorer, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md).
+
+Running the following query in the Resource Graph Explorer returns a list of Event Hubs namespaces and displays the minimum TLS version for each namespace:
+
+```kusto
+resources
+| where type =~ 'Microsoft.EventHub/namespaces'
+| extend minimumTlsVersion = parse_json(properties).minimumTlsVersion
+| project subscriptionId, resourceGroup, name, minimumTlsVersion
+```
+
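+The same query can also be run from Az PowerShell with the Az.ResourceGraph module, for example:
+
+```azurepowershell
+# Run the Resource Graph query and list each namespace with its minimum TLS version
+Search-AzGraph -Query "resources | where type =~ 'Microsoft.EventHub/namespaces' | extend minimumTlsVersion = parse_json(properties).minimumTlsVersion | project subscriptionId, resourceGroup, name, minimumTlsVersion"
+```
+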
+## Test the minimum TLS version from a client
+
+To test that the minimum required TLS version for an Event Hubs namespace forbids calls made with an older version, you can configure a client to use an older version of TLS. For more information about configuring a client to use a specific version of TLS, see [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md).
+
+When a client accesses an Event Hubs namespace using a TLS version that does not meet the minimum TLS version configured for the namespace, Azure Event Hubs returns error code 400 (Bad Request) and a message indicating that the TLS version that was used is not permitted for making requests against this Event Hubs namespace.
+
+> [!NOTE]
+> Due to limitations in the Confluent library, errors coming from an invalid TLS version will not surface when connecting through the Kafka protocol. Instead, a general exception will be shown.
+
+> [!NOTE]
+> When you configure a minimum TLS version for an Event Hubs namespace, that minimum version is enforced at the application layer. Tools that attempt to determine TLS support at the protocol layer may return TLS versions in addition to the minimum required version when run directly against the Event Hubs namespace endpoint.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md)
+- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
event-hubs Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-enforce-minimum-version.md
+
+ Title: Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace
+
+description: Configure an Event Hubs namespace to require a minimum version of Transport Layer Security (TLS) for clients making requests against Azure Event Hubs.
+++++ Last updated : 04/25/2022+++
+# Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace (Preview)
+
+Communication between a client application and an Azure Event Hubs namespace is encrypted using Transport Layer Security (TLS). TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients and services over the Internet. For more information about TLS, see [Transport Layer Security](https://datatracker.ietf.org/wg/tls/about/).
+
+Azure Event Hubs supports choosing a specific TLS version for namespaces. Currently Azure Event Hubs uses TLS 1.2 on public endpoints by default, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
+
+Azure Event Hubs namespaces permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Event Hubs namespace to require that clients send and receive data with a newer version of TLS. If an Event Hubs namespace requires a minimum version of TLS, then any requests made with an older version will fail.
+
+> [!IMPORTANT]
+> If you are using a service that connects to Azure Event Hubs, make sure that the service is using the appropriate version of TLS to send requests to Azure Event Hubs before you set the required minimum version for an Event Hubs namespace.
+
+## Permissions necessary to require a minimum version of TLS
+
+To set the `MinimumTlsVersion` property for the Event Hubs namespace, a user must have permissions to create and manage Event Hubs namespaces. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.EventHub/namespaces/write** or **Microsoft.EventHub/namespaces/\*** action. Built-in roles with this action include:
+
+- The Azure Resource Manager [Owner](../role-based-access-control/built-in-roles.md#owner) role
+- The Azure Resource Manager [Contributor](../role-based-access-control/built-in-roles.md#contributor) role
+- The [Azure Event Hubs Data Owner](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-owner) role
+
+Role assignments must be scoped to the level of the Event Hubs namespace or higher to permit a user to require a minimum version of TLS for the Event Hubs namespace. For more information about role scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md).
+
+Be careful to restrict assignment of these roles only to those who require the ability to create an Event Hubs namespace or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../role-based-access-control/best-practices.md).
+
+> [!NOTE]
+> The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [**Owner**](../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create and manage Event Hubs namespaces. For more information, see [**Classic subscription administrator roles, Azure roles, and Azure AD administrator roles**](../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
+
+## Network considerations
+
+When a client sends a request to an Event Hubs namespace, the client first establishes a connection with the public endpoint of the Event Hubs namespace before processing any requests. The minimum TLS version setting is checked after the connection is established. If the request uses an earlier version of TLS than the setting specifies, the connection will still succeed, but the request will eventually fail.
+
+> [!NOTE]
+> Due to limitations in the Confluent library, errors coming from an invalid TLS version will not surface when connecting through the Kafka protocol. Instead, a general exception will be shown.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Configure the minimum TLS version for an Event Hubs namespace](transport-layer-security-configure-minimum-version.md)
+- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
The solution is illustrated in the following diagram. As illustrated, you can ar
[![10]][10]
+> [!IMPORTANT]
+> When one or multiple ExpressRoute circuits are connected to multiple virtual networks, virtual network to virtual network traffic can route via ExpressRoute. However, this is not recommended. To enable virtual network to virtual network connectivity, [configure virtual network peering](https://docs.microsoft.com/azure/virtual-network/virtual-network-manage-peering).
+>
+ ## Next steps
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **Service provider** | **Microsoft Azure** | **Microsoft 365** | **Locations** | | | | | |
-| **[AARNet](https://www.aarnet.edu.au/network-and-services/connectivity-services/azure-expressroute)** |Supported |Supported |Melbourne, Sydney |
+| **[AARNet](https://www.aarnet.edu.au/network-and-services/connectivity-services/azure-expressroute)** |Supported |Supported | Melbourne, Sydney |
| **[Airtel](https://www.airtel.in/business/#/)** | Supported | Supported | Chennai2, Mumbai2 | | **[AIS](https://business.ais.co.th/solution/en/azure-expressroute.html)** | Supported | Supported | Bangkok |
-| **[Aryaka Networks](https://www.aryaka.com/)** |Supported |Supported |Amsterdam, Chicago, Dallas, Hong Kong SAR, Sao Paulo, Seattle, Silicon Valley, Singapore, Tokyo, Washington DC |
+| **[Aryaka Networks](https://www.aryaka.com/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Hong Kong SAR, Sao Paulo, Seattle, Silicon Valley, Singapore, Tokyo, Washington DC |
| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** |Supported |Supported | Campinas, Sao Paulo, Sao Paulo2 |
-| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |Supported |Supported |Amsterdam, Chicago, Dallas, Frankfurt, London, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
+| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |Supported |Supported | Amsterdam, Chicago, Dallas, Frankfurt, London, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
| **[AT TOKYO](https://www.attokyo.com/connectivity/azure.html)** | Supported | Supported | Osaka, Tokyo2 | | **[BICS](https://bics.com/bics-solutions-suite/cloud-connect/bics-cloud-connect-an-official-microsoft-azure-technology-partner/)** | Supported | Supported | Amsterdam2, London2 |
-| **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka, Tokyo |
-| **[BCX](https://www.bcx.co.za/solutions/connectivity/data-networks)** |Supported |Supported |Cape Town, Johannesburg|
-| **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** |Supported |Supported |Montreal, Toronto, Quebec City, Vancouver |
-| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** |Supported |Supported |Amsterdam, Amsterdam2, Chicago, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Newport(Wales), Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
-| **[BSNL](https://www.bsnl.co.in/opencms/bsnl/BSNL/services/enterprises/cloudway.html)** |Supported |Supported |Chennai, Mumbai |
-| **[C3ntro](https://www.c3ntro.com/)** |Supported |Supported |Miami |
+| **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka, Tokyo, Tokyo2 |
+| **[BCX](https://www.bcx.co.za/solutions/connectivity/data-networks)** |Supported |Supported | Cape Town, Johannesburg|
+| **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** |Supported |Supported | Montreal, Toronto, Quebec City, Vancouver |
+| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** |Supported |Supported | Amsterdam, Amsterdam2, Chicago, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Newport(Wales), Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
+| **[BSNL](https://www.bsnl.co.in/opencms/bsnl/BSNL/services/enterprises/cloudway.html)** |Supported |Supported | Chennai, Mumbai |
+| **[C3ntro](https://www.c3ntro.com/)** |Supported |Supported | Miami |
| **CDC** | Supported | Supported | Canberra, Canberra2 |
-| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |Amsterdam2, Bogota, Chicago, Dublin, Frankfurt, Hong Kong, Las Vegas, London, London2, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Singapore2, Tokyo, Toronto, Washington DC, Washington DC2 |
+| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported | Amsterdam2, Bogota, Chicago, Dallas, Dublin, Frankfurt, Hong Kong, Las Vegas, London, London2, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Singapore2, Tokyo, Toronto, Washington DC, Washington DC2 |
| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported | Hong Kong, Taipei | | **China Mobile International** |Supported |Supported | Hong Kong, Hong Kong2, Singapore | | **China Telecom Global** |Supported |Supported | Hong Kong, Hong Kong2 |
The following table shows locations by service provider. If you want to view ava
| **[Chunghwa Telecom](https://www.cht.com.tw/en/home/cht/about-cht/products-and-services/International/Cloud-Service)** |Supported |Supported | Taipei | | **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported | Miami | | **[Cologix](https://www.cologix.com/hyperscale/microsoft-azure/)** |Supported |Supported | Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC |
-| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Geneva, Hong Kong, London, London2, Marseille, Milan, Munich, Newport, Osaka, Paris, Silicon Valley, Silicon Valley2, Singapore2, Tokyo, Tokyo2, Washington DC, Zurich |
+| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Geneva, Hong Kong, London, London2, Marseille, Milan, Munich, Newport, Osaka, Paris, Seoul, Silicon Valley, Singapore2, Tokyo, Tokyo2, Washington DC, Zurich |
| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** |Supported |Supported | Chicago, Silicon Valley, Washington DC | | **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported | Chicago, Chicago2, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 | | **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** |Supported |Supported | Dallas, Phoenix, Silicon Valley, Washington DC |
-| **[DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange/find-a-cloud-service/detail/microsoft-azure)** | Supported |Supported | Amsterdam2, Chennai, Dubai2, Frankfurt, Frankfurt2, Madrid, Marseille, Mumbai, Munich, New York, Singapore2 |
+| **[DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange/find-a-cloud-service/detail/microsoft-azure)** | Supported |Supported | Amsterdam2, Chennai, Dallas, Dubai2, Frankfurt, Frankfurt2, Madrid, Marseille, Mumbai, Munich, New York, Singapore2 |
| **[Devoli](https://devoli.com/expressroute)** | Supported |Supported | Auckland, Melbourne, Sydney |
| **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt |
| **[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-and-infrastructure/manage-it-efficiently/managed-azure/cloudconnect-for-azure)** | Supported |Supported | Frankfurt2 |
| **du datamena** |Supported |Supported | Dubai2 |
-| **[eir](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported |Dublin|
+| **[eir](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin|
| **[Epsilon Global Communications](https://www.epsilontel.com/solutions/direct-cloud-connect)** |Supported |Supported | Singapore, Singapore2 |
| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
-| **Etisalat UAE** |Supported |Supported |Dubai|
+| **Etisalat UAE** |Supported |Supported | Dubai |
| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, London |
| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported | Taipei |
| **[Fastweb](https://www.fastweb.it/grandi-aziende/cloud/scheda-prodotto/fastcloud-interconnect/)** | Supported |Supported | Milan |
-| **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** |Supported |Supported | Montreal, Toronto2 |
+| **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** |Supported |Supported | Montreal, Quebec City, Toronto2 |
| **[GBI](https://www.gbiinc.com/microsoft-azure/)** |Supported |Supported | Dubai2, Frankfurt |
| **[GÉANT](https://www.geant.org/Networks)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, Marseille |
| **[GlobalConnect](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported |Supported | Oslo, Stavanger |
-| **GTT** |Supported |Supported |London2 |
+| **[GlobalConnect DK](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported |Supported | Amsterdam |
+| **GTT** |Supported |Supported | Amsterdam, London2, Washington DC |
| **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported| Supported | Chennai, Mumbai |
| **[iAdvantage](https://www.scx.sunevision.com/)** | Supported | Supported | Hong Kong2 |
| **Intelsat** | Supported | Supported | London2, Washington DC2 |
| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Frankfurt, Geneva, Hong Kong, London, New York, Paris, Sao Paulo, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich |
-| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported |Chicago, Dallas, Silicon Valley, Washington DC |
-| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** |Supported |Supported |Osaka, Tokyo |
-| **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** |Supported |Supported |Cape Town, Johannesburg, London |
-| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Copenhagen, Dublin, Dublin2, Frankfurt, London, London2, Madrid, Marseille, Paris, Zurich |
+| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported | Chicago, Dallas, Silicon Valley, Washington DC |
+| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** |Supported |Supported | Osaka, Tokyo, Tokyo2 |
+| **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** |Supported |Supported | Cape Town, Johannesburg, London |
+| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Copenhagen, Dublin, Dublin2, Frankfurt, London, London2, Madrid, Marseille, Paris, Zurich |
| **[IRIDEOS](https://irideos.it/)** |Supported |Supported | Milan |
| **Iron Mountain** | Supported |Supported | Washington DC |
| **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**|Supported |Supported | Amsterdam, London2, Silicon Valley, Tokyo2, Toronto, Washington DC |
-| **Jaguar Network** |Supported |Supported |Marseille, Paris |
-| **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** |Supported |Supported | London, Newport(Wales) |
+| **Jaguar Network** |Supported |Supported | Marseille, Paris |
+| **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** |Supported |Supported | London, London2, Newport(Wales) |
| **[KINX](https://www.kinx.net/service/cloudhub/ms-expressroute/?lang=en)** |Supported |Supported | Seoul |
| **[Kordia](https://www.kordia.co.nz/cloudconnect)** | Supported |Supported | Auckland, Sydney |
| **[KPN](https://www.kpn.com/zakelijk/cloud/connect.htm)** | Supported | Supported | Amsterdam |
The following table shows locations by service provider. If you want to view ava
| **LG CNS** |Supported |Supported | Busan, Seoul |
| **[Liquid Telecom](https://www.liquidtelecom.com/products-and-services/cloud.html)** |Supported |Supported | Cape Town, Johannesburg |
| **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported | Seoul |
-| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported | Amsterdam, Atlanta, Auckland, Chennai, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2 Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
-| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported |London |
-| **MTN Global Connect** |Supported |Supported |Cape Town, Johannesburg|
-| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** |Supported |Supported |Bangkok |
-| **[Neutrona Networks](https://flo.net/)** |Supported |Supported |Dallas, Los Angeles, Miami, Sao Paulo, Washington DC |
+| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported | Amsterdam, Atlanta, Auckland, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2, Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
+| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported | London |
+| **MTN Global Connect** |Supported |Supported | Cape Town, Johannesburg|
+| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** |Supported |Supported | Bangkok |
+| **[Neutrona Networks](https://flo.net/)** |Supported |Supported | Dallas, Los Angeles, Miami, Sao Paulo, Washington DC |
| **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** |Supported |Supported | Newport(Wales) |
| **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** |Supported |Supported | Melbourne, Perth, Sydney, Sydney2 |
| **NL-IX** |Supported |Supported | Amsterdam2 |
-| **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** |Supported |Supported | Amsterdam2 |
-| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** |Supported |Supported | Amsterdam, Hong Kong SAR, Jakarta, London, Los Angeles, Osaka, Singapore, Sydney, Tokyo, Washington DC |
+| **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** |Supported |Supported | Amsterdam2, Madrid |
+| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** |Supported |Supported | Amsterdam, Hong Kong SAR, London, Los Angeles, Osaka, Singapore, Sydney, Tokyo, Washington DC |
+| **NTT Communications India Network Services Pvt Ltd** |Supported |Supported | Mumbai |
+| **NTT Communications - Flexible InterConnect** |Supported |Supported | Jakarta, Osaka, Singapore2, Tokyo, Tokyo2 |
| **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** |Supported |Supported | Tokyo |
| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** |Supported |Supported | Amsterdam2, Berlin, Frankfurt, London2 |
| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** |Supported |Supported | Osaka |
-| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | Marseille |
+| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | London2, Marseille |
| **[Optus](https://www.optus.com.au/enterprise/)** |Supported |Supported | Melbourne, Sydney |
-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam, Amsterdam2, Dallas, Dubai2, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Mumbai2, Melbourne, Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
+| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam, Amsterdam2, Chicago, Dallas, Dubai2, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Mumbai2, Melbourne, Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
| **[Orixcom](https://www.orixcom.com/cloud-solutions/)** | Supported | Supported | Dubai2 |
-| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported | Chicago, Dallas, Denver, Las Vegas, Silicon Valley, Washington DC |
+| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported | Amsterdam, Chicago, Dallas, Denver, Las Vegas, Silicon Valley, Washington DC |
| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported | Chicago, Hong Kong, Hong Kong2, London, Singapore2, Tokyo2 |
| **[REANNZ](https://www.reannz.co.nz/products-and-services/cloud-connect/)** | Supported | Supported | Auckland |
| **[Reliance Jio](https://www.jio.com/business/jio-cloud-connect)** | Supported | Supported | Mumbai |
| **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan |
-| **[Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms)** |Supported |Supported |Seoul |
+| **[Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms)** |Supported |Supported | Seoul |
| **[SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)** | Supported |Supported | London2, Washington DC |
-| **[SIFY](http://telecom.sify.com/azure-expressroute.html)** |Supported |Supported |Chennai, Mumbai2 |
+| **[SIFY](http://telecom.sify.com/azure-expressroute.html)** |Supported |Supported | Chennai, Mumbai2 |
| **[SingTel](https://www.singtel.com/about-us/news-releases/singtel-provide-secure-private-access-microsoft-azure-public-cloud)** |Supported |Supported | Hong Kong2, Singapore, Singapore2 |
| **[SK Telecom](http://b2b.tworld.co.kr/bizts/solution/solutionTemplate.bs?solutionId=0085)** |Supported |Supported | Seoul |
| **[Softbank](https://www.softbank.jp/biz/cloud/cloud_access/direct_access_for_az/)** |Supported |Supported | Osaka, Tokyo |
-| **[Sohonet](https://www.sohonet.com/fastlane/)** |Supported |Supported |London2 |
-| **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** |Supported |Supported |Auckland, Sydney |
+| **[Sohonet](https://www.sohonet.com/fastlane/)** |Supported |Supported | Los Angeles, London2 |
+| **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** |Supported |Supported | Auckland, Sydney |
| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva, Zurich |
| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** |Supported |Supported | Amsterdam, Chennai, Hong Kong SAR, London, Mumbai, Pune, Sao Paulo, Silicon Valley, Singapore, Washington DC |
| **[Telefonica](https://www.telefonica.com/es/home)** |Supported |Supported | Amsterdam, Sao Paulo, Madrid |
| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** |Supported |Supported | London, London2, Singapore2, Tokyo |
| **Telenor** |Supported |Supported | Amsterdam, London, Oslo |
-| **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported | Amsterdam, Chicago, Dallas, Frankfurt, Hong Kong, London, Oslo, Paris, Silicon Valley, Stockholm, Washington DC |
-| **[Telin](https://www.telin.net/product/data-connectivity/telin-cloud-exchange)** | Supported | Supported |Jakarta |
+| **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported | Amsterdam, Chicago, Dallas, Frankfurt, Hong Kong, London, Oslo, Paris, Seattle, Silicon Valley, Stockholm, Washington DC |
+| **[Telin](https://www.telin.net/product/data-connectivity/telin-cloud-exchange)** | Supported | Supported | Jakarta |
| **Telmex Uninet**| Supported | Supported | Dallas |
| **[Telstra Corporation](https://www.telstra.com.au/business-enterprise/network-services/networks/cloud-direct-connect/)** |Supported |Supported | Melbourne, Singapore, Sydney |
-| **[Telus](https://www.telus.com)** |Supported |Supported | Montreal, Seattle, Quebec City, Toronto, Vancouver |
+| **[Telus](https://www.telus.com)** |Supported |Supported | Montreal, Quebec City, Seattle, Toronto, Vancouver |
| **[Teraco](https://www.teraco.co.za/services/africa-cloud-exchange/)** |Supported |Supported | Cape Town, Johannesburg |
| **[TIME dotCom](https://www.time.com.my/enterprise/connectivity/direct-cloud)** | Supported | Supported | Kuala Lumpur |
| **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | Supported | Supported | Osaka, Tokyo2 |
| **TPG Telecom**| Supported | Supported | Melbourne, Sydney |
| **[Transtelco](https://transtelco.net/enterprise-services/)** |Supported |Supported | Dallas, Queretaro(Mexico)|
-| **[T-Mobile](https://www.t-mobile.com/business/solutions/networking/cloud-networking )** |Supported |Supported |Chicago, Silicon Valley, Washington DC |
+| **[T-Mobile/Sprint](https://www.t-mobile.com/business/solutions/networking/cloud-networking )** |Supported |Supported | Chicago, Silicon Valley, Washington DC |
| **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Supported | Frankfurt |
-| **[UOLDIVEO](https://www.uoldiveo.com.br/)** |Supported |Supported |Sao Paulo |
+| **[UOLDIVEO](https://www.uoldiveo.com.br/)** |Supported |Supported | Sao Paulo |
| **[UIH](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)** | Supported | Supported | Bangkok |
| **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Hong Kong SAR, London, Mumbai, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
| **[Viasat](http://www.directcloud.viasatbusiness.com/)** | Supported | Supported | Washington DC2 |
| **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | Supported | Supported | Auckland, Sydney |
-| **Vodacom** |Supported |Supported |Cape Town, Johannesburg|
-| **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** |Supported |Supported | Amsterdam2, London, Singapore |
-| **[Vodafone Idea](https://www.vodafone.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Mumbai2 |
+| **Vodacom** |Supported |Supported | Cape Town, Johannesburg|
+| **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** |Supported |Supported | Amsterdam2, London, Milan, Singapore |
+| **[Vodafone Idea](https://www.vodafone.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Chennai, Mumbai2 |
| **XL Axiata** | Supported | Supported | Jakarta |
| **[Zayo](https://www.zayo.com/solutions/industries/cloud-connectivity/microsoft-expressroute)** |Supported |Supported | Amsterdam, Chicago, Dallas, Denver, Dublin, Hong Kong, London, London2, Los Angeles, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Toronto, Vancouver, Washington DC, Washington DC2, Zurich|
hdinsight Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/sample-script.md
+
+ Title: Sample script for Azure HDInsight when cluster creation fails
+description: Sample script to run when Azure HDInsight cluster creation fails with DomainNotFound error.
+ Last updated: 04/25/2022
+# Sample script
+
+Run this script when Azure HDInsight cluster creation fails with a **DomainNotFound** error.
+
+```bash
+domainName=$1
+userName=$2
+
+if [[ -z "$domainName" ]]; then
+ echo "Domain name is a required parameter"
+ exit 1
+fi
+
+if [[ -z "$userName" ]]; then
+ echo "User name is a required parameter"
+ exit 1
+fi
+
+echo -n Password:
+read -s password
+echo
+
+
+echo "Domain join $domainName"
+
+ping -q -c 1 $domainName
+pingStatus=$?
+
+if [ $pingStatus -eq 0 ]; then
+ echo "Ping for domain $domainName succeeded"
+else
+ echo "Domain controller for $domainName was not resolvable"
+ exit 1
+fi
+
+shortDomainName="${domainName%%.*}"
+shortUserName="${userName%%@*}"
+sambaConfFileName="/etc/samba/smb.conf"
+
+echo "Preparing the $sambaConfFileName file"
+cp $sambaConfFileName "$sambaConfFileName.bak"
+echo "[global]" > $sambaConfFileName
+echo " security = ads" >> $sambaConfFileName
+echo " realm = ${domainName^^}" >> $sambaConfFileName
+echo "# If the system doesn't find the domain controller automatically, you may need the following line" >> $sambaConfFileName
+echo " password server = *" >> $sambaConfFileName
+echo "# note that workgroup is the 'short' domain name" >> $sambaConfFileName
+echo " workgroup = ${shortDomainName^^}" >> $sambaConfFileName
+echo "# winbind separator = +" >> $sambaConfFileName
+echo " winbind enum users = yes" >> $sambaConfFileName
+echo " winbind enum groups = yes" >> $sambaConfFileName
+echo " template homedir = /home/%D/%U" >> $sambaConfFileName
+echo " template shell = /bin/bash" >> $sambaConfFileName
+echo " client use spnego = yes" >> $sambaConfFileName
+echo " client ntlmv2 auth = yes" >> $sambaConfFileName
+echo " encrypt passwords = yes" >> $sambaConfFileName
+echo " restrict anonymous = 2" >> $sambaConfFileName
+echo " log level = 2" >> $sambaConfFileName
+echo " log file = /var/log/samba/sambadebug.log.%m" >> $sambaConfFileName
+echo " debug timestamp = yes" >> $sambaConfFileName
+echo " max log size = 50" >> $sambaConfFileName
+echo " winbind use default domain = yes" >> $sambaConfFileName
+echo " nt pipe support = no" >> $sambaConfFileName
+echo >> $sambaConfFileName
+echo "# Placeholder for domains" >> $sambaConfFileName
+echo "idmap config ${shortDomainName^^} : backend = rid" >> $sambaConfFileName
+echo "idmap config ${shortDomainName^^} : schema_mode = rid" >> $sambaConfFileName
+echo "idmap config ${shortDomainName^^} : range = 100000-1100000" >> $sambaConfFileName
+echo "idmap config ${shortDomainName^^} : base_rid = 0" >> $sambaConfFileName
+echo "idmap config * : backend = tdb" >> $sambaConfFileName
+echo "idmap config * : schema_mode = rid" >> $sambaConfFileName
+echo "idmap config * : range = 10000-99999" >> $sambaConfFileName
+echo "idmap config * : base_rid = 0" >> $sambaConfFileName
+
+export KRB5_TRACE=/tmp/krb.log
+reformattedUserName="$shortUserName@${domainName^^}"
+# Join the domain; the exit status is checked below
+net ads join -w $domainName -U $reformattedUserName%$password
+
+netJoinResult=$?
+
+if [ $netJoinResult -ne 0 ]
+then
+ echo "Net join failed with result: $netJoinResult"
+ exit 1
+fi
+
+echo "Net join succeeded"
+
+net ads info
+```
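+A minimal invocation sketch, assuming the script above is saved as `domain-join-check.sh` (a hypothetical file name) and run as root; the script prompts for the password itself:
+
+```bash
+# The domain name and user principal name below are placeholders for your environment.
+sudo bash domain-join-check.sh contoso.com hdiadmin@contoso.com
+```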
hdinsight Troubleshoot Domainnotfound https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/troubleshoot-domainnotfound.md
Title: Cluster creation fails with DomainNotFound error in Azure HDInsight
description: Troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters
Previously updated: 01/23/2020
Last updated: 04/26/2022
# Scenario: Cluster creation fails with DomainNotFound error in Azure HDInsight
When the domain joined clusters are deployed, HDI creates an internal user name
* Deploy an Ubuntu VM in the same subnet and domain join the machine
* SSH into the machine
* sudo su
- * Run the script with username and password
+ * Run the [script](./sample-script.md) with username and password
* The script will ping the domain, create the required configuration files, and then join the domain. If it succeeds, your DNS settings are good.

## Next steps
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
Using the REST API:
You can use the REST API to list and delete API tokens in an application.

> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.1-previewdataplane/api-tokens) includes support for the new [organizations feature](howto-create-organizations.md).
+> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/api-tokens) includes support for the new [organizations feature](howto-create-organizations.md).
## Use a bearer token
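One way to acquire a bearer token is with the Azure CLI; a minimal sketch (the resource URI shown is the IoT Central application endpoint, and the resulting `accessToken` is then passed in the request's authorization header as `Authorization: Bearer <accessToken>`):

```azurecli
az account get-access-token --resource https://apps.azureiotcentral.com
```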
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md
Every IoT Central REST API call requires an authorization header. To learn more,
For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).

> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.1-previewdataplane/devices) includes support for the new [organizations feature](howto-create-organizations.md).
+> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/devices) includes support for the new [organizations feature](howto-create-organizations.md).
## Components and modules
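For orientation, a component-scoped call looks like the following sketch (assuming the GA API version; `{componentName}` and `{commandName}` are placeholders for names defined in the device template):

```http
POST https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/components/{componentName}/commands/{commandName}?api-version=1.0
```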
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md
To reassign an organization to a new parent, select **Edit** and choose a new pa
To delete an organization, you must first delete, or move to another organization, any associated items such as dashboards, devices, users, device groups, and jobs.

> [!TIP]
-> You can also use the REST API to [create and manage organizations](/rest/api/iotcentral/1.1-previewdataplane/organizations).
+> You can also use the REST API to [create and manage organizations](/rest/api/iotcentral/1.2-previewdataplane/organizations).
## Assign devices
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
Each data export definition can send data to one or more destinations. Create th
Use the following request to create or update a destination definition: ```http
-PUT https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.1-preview
+PUT https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.2-preview
```

* destinationId - Unique ID for the destination.
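For illustration, a request body sketch for an Azure Blob Storage destination (all values are placeholders; `blobstorage@v1` is the destination type used for Blob Storage):

```json
{
    "displayName": "Blob Storage Destination",
    "type": "blobstorage@v1",
    "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net",
    "containerName": "central-export"
}
```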
The response to this request looks like the following example:
Use the following request to retrieve details of a destination from your application:

```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.1-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.2-preview
```
The response to this request looks like the following example:
Use the following request to retrieve a list of destinations from your application:

```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/destinations?api-version=1.1-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/destinations?api-version=1.2-preview
```
The response to this request looks like the following example:
### Patch a destination

```http
-PATCH https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.1-preview
+PATCH https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.2-preview
```

You can use this to perform an incremental update to an export. The sample request body looks like the following example, which updates the `displayName` of a destination:
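That body is elided in this digest; a minimal sketch consistent with the sentence above (only the field being changed is sent):

```json
{
    "displayName": "Updated destination name"
}
```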
The response to this request looks like the following example:
Use the following request to delete a destination:

```http
-DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.1-preview
+DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.2-preview
```

### Create or update an export definition
DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destination
Use the following request to create or update a data export definition:

```http
-PUT https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=1.1-preview
+PUT https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=1.2-preview
```

The following example shows a request body that creates an export definition for device telemetry:
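The example body itself is elided here; a representative sketch (assuming a telemetry export wired to an existing destination by ID, with placeholder values):

```json
{
    "displayName": "Device telemetry export",
    "enabled": true,
    "source": "telemetry",
    "destinations": [
        {
            "id": "dest-001"
        }
    ]
}
```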
The response to this request looks like the following example:
Use the following request to retrieve details of an export definition from your application:

```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=1.1-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=1.2-preview
```
The response to this request looks like the following example:
Use the following request to retrieve a list of export definitions from your application:

```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/exports?api-version=1.1-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/exports?api-version=1.2-preview
```
The response to this request looks like the following example:
### Patch an export definition

```http
-PATCH https://{subdomain}.{baseDomain}/dataExport/exports/{exportId}?api-version=1.1-preview
+PATCH https://{subdomain}.{baseDomain}/dataExport/exports/{exportId}?api-version=1.2-preview
```

You can use this to perform an incremental update to an export. The sample request body looks like the following example, which updates the `enrichments` of an export:
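Again, a minimal sketch of such a PATCH body (an enrichment adds a key and value to every exported message; the constant-value form shown here is one of the supported shapes, and the names are placeholders):

```json
{
    "enrichments": {
        "Plant": {
            "value": "Building 43"
        }
    }
}
```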
The response to this request looks like the following example:
Use the following request to delete an export definition:

```http
-DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.1-preview
+DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.2-preview
```

## Next steps
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
The response to this request looks like the following example:
You can use ODATA filters to filter the results returned by the list device templates API.

> [!NOTE]
-> Currently, ODATA support is only available for `api-version=1.1-preview`.
+> Currently, ODATA support is only available for `api-version=1.2-preview`.
### $top
Use the **$top** filter to set the result size. The maximum returned result size
Use the following request to retrieve the top 10 device templates from your application:

```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.1-preview&$top=10
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.2-preview&$top=10
```
The response to this request looks like the following example:
}, ... ],
- "nextLink": "https://custom-12qmyn6sm0x.azureiotcentral.com/api/deviceTemplates?api-version=1.1-preview&%24top=1&%24skiptoken=%7B%22token%22%3A%22%2BRID%3A%7EJWYqAKZQKp20qCoAAAAACA%3D%3D%23RT%3A1%23TRC%3A1%23ISV%3A2%23IEO%3A65551%23QCF%3A4%22%2C%22range%22%3A%7B%22min%22%3A%2205C1DFFFFFFFFC%22%2C%22max%22%3A%22FF%22%7D%7D"
+ "nextLink": "https://custom-12qmyn6sm0x.azureiotcentral.com/api/deviceTemplates?api-version=1.2-preview&%24top=1&%24skiptoken=%7B%22token%22%3A%22%2BRID%3A%7EJWYqAKZQKp20qCoAAAAACA%3D%3D%23RT%3A1%23TRC%3A1%23ISV%3A2%23IEO%3A65551%23QCF%3A4%22%2C%22range%22%3A%7B%22min%22%3A%2205C1DFFFFFFFFC%22%2C%22max%22%3A%22FF%22%7D%7D"
}
```
$filter=contains(displayName, 'template1') eq false
The following example shows how to retrieve all the device templates where the display name contains the string `thermostat`:

```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.1-preview&$filter=contains(displayName, 'thermostat')
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.2-preview&$filter=contains(displayName, 'thermostat')
```

The response to this request looks like the following example:
$orderby=displayName desc
The following example shows how to retrieve all the device templates where the result is sorted by `displayName`:

```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.1-preview&$orderby=displayName
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.2-preview&$orderby=displayName
```

The response to this request looks like the following example:
You can also combine two or more filters.
The following example shows how to retrieve the top 2 device templates where the display name contains the string `thermostat`.

```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.1-preview&$filter=contains(displayName, 'thermostat')&$top=2
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.2-preview&$filter=contains(displayName, 'thermostat')&$top=2
```

The response to this request looks like the following example:
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
The response to this request looks like the following example:
You can use ODATA filters to filter the results returned by the list devices API.

> [!NOTE]
-> Currently, ODATA support is only available for `api-version=1.1-preview`
+> Currently, ODATA support is only available for `api-version=1.2-preview`
### $top
Use the **$top** filter to set the result size. The maximum returned result size is 100
Use the following request to retrieve the top 10 devices from your application:

```http
-GET https://{subdomain}.{baseDomain}/api/devices?api-version=1.1-preview&$top=10
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=1.2-preview&$top=10
```
The response to this request looks like the following example:
}, ... ],
- "nextLink": "https://custom-12qmyn6sm0x.azureiotcentral.com/api/devices?api-version=1.1-preview&%24top=1&%24skiptoken=%257B%2522token%2522%253A%2522%252BRID%253A%7EJWYqAOis7THQbBQAAAAAAg%253D%253D%2523RT%253A1%2523TRC%253A1%2523ISV%253A2%2523IEO%253A65551%2523QCF%253A4%2522%252C%2522range%2522%253A%257B%2522min%2522%253A%2522%2522%252C%2522max%2522%253A%252205C1D7F7591D44%2522%257D%257D"
+ "nextLink": "https://custom-12qmyn6sm0x.azureiotcentral.com/api/devices?api-version=1.2-preview&%24top=1&%24skiptoken=%257B%2522token%2522%253A%2522%252BRID%253A%7EJWYqAOis7THQbBQAAAAAAg%253D%253D%2523RT%253A1%2523TRC%253A1%2523ISV%253A2%2523IEO%253A65551%2523QCF%253A4%2522%252C%2522range%2522%253A%257B%2522min%2522%253A%2522%2522%252C%2522max%2522%253A%252205C1D7F7591D44%2522%257D%257D"
}
```
$filter=indexof(displayName, 'device1') ge 0
The following example shows how to retrieve all the devices where the display name contains the string `thermostat`:

```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.1-preview&$filter=index(displayName, 'thermostat')
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=1.2-preview&$filter=indexof(displayName, 'thermostat') ge 0
```

The response to this request looks like the following example:
$orderby=displayName desc
The following example shows how to retrieve all the devices where the result is sorted by `displayName`:

```http
-GET https://{subdomain}.{baseDomain}/api/devices?api-version=1.1-preview&$orderby=displayName
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=1.2-preview&$orderby=displayName
```

The response to this request looks like the following example:
You can also combine two or more filters.
The following example shows how to retrieve the top 2 devices where the display name contains the string `thermostat`.

```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.1-preview&$filter=contains(displayName, 'thermostat')&$top=2
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=1.2-preview&$filter=contains(displayName, 'thermostat')&$top=2
```

The response to this request looks like the following example:
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
The IoT Central REST API lets you develop client applications that integrate wit
- Stop, resume, and rerun jobs in your application.

> [!IMPORTANT]
-> The jobs API is currently in preview. All The REST API calls described in this article should include `?api-version=preview`.
+> The jobs API is currently in preview. All the REST API calls described in this article should include `?api-version=1.2-preview`.
This article describes how to use the `/jobs/{job_id}` API to control devices in bulk. You can also control devices individually.
Every IoT Central REST API call requires an authorization header. To learn more,
For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).

> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.1-previewdataplane/jobs) includes support for the new [organizations feature](howto-create-organizations.md).
+> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/jobs) includes support for the new [organizations feature](howto-create-organizations.md).
To learn how to create and manage jobs in the UI, see [Manage devices in bulk in your Azure IoT Central application](howto-manage-devices-in-bulk.md).
The following table describes the fields in the previous JSON snippet:
Use the following request to retrieve the list of jobs in your application:

```http
-GET https://{your app subdomain}.azureiotcentral.com/api/jobs?api-version=preview
+GET https://{your app subdomain}.azureiotcentral.com/api/jobs?api-version=1.2-preview
```
The response to this request looks like the following example:
Use the following request to retrieve an individual job by ID:

```http
-GET https://{your app subdomain}.azureiotcentral.com/api/jobs/job-004?api-version=preview
+GET https://{your app subdomain}.azureiotcentral.com/api/jobs/job-004?api-version=1.2-preview
```
The response to this request looks like the following example:
Use the following request to retrieve the details of the devices in a job:

```http
-GET https://{your app subdomain}.azureiotcentral.com/api/jobs/job-004/devices?api-version=preview
+GET https://{your app subdomain}.azureiotcentral.com/api/jobs/job-004/devices?api-version=1.2-preview
```
The response to this request looks like the following example:
Use the following request to create and run a job:

```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006?api-version=preview
+PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006?api-version=1.2-preview
+```
+
+```json
{ "displayName": "Set target temperature", "description": "Set target temperature device property",
The response to this request looks like the following example. The initial job s
Use the following request to stop a running job:

```http
-POST https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/stop?api-version=preview
+POST https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/stop?api-version=1.2-preview
```
If the request succeeds, it returns a `204 - No Content` response.
Use the following request to resume a stopped job:

```http
-POST https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/resume?api-version=preview
+POST https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/resume?api-version=1.2-preview
```
If the request succeeds, it returns a `204 - No Content` response.
Use the following command to rerun an existing job on any failed devices:

```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/rerun/rerun-001?api-version=preview
+PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/rerun/rerun-001?api-version=1.2-preview
```

## Next steps
iot-central Howto Manage Organizations With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-organizations-with-rest-api.md
The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to manage organizations in your IoT Central application.

> [!TIP]
-> The [organizations feature](howto-create-organizations.md) is currently available in [preview API](/rest/api/iotcentral/1.1-previewdataplane/users).
+> The [organizations feature](howto-create-organizations.md) is currently available in [preview API](/rest/api/iotcentral/1.2-previewdataplane/users).
Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
The IoT Central REST API lets you:
The REST API lets you create organizations in your IoT Central application. Use the following request to create an organization in your application:

```http
-PUT https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.1-preview
+PUT https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.2-preview
```

* organizationId - Unique ID of the organization
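A minimal request body sketch (assuming a top-level organization with a placeholder name; add a `parent` property to nest it under an existing organization):

```json
{
    "displayName": "Contoso Seattle"
}
```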
The response to this request looks like the following example:
Use the following request to retrieve details of an individual organization from your application:

```http
-GET https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.1-preview
+GET https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.2-preview
```
The response to this request looks like the following example:
Use the following request to update details of an organization in your application:

```http
-PATCH https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.1-preview
+PATCH https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.2-preview
```

The following example shows a request body that updates an organization.
The response to this request looks like the following example:
Use the following request to retrieve a list of organizations from your application:

```http
-GET https://{your app subdomain}.azureiotcentral.com/api/organizations?api-version=1.1-preview
+GET https://{your app subdomain}.azureiotcentral.com/api/organizations?api-version=1.2-preview
```
The response to this request looks like the following example.
Use the following request to delete an organization:

```http
-DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=1.1-preview
+DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=1.2-preview
```

## Next steps
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
Every IoT Central REST API call requires an authorization header. To learn more,
For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).

> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.1-previewdataplane/users) includes support for the new [organizations feature](howto-create-organizations.md).
+> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/users) includes support for the new [organizations feature](howto-create-organizations.md).
## Manage roles
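For orientation, listing the roles in an application looks like the following sketch (assuming the GA API version):

```http
GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=1.0
```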
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
Use the following request to run a query:

```http
-POST https://{your app subdomain}.azureiotcentral.com/api/query?api-version=1.1-preview
+POST https://{your app subdomain}.azureiotcentral.com/api/query?api-version=1.2-preview
```

The query is in the request body and looks like the following example:

```json
{
- "query": "SELECT $id, $ts, temperature, humidity FROM urn:modelDefinition:fupmoiu28b:ymju9efv9 WHERE WITHIN_WINDOW(P1D)"
+ "query": "SELECT $id, $ts, temperature, humidity FROM dtmi:azurertos:devkit:hlby5jgib2o WHERE WITHIN_WINDOW(P1D)"
}
```
-The `urn:modelDefinition:fupmoiu28b:ymju9efv9` value in the `FROM` clause is a *device template ID*. To find a device template ID, navigate to the **Devices** page in your IoT Central application and hover over a device that uses the template. The card includes the device template ID:
+The `dtmi:azurertos:devkit:hlby5jgib2o` value in the `FROM` clause is a *device template ID*. To find a device template ID, navigate to the **Devices** page in your IoT Central application and hover over a device that uses the template. The card includes the device template ID:
:::image type="content" source="media/howto-query-with-rest-api/show-device-template-id.png" alt-text="Screenshot that shows how to find the device template ID in the page URL.":::
If your device template uses components such as the **Device information** compo
```json
{
- "query": "SELECT deviceInformation.model, deviceInformation.swVersion FROM urn:modelDefinition:fupmoiu28b:ymju9efv9"
+ "query": "SELECT deviceInformation.model, deviceInformation.swVersion FROM dtmi:azurertos:devkit:hlby5jgib2o"
}
```
Use the `AS` keyword to define an alias for an item in the `SELECT` clause. The
```json
{
- "query": "SELECT $id as ID, $ts as timestamp, temperature as t, pressure as p FROM urn:modelDefinition:fupmoiu28b:ymju9efv9 WHERE WITHIN_WINDOW(P1D) AND t > 0 AND p > 50"
+ "query": "SELECT $id as ID, $ts as timestamp, temperature as t, pressure as p FROM dtmi:azurertos:devkit:hlby5jgib2o WHERE WITHIN_WINDOW(P1D) AND t > 0 AND p > 50"
}
```
Use the `TOP` to limit the number of results the query returns. For example, the
```json
{
- "query": "SELECT TOP 10 $id as ID, $ts as timestamp, temperature, humidity FROM urn:modelDefinition:fupmoiu28b:ymju9efv9"
+ "query": "SELECT TOP 10 $id as ID, $ts as timestamp, temperature, humidity FROM dtmi:azurertos:devkit:hlby5jgib2o"
}
```
To find a device template ID, navigate to the **Devices** page in your IoT Centr
:::image type="content" source="media/howto-query-with-rest-api/show-device-template-id.png" alt-text="Screenshot that shows how to find the device template ID in the page URL.":::
-You can also use the [Devices - Get](/rest/api/iotcentral/1.1-previewdataplane/devices/get) REST API call to get the device template ID for a device.
+You can also use the [Devices - Get](/rest/api/iotcentral/1.2-previewdataplane/devices/get) REST API call to get the device template ID for a device.
## WHERE clause
To get telemetry received by your application within a specified time window, us
```json
{
- "query": "SELECT $id, $ts, temperature, humidity FROM urn:modelDefinition:fupmoiu28b:ymju9efv9 WHERE WITHIN_WINDOW(P1D)"
+ "query": "SELECT $id, $ts, temperature, humidity FROM dtmi:azurertos:devkit:hlby5jgib2o WHERE WITHIN_WINDOW(P1D)"
}
```
You can get telemetry or property values based on specific values. For example,
```json
{
- "query": "SELECT $id, $ts, temperature AS t, pressure AS p FROM urn:modelDefinition:fupmoiu28b:ymju9efv9 WHERE WITHIN_WINDOW(P1D) AND t > 0 AND p > 50 AND $id IN ['sample-002', 'sample-003']"
+ "query": "SELECT $id, $ts, temperature AS t, pressure AS p FROM dtmi:azurertos:devkit:hlby5jgib2o WHERE WITHIN_WINDOW(P1D) AND t > 0 AND p > 50 AND $id IN ['sample-002', 'sample-003']"
}
```
Aggregation functions let you calculate values such as average, maximum, and min
```json
{
- "query": "SELECT AVG(temperature), AVG(pressure) FROM urn:modelDefinition:fupmoiu28b:ymju9efv9 WHERE WITHIN_WINDOW(P1D) AND $id='{{DEVICE_ID}}' GROUP BY WINDOW(PT10M)"
+ "query": "SELECT AVG(temperature), AVG(pressure) FROM dtmi:azurertos:devkit:hlby5jgib2o WHERE WITHIN_WINDOW(P1D) AND $id='{{DEVICE_ID}}' GROUP BY WINDOW(PT10M)"
}
```
The `ORDER BY` clause lets you sort the query results by a telemetry value, the
```json
{
- "query": "SELECT $id as ID, $ts as timestamp, temperature, humidity FROM urn:modelDefinition:fupmoiu28b:ymju9efv9 ORDER BY timestamp DESC"
+ "query": "SELECT $id as ID, $ts as timestamp, temperature, humidity FROM dtmi:azurertos:devkit:hlby5jgib2o ORDER BY timestamp DESC"
}
```
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-api-tour.md
This article introduces you to Azure IoT Central REST API. Use the API to create
The REST API operations are grouped into the:
-- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/1.0dataplane/api-tokens) and [preview](/rest/api/iotcentral/1.1-previewdataplane/api-tokens) versions of the data plane API.
+- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/1.0dataplane/api-tokens) and [preview](/rest/api/iotcentral/1.2-previewdataplane/api-tokens) versions of the data plane API.
- *Control plane* operations that let you work with the Azure resources associated with IoT Central applications. Control plane operations let you automate tasks that can also be completed in the Azure portal.

## Data plane operations
iot-dps Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/libraries-sdks.md
+
+ Title: IoT Hub Device Provisioning Service libraries and SDKs
+description: Information about the device and service libraries available for developing solutions with Device Provisioning Service (DPS).
+ Last updated: 01/26/2022
+# Microsoft SDKs for IoT Hub Device Provisioning Service
+
+The Device Provisioning Service (DPS) libraries and SDKs help developers build IoT solutions using various programming languages on multiple platforms. The following tables include links to samples and quickstarts to help you get started.
+
+## Device SDKs
+
+| Platform | Package | Code repository | Samples | Quickstart | Reference |
+| --|--|--|--|--|--|
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/device)|[Quickstart](/azure/iot-dps/quick-create-simulated-device-x509?tabs=windows&pivots=programming-language-csharp)| [Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
+| C|[apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning\_client)|[Samples](https://github.com/Azure/azure-iot-sdk-c/tree/main/provisioning_client/samples)|[Quickstart](/azure/iot-dps/quick-create-simulated-device-x509?tabs=windows&pivots=programming-language-ansi-c)|[Reference](/azure/iot-hub/iot-c-sdk-ref/) |
+| Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-device-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-java/blob/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/provisioning/provisioning-samples)|[Quickstart](/azure/iot-dps/quick-create-simulated-device-x509?tabs=windows&pivots=programming-language-java)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device) |
+| Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-device) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/device/samples)|[Quickstart](/azure/iot-dps/quick-create-simulated-device-x509?tabs=windows&pivots=programming-language-nodejs)|[Reference](/javascript/api/overview/azure/iothubdeviceprovisioning) |
+| Python|[pip](https://pypi.org/project/azure-iot-device/) |[GitHub](https://github.com/Azure/azure-iot-sdk-python)|[Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-hub-scenarios)|[Quickstart](/azure/iot-dps/quick-create-simulated-device-x509?tabs=windows&pivots=programming-language-python)|[Reference](/python/api/azure-iot-device/azure.iot.device.provisioningdeviceclient) |
+
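+As a quick orientation to the device SDKs above, a minimal provisioning sketch with the Python package (assuming symmetric key attestation; the ID scope, registration ID, and key are placeholders):
+
+```python
+from azure.iot.device import ProvisioningDeviceClient
+
+# Placeholders: copy the ID scope from your DPS instance and use a real
+# registration ID and enrollment key.
+client = ProvisioningDeviceClient.create_from_symmetric_key(
+    provisioning_host="global.azure-devices-provisioning.net",
+    registration_id="my-device-001",
+    id_scope="0ne000AAAAA",
+    symmetric_key="<base64-device-key>",
+)
+
+result = client.register()  # Returns a RegistrationResult
+print(result.status)        # "assigned" when provisioning succeeds
+```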
+Microsoft also provides embedded device SDKs to facilitate development on resource-constrained devices. To learn more, see the [IoT Device Development Documentation](../iot-develop/about-iot-sdks.md).
+
+## Service SDKs
+
+| Platform | Package | Code repository | Samples | Quickstart | Reference |
+| --|--|--|--|--|--|
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/service)|[Quickstart](/azure/iot-dps/quick-enroll-device-tpm?tabs=symmetrickey&pivots=programming-language-csharp)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.service) |
+| Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-service-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-java/blob/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/provisioning/provisioning-samples)|[Quickstart](/azure/iot-dps/quick-enroll-device-tpm?tabs=symmetrickey&pivots=programming-language-java)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.service) |
+| Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-service)|[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/service/samples)|[Quickstart](/azure/iot-dps/quick-enroll-device-tpm?tabs=symmetrickey&pivots=programming-language-nodejs)|[Reference](/javascript/api/overview/azure/iothubdeviceprovisioning) |
+
+## Management SDKs
+
+| Platform | Package | Code repository | Reference |
+| --|--|--|--|
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.DeviceProvisioningServices) |[GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/deviceprovisioningservices/Microsoft.Azure.Management.DeviceProvisioningServices)| -- |
+| Node.js|[npm](https://www.npmjs.com/package/@azure/arm-deviceprovisioningservices)|[GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/deviceprovisioningservices/arm-deviceprovisioningservices)|[Reference](/javascript/api/@azure/arm-deviceprovisioningservices) |
+| Python|[pip](https://pypi.org/project/azure-mgmt-iothubprovisioningservices/) |[GitHub](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/iothub/azure-mgmt-iothubprovisioningservices)|[Reference](/python/api/azure-mgmt-iothubprovisioningservices) |
+
+## Next steps
+
+The Device Provisioning Service documentation also provides [tutorials](how-to-legacy-device-symm-key.md) and [additional samples](quick-create-simulated-device-tpm.md) that you can use to try out the SDKs and libraries.
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-overview.md
Last updated 01/25/2022
# Azure Key Vault soft-delete overview

> [!IMPORTANT]
-> You must enable soft-delete on your key vaults immediately. The ability to opt out of soft-delete is deprecated and will be removed in February 202. See full details [here](soft-delete-change.md)
+> You must enable soft-delete on your key vaults immediately. The ability to opt out of soft-delete is deprecated and will be removed in February 2025. See full details [here](soft-delete-change.md)
> [!IMPORTANT]
> When a Key Vault is soft-deleted, services that are integrated with the Key Vault will be deleted, for example, Azure RBAC role assignments and Event Grid subscriptions. Recovering a soft-deleted Key Vault will not restore these services. They will need to be recreated.
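As an illustration of that recovery step, a sketch using the Azure CLI (the vault name is a placeholder; integrations such as role assignments must still be recreated afterwards):

```azurecli
az keyvault list-deleted
az keyvault recover --name my-deleted-vault
```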
lab-services Class Type React Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-react-linux.md
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)]
-[React](https://reactjs.org/) is a popular JavaScript library for building user interfaces (UI). React is a declarative way to create reusable components for your website. There are many other popular libraries for JavaScript-based front-end development. We'll use a few of these libraries while creating our lab. [Redux](https://redux.js.org/) is a library that provides predictable state container for JavaScript apps and is often used in compliment with React. [JSX](https://reactjs.org/docs/introducing-jsx.html) is a library syntax extension to JavaScript often used with React to describe what the UI should look like. [NodeJS](https://nodejs.org/) is a convenient way to run a webserver for your React application.
+[React](https://reactjs.org/) is a popular JavaScript library for building user interfaces (UI). React is a declarative way to create reusable components for your website. There are many other popular libraries for JavaScript-based front-end development. We'll use a few of these libraries while creating our lab. [Redux](https://redux.js.org/) is a library that provides a predictable state container for JavaScript apps and is often used together with React. [JSX](https://reactjs.org/docs/introducing-jsx.html) is a syntax extension to JavaScript often used with React to describe what the UI should look like. [NodeJS](https://nodejs.org/) is a convenient way to run a web server for your React application.
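As a minimal sketch of the ideas above (a reusable component, JSX describing the UI, and local state that a library like Redux would centralize), here is an illustrative TSX component; the names are hypothetical and not part of the lab setup.

```tsx
// Sketch only: a reusable React component. JSX/TSX describes what the UI looks like.
import React, { useState } from "react";

type GreetingProps = { name: string };

function Greeting({ name }: GreetingProps) {
  // Local component state; Redux would move state like this into a central store.
  const [clicks, setClicks] = useState(0);
  return (
    <button onClick={() => setClicks(clicks + 1)}>
      Hello {name}, clicked {clicks} times
    </button>
  );
}

export default Greeting;
```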
This article shows you how to install [Visual Studio Code](https://code.visualstudio.com/) for your development environment, the tools, and libraries needed for a React web development class.
To set up this lab, you need an Azure subscription to get started. If you don't
### Lab plan settings
-Once you get have Azure subscription, you can create a new lab plan in Azure Lab Services. For more information about creating a new lab plan, see the tutorial on [how to set up a lab plan](./tutorial-setup-lab-plan.md). You can also use an existing lab plan.
+Once you have an Azure subscription, you can create a new lab plan in Azure Lab Services. For more information on creating a new lab plan, see the tutorial on [how to set up a lab plan](./tutorial-setup-lab-plan.md). You can also use an existing lab plan.
Enable your lab plan settings as described in the following table. For more information about how to enable Azure Marketplace images, see [Specify the Azure Marketplace images available to lab creators](./specify-marketplace-images.md).
Enable your lab plan settings as described in the following table. For more info
### Lab settings
-For instructions on how to create a lab, see [Tutorial: Set up a lab](tutorial-setup-lab.md). Use the following settings when creating the lab.
+For instructions on how to create a lab, see [Tutorial: Set up a lab](tutorial-setup-lab.md). Use the following settings when creating the lab.
| Lab setting | Value | | | |
sudo iptables -I INPUT -p tcp -m tcp --dport 3000 -j ACCEPT
## Cost
-Let's cover an example cost estimate for this class. The virtual machine size we chose was **Small**, which is 20 lab units.
+Let's cover an example cost estimate for this class. The virtual machine size we chose was **Small**, which is 20 lab units.
For a class of 25 students with 20 hours of scheduled class time and 10 hours of quota for homework or assignments, the cost estimate would be:
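As a worked example of how such an estimate is typically computed (assuming the commonly cited rate of 0.01 USD per lab unit hour; verify current pricing): 25 students × (20 scheduled hours + 10 quota hours) × 20 lab units × 0.01 USD = 150.00 USD for the class.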
lab-services How To Create Schedules Within Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-schedules-within-teams.md
Title: Create Azure Lab Services schedules within Teams description: Learn how to create Lab Services schedules within Teams. Previously updated : 02/05/2022 Last updated : 04/25/2022+ # Create and manage Lab Services schedules within Teams
-Schedules allow you to configure a lab such that VMs in the lab automatically start and shut down at a specified time. You can define a one-time schedule or a recurring schedule. The article covers the procedures to create and manage schedules for a lab.
+Schedules allow you to configure a classroom lab such that the VMs automatically start and shut down at a specified time. You can define a one-time schedule or a recurring schedule. The article covers the procedures to create and manage schedules for a lab.
Here's how schedules affect lab virtual machines: -- Template virtual machine isn't included in schedules.-- Only assigned virtual machines are started. If a machine is not claimed by user (student), the machine won't start on the scheduled hours.
+- Template VM isn't included in schedules.
+- Only assigned virtual machines are started. If a machine isn't claimed by user (student), the machine won't start on the scheduled hours.
- All virtual machines (whether claimed by a user or not) are stopped based on the lab schedule. > [!IMPORTANT]
-> The scheduled running time of VMs does not count against the quota allotted to a user. The quota is for the time outside of schedule hours that a student spends on VMs.
+> The scheduled run time of VMs doesn't count against the quota allotted to a user. The allotted quota is for the time outside of schedule hours that a student spends on VMs.
Users can create, edit, and delete lab schedules within Teams as in the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). For more information, see [creating and managing schedules](how-to-create-schedules-within-teams.md).
For more information, see the article on [configuring auto-shutdown settings for
## Next steps -- [Use Azure Lab Services within Teams overview](lab-services-within-teams-overview.md)
+- [Use Azure Lab Services within Teams overview](lab-services-within-teams-overview.md).
- As an educator, [manage the VM pool within Teams](how-to-manage-vm-pool-within-teams.md). - As an educator, [manage lab user lists from Teams](how-to-manage-user-lists-within-teams.md).-- As an admin or educator, [delete labs within Teams](how-to-delete-lab-within-teams.md)-- As student, [access a VM within Teams](how-to-access-vm-for-students-within-teams.md)
+- As an admin or educator, [delete the labs within Teams](how-to-delete-lab-within-teams.md).
+- As a student, [access a VM within Teams](how-to-access-vm-for-students-within-teams.md).
lab-services How To Delete Lab Within Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-delete-lab-within-teams.md
Title: Delete an Azure Lab Services lab from Teams description: Learn how to delete an Azure Lab Services lab from Teams. Previously updated : 02/05/2022 Last updated : 04/25/2022+ # Delete labs within Teams
This article shows how to delete a lab from the **Azure Lab Services** app.
## Delete labs
-A lab created within Teams can be deleted in the [Lab Services portal](https://labs.azure.com) directly. For more information, see [Delete a lab](manage-labs.md#delete-a-lab).
+A lab created within Teams can be deleted in the [Lab Services portal](https://labs.azure.com) directly. For more information, see [Delete a lab](manage-labs.md#delete-a-lab).
-Lab deletion is also triggered when the team is deleted. If the associated team is deleted, the lab will be automatically deleted 24 hours later when the automatic user list sync is triggered.
+Lab deletion is also triggered by the team deletion. The lab is automatically deleted 24 hours after the team is deleted, when the automatic user list sync is triggered.
> [!IMPORTANT] > Deletion of the tab or uninstalling the app will not result in deletion of the lab.
-If the *tab* is deleted in Teams, users can still access the lab VMs on the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). When the team is deleted or the lab is explicitly deleted, users can no longer access their VMs through the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com).
+If the *tab* is deleted in Teams, users can still access the lab VMs on the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). However, users can't access their VMs through the Lab Services web portal if the team or the lab is deleted explicitly.
## Next steps -- As an educator, [create a lab within Teams](how-to-get-started-create-lab-within-teams.md).-- As an educator, [manage the VM pool within Teams](how-to-manage-vm-pool-within-teams.md).-- As an educator, [create and manage schedules within Teams](how-to-create-schedules-within-teams.md).-- As an educator, [manage lab user lists from Teams](how-to-manage-user-lists-within-teams.md).-- As student, [access a VM within Teams](how-to-access-vm-for-students-within-teams.md)
+* As an educator, [create a lab within Teams](how-to-get-started-create-lab-within-teams.md).
+* As an educator, [manage the VM pool within Teams](how-to-manage-vm-pool-within-teams.md).
+* As an educator, [create and manage schedules within Teams](how-to-create-schedules-within-teams.md).
+* As an educator, [manage lab user lists from Teams](how-to-manage-user-lists-within-teams.md).
+* As a student, [access a VM within Teams](how-to-access-vm-for-students-within-teams.md).
lab-services How To Manage Vm Pool Within Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-vm-pool-within-teams.md
# Manage a VM pool in Lab Services from Teams
-Virtual Machine (VM) creation starts as soon as the template VM is first published. VMs equaling the number of users in the lab user list will be created. VMs are automatically assigned to students when they first access the Azure Lab Services lab.
+Virtual Machine (VM) creation starts as soon as the template VM is first published.
+A number of VMs equal to the number of users in the lab's user list will be created. VMs are also automatically assigned to students when they first access the Azure Lab Services lab.
## Publish a template and manage a VM pool
-To publish the template, go to the Teams Lab Services window, select **Template** tab > **...** > **Publish**.
+To publish the template, go to the **Azure Lab Services** window in Teams and select the **Template** tab > **...** > **Publish**.
-Once the lab is published and VMs are created, users will be automatically registered to the lab. Lab VMs will be assigned to users the first time they first access the tab having **Azure Lab Services** App.
+Once the lab is published and VMs are created, users will be automatically registered to the lab. Lab VMs are assigned to users the first time they access the tab that contains the **Azure Lab Services** app.
-Team membership and lab user list are kept in sync. The lab capacity (number of VMs in the lab) will be automatically updated based on the changes to the team membership. New VMs will be created as new users are added to the team. VMs assigned to the users removed from the team will be deleted. For more information, see [How to manage users within Teams](how-to-manage-user-lists-within-teams.md).
+Team membership and lab user list are kept in sync. The lab capacity (number of VMs in the lab) is automatically updated based on the changes to the team membership. New VMs are created whenever new users are added to the team. VMs of users that are no longer part of the team are deleted. For more information, see [How to manage users within Teams](how-to-manage-user-lists-within-teams.md).
-Educators can continue to access student VMs directly from the VM Pool tab. And educators can access VMs assigned to themselves either from the **Virtual machine pool** tab or by clicking on the **My Virtual Machines** button (top-right corner of the screen).
+Educators can continue to access student VMs directly from the VM Pool tab. Educators can also access VMs assigned to themselves either from the **Virtual machine pool** tab or by clicking on the **My Virtual Machines** button (top-right corner of the screen).
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/how-to-manage-vm-pool-with-teams/vm-pool.png" alt-text="Screenshot of the VM pool.":::
lab-services Lab Account Owner Support Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-account-owner-support-information.md
The support information includes:
1. Enter detailed **support instructions** (optional). Lab owners and users will see this text along with the support contact information. URLs will be automatically turned into links. 1. Select **Save** on the toolbar.
- :::image type="content" source="./media/lab-account-owner-support-information/internal-support-page.png" alt-text="Screenshot of Internal support page.":::
+ :::image type="content" source="./media/lab-account-owner-support-information/lab-account-internal-support-page.png" alt-text="Screenshot of the Internal support page.":::
## Next steps
lab-services Lab Creator Support Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-creator-support-information.md
Title: View support information (lab creator) description: This article explains how lab creators can view support information that they can use to get help. Previously updated : 11/24/2021 Last updated : 04/25/2022
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
For more information, review the [Azurite documentation](https://github.com/Azur
Currently, you can have both Consumption (multi-tenant) and Standard (single-tenant) extensions installed at the same time. The development experiences differ from each other in some ways, but your Azure subscription can include both Standard and Consumption logic app types. Visual Studio Code shows all the deployed logic apps in your Azure subscription, but organizes your apps under each extension, **Azure Logic Apps (Consumption)** and **Azure Logic Apps (Standard)**.
-* To use the [Inline Code Operations action](../logic-apps/logic-apps-add-run-inline-code.md) that runs JavaScript, install [Node.js versions 10.x.x, 11.x.x, or 12.x.x](https://nodejs.org/en/download/releases/).
+* To use the [Inline Code Operations action](../logic-apps/logic-apps-add-run-inline-code.md) that runs JavaScript, install [Node.js versions 12.x.x or 14.x.x](https://nodejs.org/en/download/releases/).
> [!TIP] > For Windows, download the MSI version. If you use the ZIP version instead, you have to
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
ms.suite: integration Previously updated : 04/04/2022 Last updated : 04/26/2022
The following table briefly summarizes differences between the **Logic App (Stan
## Logic App (Standard) resource
-The **Logic App (Standard)** resource type is powered by the redesigned single-tenant Azure Logic Apps runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is hosted as an extension on the Azure Functions runtime. This design provides portability, flexibility, and more performance for your logic app workflows plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem. For example, you can create, deploy, and run single-tenant based logic apps and their workflows in [Azure App Service Environment v3](../app-service/environment/overview.md).
+The **Logic App (Standard)** resource type is powered by the redesigned single-tenant Azure Logic Apps runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is hosted as an extension on the Azure Functions runtime. This design provides portability, flexibility, and more performance for your logic app workflows plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem. For example, you can create, deploy, and run single-tenant based logic apps and their workflows in [Azure App Service Environment v3 (Windows plans only)](../app-service/environment/overview.md).
The Standard resource type introduces a resource structure that can host multiple workflows, similar to how an Azure function app can host multiple functions. With a 1-to-many mapping, workflows in the same logic app and tenant share compute and processing resources, providing better performance due to their proximity. This structure differs from the **Logic App (Consumption)** resource where you have a 1-to-1 mapping between a logic app resource and a workflow.
To learn more about portability, flexibility, and performance improvements, cont
### Portability and flexibility
-When you create logic apps using the **Logic App (Standard)** resource type, you can deploy and run your workflows in other environments, such as [Azure App Service Environment v3](../app-service/environment/overview.md). If you use Visual Studio Code with the **Azure Logic Apps (Standard)** extension, you can *locally* develop, build, and run your workflows in your development environment without having to deploy to Azure. If your scenario requires containers, [create single-tenant based logic apps using Azure Arc-enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md). For more information, review [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md)
+When you create logic apps using the **Logic App (Standard)** resource type, you can deploy and run your workflows in other environments, such as [Azure App Service Environment v3 (Windows plans only)](../app-service/environment/overview.md). If you use Visual Studio Code with the **Azure Logic Apps (Standard)** extension, you can *locally* develop, build, and run your workflows in your development environment without having to deploy to Azure. If your scenario requires containers, [create single-tenant based logic apps using Azure Arc-enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md). For more information, review [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md)
These capabilities provide major improvements and substantial benefits compared to the multi-tenant model, which requires you to develop against an existing running resource in Azure. Also, the multi-tenant model for automating **Logic App (Consumption)** resource deployment is completely based on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both apps and infrastructure.
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
To secure the training environment, use the following steps:
1. If your compute cluster or compute instance does not use a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources. > [!TIP]
- > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, they communicate with the Azure Batch Services over the public IP. If created without a public IP, they communicate with Azure Batch Services over the private IP. When using a private IP, you need to allow inbound communications from Azure Batch.
+ > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept inbound access from the Azure Batch service and the Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept inbound access from the Azure Batch service and the Azure Machine Learning service without a public IP.
:::image type="content" source="./media/how-to-network-security-overview/secure-training-environment.svg" alt-text="Diagram showing how to secure managed compute clusters and instances.":::
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
When the creation process finishes, you train your model by using the cluster in
When you enable **No public IP**, your compute cluster doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute cluster nodes from the internet thus eliminating a significant threat vector. **No public IP** clusters help comply with no public IP policies many enterprises have.
+> [!WARNING]
+> By default, you do not have public internet access from a No Public IP compute cluster. You need to configure User Defined Routing (UDR) to reach a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview) with a public IP.
+ A compute cluster with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork** and any port source, to destination of **VirtualNetwork**, and destination port of **29876, 29877**. **No public IP** clusters are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
For steps on how to create a compute instance deployed in a virtual network, see
When you enable **No public IP**, your compute instance doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute instance node from the internet thus eliminating a significant threat vector. Compute instances will also do packet filtering to reject any traffic from outside virtual network. **No public IP** instances are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
+> [!WARNING]
+> By default, you do not have public internet access from a No Public IP compute instance. You need to configure User Defined Routing (UDR) to reach a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview) with a public IP.
For **outbound connections** to work, you need to set up an egress firewall such as Azure Firewall with user-defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute instance is deployed. The route table entry can set up the next hop of the private IP address of the firewall with the address prefix of 0.0.0.0/0. A compute instance with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork**, any port source, destination of **VirtualNetwork**, and destination port of **29876, 29877, 44224**.
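To make the route-table setup concrete, here is a hedged sketch that defines the 0.0.0.0/0 route to a firewall's private IP with the JavaScript management SDK (`@azure/arm-network`); the resource group, route table name, region, and firewall IP are placeholders.

```typescript
// Sketch only: create a route table sending internet-bound traffic to a firewall.
import { DefaultAzureCredential } from "@azure/identity";
import { NetworkManagementClient } from "@azure/arm-network";

const subscriptionId = process.env.AZURE_SUBSCRIPTION_ID!; // assumed env var
const client = new NetworkManagementClient(new DefaultAzureCredential(), subscriptionId);

async function createEgressRouteTable(): Promise<void> {
  await client.routeTables.beginCreateOrUpdateAndWait("my-rg", "aml-egress-routes", {
    location: "eastus", // placeholder region
    routes: [
      {
        name: "default-to-firewall",
        addressPrefix: "0.0.0.0/0",      // all internet-bound traffic
        nextHopType: "VirtualAppliance", // next hop is the firewall
        nextHopIpAddress: "10.0.1.4",    // placeholder firewall private IP
      },
    ],
  });
  console.log("Route table created; associate it with the compute subnet.");
}

createEgressRouteTable().catch(console.error);
```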
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-managed.md
Title: Configure a managed application plan
-description: Configure a managed application plan for your Azure application offer in Partner Center (Azure Marketplace).
+description: Configure a managed application plan for an Azure application offer in Partner Center.
Previously updated : 11/02/2021 Last updated : 03/29/2022 # Configure a managed application plan
Prices are set in USD (USD = United States Dollar) are converted into the local
Prices set in USD (USD = United States Dollar) are converted into the local currency of all selected markets using the current exchange rates when saved. Validate these prices before publishing by exporting the pricing spreadsheet and reviewing the price in each market. If you would like to set custom prices in an individual market, modify and import the pricing spreadsheet.
-Review your prices carefully before publishing, as there are some restrictions on what can change after a plan is published.
-
-> [!NOTE]
-> After a price for a market in your plan is published, it can't be changed later.
- To set custom prices in an individual market, export, modify, and then import the pricing spreadsheet. You're responsible for validating this pricing and owning these settings. For detailed information, see [Custom prices](plans-pricing.md#custom-prices). 1. You must first save your pricing changes to enable export of pricing data. Near the bottom of the **Pricing and availability** tab, select **Save draft**.
marketplace Azure App Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-metered-billing.md
The attributes, which define the dimension itself, are shared across all plans f
* Name * Unit of measure
-The other attributes of a dimension are specific to each plan and can have different values from plan to plan. Before you publish the plan, you can edit these values and only this plan will be affected. Once you publish the plan, these attributes will no longer be editable. The attributes are:
+The other attributes of a dimension are specific to each plan and can have different values from plan to plan. Before you publish the plan, you can edit these values and only this plan will be affected. Once you publish the plan, the following attributes will no longer be editable:
-* Price per unit
-* Included quantity for monthly customers
-* Included quantity for annual customers
+* Included quantity for monthly customers
+* Included quantity for annual customers
Dimensions also have two special concepts, "enabled" and "infinite": * **Enabled** indicates that this plan participates in this dimension. You might want to leave this option un-checked if you are creating a new plan that does not send usage events based on this dimension. Also, any new dimensions added after a plan was first published will show up as "not enabled" on the already published plan. A disabled dimension will not show up in any lists of dimensions for a plan seen by customers. * **Infinite**, represented by the infinity symbol "∞", indicates that this plan participates in this dimension, without metered usage against this dimension. If you want to indicate to your customers that the functionality represented by this dimension is included in the plan, but with no limit on usage. A dimension with infinite usage will show up in lists of dimensions for a plan seen by customers. This plan will never incur a charge.
->[!Note]
->The following scenarios are explicitly supported: <br> - You can add a new dimension to a new plan. The new dimension will not be enabled for any already published plans. <br> - You can publish a plan with a fixed monthly fee and without any dimensions, then add a new plan and configure a new dimension for that plan. The new dimension will not be enabled for already published plans.
+>[!Note]
+>The following scenarios are explicitly supported:
+>- You can add a new dimension to a new plan. The new dimension will not be enabled for any already published plans.
+>- You can publish a plan with a fixed monthly fee and without any dimensions, then add a new plan and configure a new dimension for that plan. The new dimension will not be enabled for already-published plans.
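For orientation, the sketch below shows the shape of a usage event posted to the Marketplace metering service API for one of these dimensions. Token acquisition is out of scope here, and the resource ID, dimension ID, and plan ID are placeholders; the global `fetch` assumes Node.js 18+ or a browser.

```typescript
// Sketch only: report consumed units against a metered dimension.
async function emitUsageEvent(accessToken: string): Promise<void> {
  const response = await fetch(
    "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${accessToken}`,
      },
      body: JSON.stringify({
        resourceId: "<purchased-resource-guid>",      // placeholder
        quantity: 5,                                  // units beyond the included quantity
        dimension: "messages",                        // must match a published dimension ID
        effectiveStartTime: new Date().toISOString(), // when the usage occurred
        planId: "plan-a",                             // placeholder
      }),
    }
  );
  console.log("Metering service responded with status", response.status);
}
```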
## Constraints ### Locking behavior
-A dimension used with the Marketplace metering service represents an understanding of how a customer will be paying for the service. All details of a dimension are no longer editable once an offer is published. Before publishing your offer, it's important that you have your dimensions fully defined.
+A dimension used with the Marketplace metering service represents an understanding of how a customer will be paying for the service. All details of a dimension are no longer editable once an offer is published. Before publishing your offer, it's important that you have your dimensions fully defined.
Once an offer is published with a dimension, the offer-level details for that dimension can no longer be changed:
Once an offer is published with a dimension, the offer-level details for that di
Once a plan is published, the plan-level details can no longer be changed:
-* Price per unit
* Included quantity for monthly term * Whether the dimension is enabled for the plan
Follow the instruction in [Support for the commercial marketplace program in Par
## Next steps - See [Marketplace metering service APIs](marketplace-metering-service-apis.md) for more information.+
marketplace Azure Vm Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-manage.md
Complete these steps when you are notified that new core sizes are now supported
1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002). 1. On the Home page, select the **Marketplace offers** tile.
- [ ![Illustrates the Marketplace offers tile on the Partner Center Home page.](./media/workspaces/partner-center-home.png) ](./media/workspaces/partner-center-home.png#lightbox)
+ [![Screenshot shows the marketplace offers tile on the Partner Center home page.](./media/workspaces/partner-center-home.png)](./media/workspaces/partner-center-home.png#lightbox)
1. On the **Overview** page, select your VM offer. 1. On the **Offer overview** page, under **Plan overview**, select a plan within your offer.
-1. In the left-nav, select **Pricing and availability**.
+1. In the left-nav menu, select **Pricing and availability**.
1. Do one of the following: - If either the _Per core size_ or _Per market and core size_ price entry options are used, under **Pricing**, verify the price and make any necessary adjustments for the new core sizes that have been added. - If your price entry option is set to _Free_, _Flat rate_, or _Per core_, go to step 7.
-1. Select **Save draft** and then select **Review and publish**. After the offer is republished, the new core sizes will be available to your customers at the prices that you have set.
+1. Select **Save draft** and then **Review and publish**. After the offer is republished, the new core sizes will be available to your customers at the prices that you have set.
marketplace Azure Vm Plan Pricing And Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-pricing-and-availability.md
Title: Configure pricing and availability for a virtual machine offer on Azure Marketplace
-description: Configure pricing and availability for a virtual machine offer in the Microsoft commercial marketplace.
+ Title: Configure pricing and availability for a virtual machine offer in Partner Center
+description: Configure pricing and availability for a virtual machine offer in Partner Center.
Select **Save** to continue.
## Pricing
-For the **License model**, select **Usage-based monthly billed plan** to configure pricing for this plan, or **Bring your own license** to let customers use this plan with their existing license.
+For the **License model**, select **Usage-based monthly billed plan** to configure pricing for this plan, or **Bring your own license** to let customers use this plan with their existing license.
-For a usage-based monthly billed plan, Microsoft will charge the customer for their hourly usage and they're billed monthly. This is our _Pay-as-you-go_ plan, where customers are only billed for the hours that they've used. When you select this plan, choose one of the following pricing options:
+For a usage-based monthly billed plan, Microsoft will charge the customer for their hourly usage and they're billed monthly. This is our *Pay-as-you-go* plan, where customers are only billed for the hours that they've used. When you select this plan, choose one of the following pricing options:
- **Free** – Your VM offer is free. - **Flat rate** – Your VM offer is the same hourly price regardless of the hardware it runs on.
For a usage-based monthly billed plan, Microsoft will charge the customer for th
- **Per core size** – Your VM offer is priced based on the number of CPU cores on the hardware it's deployed on. - **Per market and core size** – Assign prices based on the number of CPU cores on the hardware it's deployed on, and also for all markets. Currency conversion is done by you, the publisher. This option is easier if you use the import pricing feature.
-For **Per core size** and **Per market and core size**, enter a **Price per core**, and then select **Generate prices**. The tables of price/hour calculations are populated for you. You can then adjust the price per core, if you choose. If using the _Per market and core size_ pricing option, you can additionally customize the price/hour calculation tables for each market thatΓÇÖs selected for this plan.
+For **Per core size** and **Per market and core size**, enter a **Price per core**, and then select **Generate prices**. The tables of price/hour calculations are populated for you. You can then adjust the price per core, if you choose. If using the *Per market and core size* pricing option, you can additionally customize the price/hour calculation tables for each market that's selected for this plan.
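As an illustration of how the generated table works (numbers are hypothetical, not actual marketplace rates): with a price per core of 0.05 USD, **Generate prices** would populate an 8-core VM size at 8 × 0.05 = 0.40 USD/hour, which you can then adjust per size or, with *Per market and core size*, per market.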
> [!NOTE]
-> To ensure that the prices are right before you publish them, export the pricing spreadsheet and review the prices in each market. Before you export pricing data, first select **Save draft** near the bottom of the page to save pricing changes.
+> To ensure the prices are right before you publish them, export the pricing spreadsheet and review them in each market. Before you export pricing data, first select **Save draft** to save pricing changes.
When selecting a pricing option, Microsoft does the currency conversion for the Flat rate, Per core, and Per core size pricing options.
Private offers aren't supported with Azure subscriptions established through a r
If your virtual machine is meant to be used only indirectly when it's referenced through another solution template or managed application, select this check box to publish the virtual machine but hide it from customers who might be searching or browsing for it directly.
-Any Azure customer can deploy the offer using either PowerShell or CLI. If you wish to make this offer available to a limited set of customers, then set the plan to **Private**.
+Any Azure customer can deploy the offer using either PowerShell or CLI. If you wish to make this offer available to a limited set of customers, then set the plan to **Private**.
Hidden plans don't generate preview links. However, you can test them by [following these steps](azure-vm-create-faq.yml#how-do-i-test-a-hidden-preview-image-). Select **Save draft** before continuing to the next tab in the left-nav Plan menu, **Technical configuration**.
-## Next step
+## Next steps
- [Technical configuration](azure-vm-plan-technical-configuration.md)
marketplace Azure Vm Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-pricing.md
Title: Configuring prices for monthly billing in Azure Marketplace
-description: Learn how to Configuring prices for VM.
+ Title: Configuring prices for virtual machines in Partner Center.
+description: Learn how to configure prices for virtual machines in Partner Center.
Previously updated : 09/28/2021 Last updated : 03/28/2022 # Configure prices for usage-based monthly billing
Some things to consider when selecting a pricing option:
- In the first four options, Microsoft does the currency conversion. - Microsoft suggests using a flat rate pricing for software solutions.-- Prices are fixed, so once published they cannot be adjusted. However, if you would like to reduce prices for your VM offers you can open a [support ticket](./support.md).
+- See [Changing prices in active commercial marketplace offers](price-changes.md) for details and limitations on changing prices in active offers.
## New offering pricing Microsoft Azure is regularly adding new VM infrastructure. Occasionally we add a machine that has a CPU count that wasn't offered before. Microsoft determines the price for the new core size based on previous pricing and adds them as suggested prices.
-Publishers receive an email when the price is set for new core sizes and will have some time to review and make adjustments as needed. After the deadline passes microsoft publishes the prices for the newly added core sizes.
+Publishers receive an email when the price is set for newly added core sizes and will have some time to review and make adjustments as needed. After the deadline passes, Microsoft publishes the new prices.
-If the publisher chose Free, Flat or Per core size, then the publisher has already provided the necessary details on how to price the offer for new core sizes and no further action is needed. However, if the publisher previously selected the Per core size, or Per market and core size, then they would need to contact Microsoft with their updated pricing information.
+If the publisher chose Free, Flat rate, or Per core, the publisher has already provided the necessary details on how to price the offer for new core sizes and no further action is needed. However, if the publisher previously selected Per core size or Per market and core size, they need to contact us (see the link below) with their updated pricing information.
## Next steps -- If you have any questions, open a ticket with [support](./support.md).
+- If you have questions, [contact support](https://go.microsoft.com/fwlink/?linkid=2056405).
marketplace Determine Your Listing Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/determine-your-listing-type.md
The following offer types support usage-based pricing:
- SaaS offers support for Metered billing and per user (per seat) pricing. For more information about metered billing, see [Metered billing for SaaS using the commercial marketplace metering service](partner-center-portal/saas-metered-billing.md). - Azure virtual machine offers support for **Per core**, **Per core size**, and **Per market and core size** pricing. These options are priced per hour and billed monthly.
-When you create a transactable offer, it's important to understand the pricing, billing, invoicing, and payout considerations before you select an offer type and create your offer. To learn more, see [Commercial marketplace online stores](overview.md#commercial-marketplace-online-stores).
+When you create a transactable offer, it's important to understand the pricing, billing, invoicing, and payout considerations before you select an offer type and create your offer. To learn more, see [Commercial marketplace online stores](overview.md#commercial-marketplace-online-stores) and [Changing prices in active commercial marketplace offers](price-changes.md).
## Sample offer
Non-transactable offers earn benefits based on whether or not a free trial is at
## Next steps
-To choose an offer type to create, see [Publishing guide by offer type](publisher-guide-by-offer-type.md).
+- To choose an offer type, see [Publishing guide by offer type](publisher-guide-by-offer-type.md).
marketplace Isv Csp Reseller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-csp-reseller.md
The payout amount and agency fee that Microsoft charges is based on the price af
## Next steps - [Frequently Asked Questions](./isv-csp-faq.yml) about configuring ISV to CSP partner private offers
+- Video series (YouTube):
+ - [Private Offers for CSP Partners Overview](https://youtu.be/UYOsdTPiPnQ)
+ - [Private Offer Creation by ISVs for CSP Partners](https://youtu.be/rwp8eDfmYb8)
+ - [The CSP Partner Private Offer Purchase Process](https://youtu.be/_Zqphs6ZG6A)
marketplace Isv Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-customer.md
Creating a private offer for a customer has these prerequisites:
## Supported offer types
-Private Offers can be created for all transactable marketplace offer types: SaaS, Azure Virtual Machines, and Azure Applications.
+Private offers can be created for all transactable marketplace offer types: SaaS, Azure Virtual Machines, and Azure Applications.
> [!NOTE] > Discounts are applied on all custom meter dimensions your offer may use. They are only applied on the software charges set by you, not on the associated Azure infrastructure hardware charges. ## Private offers dashboard
-Create and manage private offers from the **Private Offers** dashboard in Partner Center's left-nav menu. This dashboard has two tabs:
+Create and manage private offers from the **Private offers** dashboard in Partner Center's left-nav menu. This dashboard has two tabs:
- **Customers** – Create a private offer for a customer in Azure Marketplace. This opens the Customers private offer dashboard, which lets you:
Create and manage private offers from the **Private Offers** dashboard in Partne
1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview). 2. Select the **Marketplace offers** workspace.
-3. Select **Private Offers** from the left-nav menu.
+3. Select **Private offers** from the left-nav menu.
4. Select the **Customers** tab.
-5. Select **+ New Private Offer**.
+5. Select **+ New private offer**.
6. Enter a private offer name. This is a descriptive name for use within Partner Center and will be visible to your customer in the Azure portal. ### Offer setup
Use this page to define private offer terms, notification contacts, and pricing
- Choose to provide a custom price or discount at either an offer level (all current and future plans under that offer will have a discount associated to it) or at a plan level (only the plan you selected will have a private price associated with it). - Choose up to 10 offers/plans and select **Add**. - Enter the discount percentage or configure the absolute price for each item in the pricing table.
+ - Absolute pricing lets you input a specific price for the private offer. You can only customize the price based on the same pricing model, billing term, and dimensions of the public offer. You can't change to a new pricing model or billing term or add dimensions.
> [!NOTE] > Only public offers/plans that are transactable in Microsoft Azure Marketplace appear in the selection menu.
When you're ready, select **Submit**. You'll be returned to the dashboard where
You can clone an existing offer and update its customer information to send it to different customers so you don't have to start from scratch. Or, update the offer/plan pricing to send additional discounts to the same customer.
-1. Select **Private Offers** from the left-nav menu.
+1. Select **Private offers** from the left-nav menu.
2. Select the **Customers** tab. 3. Check the box of the private offer to clone. 4. Select **Clone**.
Withdrawing a private offer means your customer will no longer be able to access
To withdraw a private offer:
-1. Select **Private Offers** from the left-nav menu.
+1. Select **Private offers** from the left-nav menu.
2. Select the **Customers** tab. 3. Check the box of the private offer to withdraw. 4. Select **Withdraw**.
Once you withdraw a private offer, your customer will no longer be able to acces
To delete a private offer in **Draft** status:
-1. Select **Private Offers** from the left-nav menu.
+1. Select **Private offers** from the left-nav menu.
2. Select the **Customers** tab. 3. Check the box of the private offer to delete. 4. Select **Delete**.
This action will permanently delete your private offer. You can only delete priv
To view the status of a private offer:
-1. Select **Private Offers** from the left-nav menu.
+1. Select **Private offers** from the left-nav menu.
2. Select the **Customer** tab. 3. Check the **Status** column.
The payout amount and agency fee that Microsoft charges is based on the private
## Next steps -- [Frequently Asked Questions](isv-customer-faq.yml) about configuring ISV to customer private offers
+- [Frequently Asked Questions](isv-customer-faq.yml) about configuring ISV to customer private offers
+- Video series (YouTube):
+ - [ISV to Customer Private Offer Creation](https://youtu.be/M_h8g5_5K90)
+ - [ISV to Customer Private Offer Acceptance](https://youtu.be/l2zhmDqtB4U)
+ - [ISV to Customer Private Offer Purchase Experience](https://youtu.be/vm1MNZhK028)
marketplace Marketplace Commercial Transaction Capabilities And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
Depending on the transaction option used, subscription charges are as follows:
> [!NOTE] > Offers that are billed according to consumption after a solution has been used are not eligible for refunds.
-Publishers who want to change the usage fees associated with an offer, should first remove the offer (or the specific plan within the offer) from the commercial marketplace. Removal should be done in accordance with the requirements of the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). Then the publisher can publish a new offer (or plan within an offer) that includes the new usage fees. For information, about removing an offer or plan, see [Stop distribution of an offer or plan](./update-existing-offer.md#stop-distribution-of-an-offer-or-plan).
+To change the prices associated with an active transactable offer, see [Changing prices in active commercial marketplace offers](price-changes.md).
### Determine offer type and pricing plan
marketplace Marketplace Geo Availability Currencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-geo-availability-currencies.md
For all paid offer types, you have the option of entering prices in USD or uploa
To adjust any price before you publish, just export the pricing spreadsheet, modify it, and upload it with changes. > [!NOTE]
-> After a price for a market in your plan is published, it can't be changed. To ensure that the prices are right before you publish them, export the pricing spreadsheet and review the prices in each market.
+> To ensure prices are right before you publish them, export the pricing spreadsheet and review the prices in each market. See [Changing prices in active commercial marketplace offers](price-changes.md) for details and limitations on changing prices in active transactable offers.
-The price of an offer is always shown to customers in their local currency. The price you select in Partner Center is converted to the local currency of customers according to the exchange rate at the time you saved the price in Partner Center. The price shown to customers in the online stores doesn't change, unless you republish your offer.
+The price of an offer is always shown to customers in their local currency. The price you select in Partner Center is converted to the local currency of customers according to the exchange rate at the time you saved the price in Partner Center. The price shown to customers in the online stores doesn't change unless you republish your offer.
Microsoft receives payments from customers in their local currency, and pays you in the currency you selected in Partner Center. Microsoft converts the customer local currency using the exchange rate of the day of purchase.
marketplace Orders Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/orders-dashboard.md
Previously updated : 02/14/2022 Last updated : 04/26/2022 # Orders dashboard in commercial marketplace analytics
-This article provides information on the Orders dashboard in Partner Center. This dashboard displays information about your offer - subscriptions, orders, pricing model including growth trends, presented in a graphical and downloadable format.
+This article provides information on the Orders dashboard in Partner Center. This dashboard displays information about your offer, such as subscriptions, orders, and pricing model, including growth trends, in a graphical and downloadable format.
>[!NOTE] > For detailed definitions of analytics terminology, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.yml). ## Orders dashboard
-The [Orders dashboard](https://go.microsoft.com/fwlink/?linkid=2165914) displays the current orders for all your offers, including software as a service (SaaS), with subscription-based billing model. You can view graphical representations of the following items:
+The [Orders dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/order) displays the current orders for all your offers, including software as a service (SaaS), with subscription-based billing model. You can view graphical representations of the following items:
- Subscription trend - Subscription per seat and site trend
The following sections describe how to use the Orders dashboard and how to read
### Month range
-You can find a month range selection at the top-right corner of each page. Customize the output of the **Orders** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range is six months.
+A month range selection is at the top-right corner of each page. Customize the output of the **Orders** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range is six months.
[ ![Illustrates the month filters on the Orders dashboard.](./media/orders-dashboard/order-workspace-filters.png) ](./media/orders-dashboard/order-workspace-filters.png#lightbox) ### Public and Private offer
-you can choose to view subscription and order details of public offers, private offers, or both by selecting the **Public offer** sub-tab, **Private offer** sub-tab, and the **All** sub-tab respectively.
+You can choose to view subscription and order details of public offers, private offers, or both by selecting the **Public offer** sub-tab, **Private offer** sub-tab, and the **All** sub-tab respectively.
[ ![Illustrates other filters on the Orders dashboard.](./media/orders-dashboard/offer-tabs.png) ](./media/orders-dashboard/offer-tabs.png#lightbox) > [!NOTE]
-> All metrics in the visualization widgets and export reports are as per the month range selected by the user.
+> - Private offers are different from Private plans. Purchases for Private plans are shown on the **All** tab and not on the **Private offers** tab.
+> - All metrics in the visualization widgets and export reports are as per the month range selected by the user.
### Subscription trend
-In this section, you will find the **Subscription** chart that shows the trend of your active and canceled subscriptions for the selected month range. Metrics and growth trends are represented by a line chart and will display the value for each month by hovering over the line on the chart. The percentage value below the subscription metrics in the widget represents the amount of growth or decline during the selected month range.
+This section has a **Subscription** chart that shows the trend of your active and canceled subscriptions for the selected month range. Metrics and growth trends are represented by a line chart; hover over the line to display the value for each month. The percentage value below the subscription metrics in the widget represents the amount of growth or decline during the selected month range.
There are two subscription counters: _Active_ and _Canceled_. - **Active** equals the number of subscriptions that are currently in use by customers for the selected month range. - **Canceled** equals the total number of subscriptions that were purchased but got canceled during the selected date range.
+Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
+ [![Illustrates the Orders widget on the Orders dashboard that shows the trend of active and canceled orders.](./media/orders-dashboard/orders-trend.png)](./media/orders-dashboard/orders-trend.png#lightbox)
-### Subscription by per seat and site trend
+### Seats, sites, and quantity
+
+This widget represents the following:
-The **Subscriptions by per seat and site-based** line chart represents the metric and trend of per-site (flat rate pricing) and per-seat (per user pricing) customer subscriptions for the selected month range.
+- Seats (per user pricing) for subscription-based offers
+- Sites (flat rate pricing) for subscription-based offers
+- Purchased quantity for VM software reservations
+
+Use the month range selections and filters to change the data in this widget.
+
+> [!NOTE]
+> Information on quantity is displayed only if a purchase was recorded against your published offer with a [VM software reservation](marketplace-virtual-machines.md#reservation-pricing-optional).
Each monthly data point on the line graph represents the total count of seats or sites. The widget includes data for only active subscriptions in the selected month range.
-**Tooltips:**
+Here are some things to keep in mind:
- Seats indicate seat count (asset quantity) of per user-based subscriptions-- Sites indicate site count (asset quantity) of flat rate based subscriptions-- Represents count and trend of change in total seats and site for the month range-- Growth % of these subscription seats and site count for the selected month range
+- Sites indicate site count (asset quantity) of flat rate-based subscriptions
+- Quantity indicates the quantity of purchased VM software reservations
+- Represents count and trend of total purchased seats, sites, and quantity
+- Growth % of seats and site count of subscription-based offers and quantity of VM software reservations
- Month over month trend of these orders for the selected month range
-[![Illustrates the Orders widget on the Orders dashboard that shows the orders per seat and site trend.](./media/orders-dashboard/seats-per-site.png)](./media/orders-dashboard/seats-per-site.png#lightbox)
+[![Illustrates the Orders widget on the Orders dashboard that shows the orders per seat and site trend.](./media/orders-dashboard/seats-sites-quantity.png)](./media/orders-dashboard/seats-sites-quantity.png#lightbox)
-Subscription offers can use one of two pricing models with each plan: either site-based (flat rate) or seat-based (per user).
+Subscription offers can use one of two pricing models with each plan: either site-based (flat rate) or seat-based (per user). Quantity is relevant for VM software reservations.
- **Flat rate**: Enable access to your offer with a single monthly or annual flat rate price. This is sometimes referred to as site-based pricing.
- **Per user**: Enable access to your offer with a price based on the number of users who can access the offer or occupy seats. With this usage-based model, you can set the minimum and maximum number of users supported by the plan. You can create multiple plans to configure different price points based on the number of users. These fields are optional during creation of an offer in Partner Center. If left unselected, the number of users will be interpreted as not having a limit (min of 1 and max of as many as your service can support). These fields can be edited as part of an update to your plan.
- **Metered billing**: Available on top of Flat rate pricing. With this pricing model, you can optionally define metered plans that use the marketplace metering service API to charge customers for usage that isn't covered by the flat rate. Higher consumption of metered units may lead to higher charges for the customer.
+- **Quantity**: Quantity of the VM software reservations purchased by customers.
+
+Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
-For more details on seat, site, and metered-based billing, see [How to plan a SaaS offer for the commercial marketplace](plan-saas-offer.md).
+For more details on seat, site, and metered-based billing, see [How to plan a SaaS offer for the commercial marketplace](plan-saas-offer.md) and [Changing prices in active commercial marketplace offers](price-changes.md).
### Orders by offers
-The Orders by offers widget shows information about offers and SKU. This chart shows the measures and trends of all offers. Orders are categorized under different statuses: New, Convert, Renewed, Canceled.
+The Orders by offers widget shows information about offers and SKUs (also known as plans). This widget shows the measures and trends of all purchased orders. Orders are categorized under different statuses: New, Convert, Renewed, Canceled.
-For different statuses, the _Orders_ sub-tab provides information about the count of orders, _Quantity_ sub-tab provides information about the number of seats added and/or removed by customers for existing subscription assets, and the _Revenue_ sub-tab provides information about the billed revenue of orders for the selected month range. Each order is categorized with one of below statuses:
+For different statuses, the _Orders_ tab provides information about the count of purchased orders, the _Quantity_ tab provides information about the number of seats added and/or removed by customers for existing active subscriptions, and the _Revenue_ tab provides information about the billed revenue of orders for the selected month range. Each order is categorized with one of the statuses below:
-- **New**: New orders purchased by customers for the selected month range.
+- **New**: Indicates new orders purchased by customers for the selected month range.
- **Convert**: This indicates orders for which customers purchased an offer after its trial period was over
-- **Renewed**: This indicates orders that were renewed for the selected month range. They do not include converted orders.
+- **Renewed**: This indicates orders for subscriptions that were renewed in the selected month range. These orders do not include converted orders.
- **Cancelled**: Orders that were canceled during the selected month range. Revenue of canceled orders is calculated using billed revenue of the last term before order cancellation.
- **Seats/Sites added**: Seats or Sites that were added by customers to existing subscription orders.
-- **Seats/Sites removed**: Seats or Sites that were removed by customers from existing subscription orders. Seats or sites removed due to orders cancellations are not taken into consideration.
+- **Seats/Sites removed**: Seats or Sites that were removed by customers from existing subscription orders. It doesn't include seats or sites that were removed due to orders cancellations.
**Additional information:**
For different statuses, the _Orders_ sub-tab provides information about the coun
- You can select specific offers in the legend to display only that offer and the associated SKUs in the graph.
- You can select any offer and a maximum of three SKUs of that offer to view the month-over-month trend for the offer, SKUs, and seats.
- Hovering over a slice in the graph displays the number of orders and percentage of that offer compared to your total number of orders across all offers.
-- The **orders by offers trend** displays month-by-month growth trends. The month column represents the number of orders by offer name. The line chart displays the growth percentage trend plotted alongside the bar graphs.
+- The **Orders by offers trend** displays month-by-month growth trends. The month column represents the number of orders by offer name. The line chart displays the growth percentage trend plotted alongside the bar graphs.
-### Order by offers - Orders subtab
+### Order by offers - Orders tab
-In this widget you can view information of All offers with different order statuses under the **Orders** subtab.
+In this widget you can view information of All offers with different order statuses under the **Orders** tab.
:::image type="content" source="./media/orders-dashboard/orders-by-offers.png" alt-text="Illustrates the Orders by Offers chart on the Orders dashboard.":::
-In this widget you can view information of **a selected offer and its SKU or offer plans** (from the drop-down) with different order statuses under the **Orders** subtab.
+In this widget you can view information of **a selected offer and its SKU or offer plans** (from the drop-down) with different order statuses under the **Orders** tab.
:::image type="content" source="./media/orders-dashboard/offer-trends-private.png" alt-text="Illustrates the Orders by Private Offers chart on the Orders dashboard.":::
-In this widget you can view information of **All offers** (from the drop-down) with seats/sites added or removed for existing subscriptions.
+This widget shows **All offers** (from the drop-down) with seats/sites added or removed for existing subscriptions.
:::image type="content" source="./media/orders-dashboard/orders-tab-all-offers.png" alt-text="Illustrates the quantity of Orders by Offers chart on the Orders tab of the Orders dashboard.":::
-In this widget you can view information of a selected offer and its SKU or offer plans (from the drop-down) with seats and sites added or removed for existing subscriptions.
+This widget shows a selected offer and its SKU or offer plans (from the drop-down) with seats and sites added or removed for existing subscriptions.
[ ![Shows the orders tab with information about the selected offer and its SKU or offer plans.](./media/orders-dashboard/orders-tab-selected-offers.png) ](./media/orders-dashboard/orders-tab-selected-offers.png#lightbox)
-[ ![Shows the orders tab with information about the selected offer with it's SKU and offer plans, if any.](./media/orders-dashboard/orders-tab-selected-offers-2.png) ](./media/orders-dashboard/orders-tab-selected-offers-2.png#lightbox)
- In this widget you can view information of **All offers** (from the drop-down) with billed revenue of orders purchased in the selected month range.
+[ ![Shows the orders tab with information about the selected offer with it's SKU and offer plans, if any.](./media/orders-dashboard/orders-by-offers-revenue.png) ](./media/orders-dashboard/orders-by-offers-revenue.png#lightbox)
-In this widget you can view information of **a selected offer and its offer plans** (from the drop-down) with billed revenue of orders purchased in the selected month range.
+This widget shows a **selected offer and its offer plans** (from the drop-down) with billed revenue of orders purchased in the selected month range.
[ ![Illustrates the Orders tab with the selected offer and its offer plans on the Orders dashboard.](./media/orders-dashboard/orders-tab-selected-offer-with-plans.png) ](./media/orders-dashboard/orders-tab-selected-offer-with-plans.png#lightbox)
In this widget you can view information of **a selected offer and its offer plan
### Orders by geography
-For the selected month range, the heatmap displays the total number of subscriptions, and the growth percentage of newly added subscriptions against a geography. The light to dark color on the map represents the low to high value of the subscriptions count. Select a record in the table to zoom in on a specific country or region.
+For the selected month range, the heatmap displays the total number of subscriptions, and the growth percentage of newly added subscriptions against a geography. The light to dark color on the map represents the low to high value of the subscriptions count. Select a record in the table to zoom in on a specific country or region.
-[![Illustrates the Geographical spread chart on the Orders dashboard.](./media/orders-dashboard/geographical-spread.png)](./media/orders-dashboard/views-across-countries.png#lightbox)
+[![Illustrates the Geographical spread chart on the Orders dashboard.](./media/orders-dashboard/geographical-spread.png)](./media/orders-dashboard/geographical-spread.png#lightbox)
Note the following:
Note the following:
### Orders details table
-The Order details table displays a numbered list of the 500 top orders sorted by date of acquisition.
+This table displays a numbered list of the 500 top orders sorted by date of acquisition.
- Each column in the grid is sortable.
- The data can be extracted to a .CSV or .TSV file if the count of the records is less than 500.
The Order details table displays a numbered list of the 500 top orders sorted by
| Marketplace License Type | Marketplace License Type | The billing method of the commercial marketplace offer. The possible values are:<ul><li>Billed through Azure</li><li>Bring Your Own License</li><li>Free</li><li>Microsoft as Reseller</li></ul> | MarketplaceLicenseType |
| SKU | SKU | The plan associated with the offer | SKU |
| Customer Country | Customer Country/Region | The country/region name provided by the customer. Country/region could be different than the country/region in a customer's Azure subscription. | CustomerCountry |
-| Is Preview SKU | Is Preview SKU | The value will let you know if you have tagged the SKU as "preview". Value will be "Yes" if the SKU has been tagged accordingly, and only Azure subscriptions authorized by you can deploy and use this image. Value will be "No" if the SKU has not been identified as "preview". | IsPreviewSKU |
+| Is Preview SKU | Is Preview SKU | The value will let you know if you tagged the SKU as "preview". Value will be "Yes" if the SKU has been tagged accordingly, and only Azure subscriptions authorized by you can deploy and use this image. Value will be "No" if the SKU has not been identified as "preview". | IsPreviewSKU |
| Asset ID | Asset ID | The unique identifier of the customer order for your commercial marketplace service. Virtual Machine usage-based offers are not associated with an order. | AssetId |
| Quantity | Quantity | Number of assets associated with the order ID for active orders | OrderQuantity |
| Cloud Instance Name | Cloud Instance Name | The Microsoft Cloud in which a VM deployment occurred. | CloudInstanceName |
| Is New Customer | Is New Customer | The value identifies whether a new customer acquired one or more of your offers for the first time. Value will be "Yes" if within the same calendar month for "Date Acquired". Value will be "No" if the customer has purchased any of your offers prior to the calendar month reported. | IsNewCustomer |
-| Order Status | Order Status | The status of a commercial marketplace order at the time the data was last refreshed. Possible values are: <ul><li>**Active**: Subscription asset is active and used by customer</li><li>**Cancelled**: Subscription of an asset is canceled by customer</li><li>**Expired**: Subscription for an offer expired in the system automatically post trial period</li><li>**Abandoned**: Indicates a system error during offer creation or subscription fulfillment was not completed<li><li>**Warning**: </li>Subscription order is still active but customer has defaulted in payments</ul> | OrderStatus |
+| Order Status | Order Status | The status of a commercial marketplace order at the time the data was last refreshed. Possible values are: <ul><li>**Active**: Subscription asset is active and used by customer</li><li>**Canceled**: Subscription of an asset is canceled by customer</li><li>**Expired**: Subscription for an offer expired in the system automatically post trial period</li><li>**Abandoned**: Indicates a system error during offer creation or subscription fulfillment was not completed</li><li>**Warning**: Subscription order is still active but customer has defaulted in payments</li></ul> | OrderStatus |
| Order Cancel Date | Order Cancel Date | The date the commercial marketplace order was canceled. | OrderCancelDate |
| Customer Company Name | Customer Company Name | The company name provided by the customer. Name could be different than the name in a customer's Azure subscription. | CustomerCompanyName |
| Order Purchase Date | Order Purchase Date | The date the commercial marketplace order was created. The format is yyyy-mm-dd. | OrderPurchaseDate |
The Order details table displays a numbered list of the 500 top orders sorted by
| Term End Date | TermEndDate | Indicates the end date of a term for an order. | TermEndDate |
| Not available | purchaseRecordId | The identifier of the purchase record for an order purchase | purchaseRecordId |
| Not available | purchaseRecordLineItemId | The identifier of the purchase record line item related to this order. | purchaseRecordLineItemId |
-| Billed Revenue USD | EstimatedCharges | The price the customer will be charged for all order units before taxation. This is calculated in customer transaction currency. In tax inclusive countries, this price includes the tax, otherwise it does not. | EstimatedCharges |
+| Billed Revenue USD | EstimatedCharges | The price the customer will be charged for all order units before taxation. This is calculated in customer transaction currency. In tax-inclusive countries, this price includes the tax, otherwise it does not. | EstimatedCharges |
| Not available | Currency | Billing currency for the order purchase | Currency |
| Not available | HasTrial | Represents whether an offer has trial period enabled | HasTrial |
| Is Trial | IsTrial | Represents whether an offer SKU is in trial period | IsTrial |
The Order details table displays a numbered list of the 500 top orders sorted by
| Trial End Date | Trial End Date | The date the trial period for this order will end or has ended. | TrialEndDate |
| Customer ID | Customer ID | The unique identifier assigned to a customer. A customer may have zero or more Azure Marketplace subscriptions. | CustomerID |
| Billing Account ID | Billing Account ID | The identifier of the account on which billing is generated. Map **Billing Account ID** to **customerID** to connect your Payout Transaction Report with the Customer, Order, and Usage Reports. | BillingAccountId |
-|||||
+| PlanId | PlanId | The display name of the plan entered when the offer was created in Partner Center. Note that PlanId was originally a numeric value. | PlanId |
### Orders page filters
-The **Orders** page filters are applied at the Orders page level. You can select one or multiple filters to render the chart for the criteria you choose to view and the data you want to see in 'Detailed orders data' grid / export. Filters are applied on the data extracted for the month range that you have selected on the top-right corner of the orders page.
+These filters are applied at the Orders page level. You can select one or multiple filters to render the chart for the criteria you choose to view and the data you want to see in the 'Detailed orders data' grid / export. Filters are applied on the data extracted for the month range that you have selected in the top-right corner of the Orders page.
> [!TIP]
> You can use the download icon in the upper-right corner of any widget to download the data. You can provide feedback on each of the widgets by clicking on the "thumbs up" or "thumbs down" icon.
The **Orders** page filters are applied at the Orders page level. You can select
## Next steps

- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
-- For graphs, trends, and values of aggregate data that summarize marketplace activity for your offer, see [Summary dashboard in commercial marketplace analytics](./summary-dashboard.md).
-- For information about your orders in a graphical and downloadable format, see [Orders Dashboard in commercial marketplace analytics](orders-dashboard.md).
-- For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage dashboard in commercial marketplace analytics](./usage-dashboard.md).
-- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
-- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.yml).
marketplace Saas Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/saas-metered-billing.md
Title: Metered billing for SaaS offers using the Microsoft commercial marketplace metering service
-description: Learn about flexible billing models for SaaS offers using the commercial marketplace metering service.
+ Title: Metered billing for SaaS offers in Partner Center
+description: Learn about flexible billing models using a metering service for SaaS offers in Partner Center.
Previously updated : 12/16/2021 Last updated : 03/29/2022
Once an offer is published with a dimension, the offer-level details for that di
Once a plan is published, the plan-level details can no longer be changed:

-- Price per unit in USD
- Monthly quantity included in base
- Annual quantity included in base
- Whether the dimension is enabled for the plan or not
marketplace Plans Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plans-pricing.md
Review your prices carefully before publishing, as there are some restrictions o
- After a plan is published, the pricing model can't be changed.
- After a billing term is published for a plan, it can't be removed later.
-- After a price for a market in your plan is published, it can't be changed later.
+- See [Changing prices in active commercial marketplace offers](price-changes.md) for details and limitations on changing prices in active transactable offers.
Prices set in United States Dollars (USD) are converted into the local currency of all selected markets using the current exchange rates when saved. Validate these prices before publishing by exporting the pricing spreadsheet and reviewing the price in each market you selected.
marketplace Price Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/price-changes.md
+
+ Title: Learn about changing the prices of offers in the commercial marketplace (Partner Center)
+description: Learn about changing the prices of offers in the commercial marketplace using Partner Center.
++++++ Last updated : 04/01/2022++
+# Changing prices in active commercial marketplace offers
+
+The price change feature allows publishers to change the prices of marketplace offers transacted through Microsoft. This article describes how to change the price of an offer.
+
+Publishers can update the prices of previously published plans and publish the price changes to the marketplace. Microsoft schedules the price changes to align with future billing cycles.
+
+If the price of an offer increases, existing customers of the offer receive an email notification before the increase becomes effective. The product listing page on Microsoft AppSource and Azure Marketplace will start displaying a notice of the upcoming increased price.
+
+Once the price change becomes effective, customers will be billed at the new price. If locked into a contract, they will continue to receive the contract price for the length of the contract term. Contract renewals will receive the new price.
+
+The price change experience for publishers and customers:
++
+### Feature benefits
+
+The price change feature provides the following benefits:
+
+- **Easy to change prices** – Publishers can increase or decrease the prices of offers without having to create a new plan with the new price and retire the previous plan, including offers published solely to the preview phase.
+- **Automatic billing of the new price** – Once the price change becomes effective, existing customers will automatically be billed the new price without any action needed on their part.
+- **Customer notifications** – Customers will be notified of price increases through email and on the product listing page of the marketplace.
+
+### Sample scenarios
+
+The price change feature supports the following scenarios:
+
+- Increase or decrease the [monthly/yearly flat fee](#changing-the-flat-fee-of-a-saas-or-azure-app-offer).
+- Increase or decrease the [per-user monthly/yearly SaaS fee](#changing-the-per-user-fee-of-a-saas-offer).
+- Increase or decrease the [price per unit of a meter dimension](#changing-the-meter-dimension-of-a-saas-or-azure-app-offer).
+- Increase or decrease the [price per core or per core size](#changing-the-core-price-of-a-virtual-machine).
+
+### Supported offer types
+
+The ability to change prices is available for both public and private plans of all offers transacted through Microsoft: Azure application (Managed App), Software as a service, and Virtual Machine.
+
+### Unsupported scenarios and limitations
+
+The price change feature does not support the following scenarios:
+
+- Price changes on hidden plans.
+- Price changes on plans available in Azure Government cloud.
+- Price increase and decrease on the same plan. To make both changes, first schedule the price decrease. Once it becomes effective, publish the price increase. See [Plan a price change](#plan-a-price-change) below.
+- Canceling and modifying a price change through Partner Center. To cancel a price update, contact [support](https://go.microsoft.com/fwlink/?linkid=2056405).
+- Changing prices from free or $0 to paid.
+- Changing prices via APIs.
+
+Price changes will go through full certification. To avoid delays in scheduling the price change, don't make other changes to the offer along with it.
+
+## Plan a price change
+
+When planning a price change, consider the following:
+
+| Consideration | Impact | Behavior |
+| | | |
+| Type of price change | This dictates how far into the future the price will be scheduled. | - Price decreases are scheduled for the first of the next month.<br> - Price increases are scheduled for the first of a future month, at least 90 days after the price change is published.<br> |
+| Offer type | This dictates when you need to publish the price change via Partner Center. | Price changes must be published before the cut-off times below to be scheduled for the next month (based on type of price change):<br> - Software as a service offer: Four days before the end of the month.<br> - Virtual machine offer: Six days before the end of the month.<br> - Azure application offer: 14 days before the end of the month.<br> |
+
+#### Examples
+
+For a price decrease to a Software as a service offer to take effect on the first of the next month, publish the price change at least four days before the end of the current month.
+
+For a price increase to a Software as a service offer to take effect on the first of a future month (at least 90 days out), publish the price change at least four days before the end of the current month.
+
+## Changing the flat fee of a SaaS or Azure app offer
+
+To update the monthly or yearly price of a SaaS or Azure app offer:
+
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview).
+2. Select the offer to update from the table of offers.
+3. Select the plan to update from the **Plan Overview** page.
+4. Select the plan's **Pricing and Availability** page.
+5. Scroll to the **Pricing** section of the page and locate the billing term and price.
+6. To change prices specific for a market:
+ 1. Export the prices using **Export pricing data**.
+ 2. Update the prices for each market in the downloaded spreadsheet and save it.
+ 3. Import the spreadsheet using **Import pricing data**.
+7. To change prices across all markets, edit the desired **billing term price** box.
+
+ > [!NOTE]
+ > If the plan is available in multiple markets, the new price for each market is calculated according to current exchange rates.
+
+8. Select **Save draft**.
+9. Confirm you understand the effects of changing the price by entering the **ID of the plan**.
+10. Verify the current and new prices on the **Compare** page, which is accessible from the top of the pricing and availability page.
+11. When you're ready to publish your updated offer pricing, select **Review and publish** from any page.
+12. Select **Publish** to submit the updated offer. Your offer will go through the standard [validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
+13. Review the offer preview once it's available and select **Go-live** to publish the new prices.
+
+Once publishing is complete, you will receive an email with the effective date of the new price.
+
+### How this price change affects customers
+
+Existing customers maintain their contract price for the length of the term. A contract renewal receives the new price in effect at that time.
+
+New customers are billed the price in effect when they purchase.
+
+## Changing the per-user fee of a SaaS offer
+
+To update the per-user monthly or yearly fee of a SaaS offer:
+
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview).
+2. Select the offer to update from the table of offers.
+3. Select the plan to update from the **Plan Overview** page.
+4. Select the plan's **Pricing and Availability** page.
+5. Scroll to the **Pricing** section of the page and locate the billing term and price.
+6. To change prices specific for a market:
+ 1. Export the prices using **Export pricing data**.
+ 2. Update the prices for each market in the downloaded spreadsheet and save it.
+ 3. Import the spreadsheet using **Import pricing data**.
+7. To change prices across all markets, edit the desired **billing term price** box.
+
+ > [!NOTE]
+ > If the plan is available in multiple markets, the new price for each market is calculated according to current exchange rates.
+
+8. Select **Save draft**.
+9. Confirm you understand the effects of changing the price by entering the **ID of the plan**.
+10. Verify the current and new prices on the **Compare** page, which is accessible from the top of the pricing and availability page.
+11. When you're ready to publish your updated offer pricing, select **Review and publish** from any page.
+12. Select **Publish** to submit the updated offer. Your offer will go through the standard [validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
+13. Review the offer preview once it's available and select **Go-live** to publish the new prices.
+
+Once publishing is complete, you will receive an email with the effective date of the new price.
+
+### How this price change affects customers
+
+Existing customers maintain their contract price for the length of the term. If a customer adds or removes a user while in the contract, the new seat number will use the contract price. A contract renewal receives the new price in effect at that time.
+
+New customers are billed the price in effect when they purchase.
+
+## Changing the meter dimension of a SaaS or Azure app offer
+
+To update the price per unit of a meter dimension of a SaaS or Azure app offer:
+
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview).
+2. Select the offer to update from the table of offers.
+3. Select the plan to update from the **Plan Overview** page.
+4. Select the plan's **Pricing and Availability** page.
+5. Scroll to the **Marketplace Metering Service dimension** section of the page.
+6. To change prices specific for a market:
+ 1. Export the prices using **Export pricing data**.
+ 2. Locate the sheet for the dimension to update in the downloaded spreadsheet; it will be labeled with the dimension ID.
+ 3. Update the price per unit for each market and save it.
+ 4. Import the spreadsheet using **Import pricing data**.
+1. To change prices across all markets:
+ 1. Locate the dimension to update.
+ 1. Edit the **Price per unit in USD** box.
+
+ > [!NOTE]
+ > If the plan is available in multiple markets, the new price for each market is calculated according to current exchange rates.
+
+8. Select **Save draft**.
+9. Confirm you understand the effects of changing the price by entering the **ID of the plan**.
+10. Verify the current and new prices on the **Compare** page, which is accessible from the top of the pricing and availability page.
+11. When you're ready to publish your updated offer pricing, select **Review and publish** from any page.
+12. Select **Publish** to submit the updated offer. Your offer will go through the standard [validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
+13. Review the offer preview once it's available and select **Go-live** to publish the new prices.
+
+Once publishing is complete, you will receive an email with the effective date of the new price.
+
+### How this price change affects customers
+
+Customers are billed the new price for overage usage if it is consumed after the new price is in effect.
+
+## Changing the core price of a virtual machine
+
+To update the price per core or per core size of a VM offer:
+
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview).
+2. Select the offer to update from the table of offers.
+3. Select the plan to update from the **Plan Overview** page.
+4. Select the plan's **Pricing and Availability** page.
+5. Scroll to the **Pricing** section of the page.
+6. To change prices specific for a market:
+
+ **Option 1**: You do the currency conversion:
+ 1. Under **Select a price entry**, select the **Per market and core size** option.
+ 2. Under **Select a market to customize prices**, select the **market** you want to change the price for.
+ 3. Update the price per hour for each core size.
+ 4. Repeat if you want to update prices for several markets.
+
+ **Option 2**: Export to a spreadsheet:
+ 1. Export the prices using **Export pricing data**.
+ 2. Update the market and core size prices in the downloaded spreadsheet and save it.
+ 3. Import the spreadsheet using **Import pricing data**.
+
+7. To change prices across all markets:
+
+ > [!NOTE]
+ > If the plan is available in multiple markets, the new price for each market is calculated according to current exchange rates.
+
+ 1. **Per core**: Edit the price per core in the **USD/hour** box.
+ 2. **Per core size**: Edit each core size in the **Price per hour in USD** box.
+
+8. Select **Save draft**.
+9. Confirm you understand the effects of changing the price by entering the **ID of the plan**.
+10. Verify the current and new prices on the **Compare** page, which is accessible from the top of the pricing and availability page.
+11. When you're ready to publish your updated offer pricing, select **Review and publish** from any page.
+12. Select **Publish** to submit the updated offer. Your offer will go through the standard [validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
+13. Review the offer preview once it's available and select **Go-live** to publish the new prices.
+
+Once publishing is complete, you will receive an email with the effective date of the new price.
+
+### How this price change affects customers
+
+Customers are billed the new price for consumption of the resource that happens after the new price is in effect.
+
+## Canceling or modifying a price change
+
+To modify an already scheduled price change, request the cancellation by submitting a [support request](https://partner.microsoft.com/support/?stage=1) that includes the Plan ID, price, and the market (if the change was market-specific).
+
+If the price change was an increase, we will email customers a second time to inform them the price increase has been canceled.
+
+After the price change is canceled, follow the steps in the appropriate part of this document to schedule a new price change with the needed modifications.
+
+## Next steps
+
+- Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
marketplace Summary Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/summary-dashboard.md
Previously updated : 09/27/2021 Last updated : 04/26/2022 # Summary dashboard in commercial marketplace analytics
This article provides information on the Summary dashboard in Partner Center. Th
## Summary dashboard
-The [Summary dashboard](https://go.microsoft.com/fwlink/?linkid=2165765) presents an overview of Azure Marketplace and Microsoft AppSource offersΓÇÖ business performance. The dashboard provides a broad overview of the following:
+The [Summary dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/summary) presents an overview of Azure Marketplace and Microsoft AppSource offers' business performance. The dashboard provides a broad overview of the following:
- Customers' orders
- Customers
The [Summary dashboard](https://go.microsoft.com/fwlink/?linkid=2165765) present
1. In the left menu, select **Summary**.
+ :::image type="content" source="./media/summary-dashboard/summary-left-nav.png" alt-text="Screenshot of the link for the Summary dashboard in the left nav.":::
## Elements of the Summary dashboard

The following sections describe how to use the summary dashboard and how to read the data.
-### Month range
+### Download
+
+To download the data for this dashboard, select **Download as PDF** from the **Download** list.
++
+Alternatively, you can go to the [Downloads dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) to download the report.
+
+### Share
+
+To share the dashboard widget data via email, select **Share** in the top menu.
++
+In the dialog box that appears, provide the recipient email address and message. To share the report URL, select the **Copy link** or **Share to teams** button. To take a snapshot of the charts data, select the **Copy as image** button.
+
+### What's new
+
+To learn about changes and enhancements that were made to the dashboard, select **What's new**. The _What's new_ side panel appears.
++
+### About data refresh
+
+To view the data source and the data refresh details, such as the frequency of the data refresh, select the ellipsis (three dots) and then select **Data refresh details**.
-You can find a month range selection at the top-right corner of each page. Customize the output of the **Summary** page graphs by selecting a month range based on the past specified number of months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
-[ ![Illustrates the monthly range options on the summary dashboard.](./media/summary-dashboard/summary-dashboard-filters.png) ](./media/summary-dashboard/summary-dashboard-filters.png#lightbox)
+### Got feedback?
+
+To provide instant feedback about the report/dashboard, select the ellipsis (three dots), and then select the Got feedback? link.
++
+Provide your feedback in the dialog box that appears.
> [!NOTE]
-> All metrics in the visualization widgets and export reports honor the computation period selected by the user.
+> A screenshot is automatically sent to us with your feedback.
+
+### Month range
+
+You can find a month range selection at the top-right corner of each page. Customize the output of the **Summary** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range is six months.
+
+[ ![Illustrates the monthly range options on the summary dashboard.](./media/summary-dashboard/time-range.png) ](./media/summary-dashboard/time-range.png#lightbox)
### Orders widget

The Orders widget on the **Summary** dashboard displays the current orders for all your transact-based offers. The Orders widget displays a count and trend of all purchased orders (excluding canceled orders) for the selected computation period. The percentage value **Orders** represents the amount of growth during the selected computation period.
-[![Illustrates the Orders widget on the summary dashboard.](./media/summary-dashboard/orders-widget.png)](./media/summary-dashboard/orders-widget.png#lightbox)
+Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
-You can also go to the Orders report by selecting the **Orders Dashboard** link in the lower-left corner of the widget.
+[![Illustrates the Orders widget on the summary dashboard.](./media/summary-dashboard/orders-widget-ellipsis.png)](./media/summary-dashboard/orders-widget-ellipsis.png#lightbox)
+
+You can also go to the Orders report by selecting the [Orders Dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/order) link in the lower-left corner of the widget.
### Customers widget

The **Customers** widget of the **Summary** dashboard displays the total number of customers who have acquired your offers for the selected computation period. The Customers widget displays a count and trend of the total number of active (including new and existing) customers (excluding churned customers) for the selected computation period. The percentage value under **Customers** represents the amount of growth during the selected computation period.
-[![Illustrates the customers widget on the summary dashboard.](./media/summary-dashboard/customers-widget.png)](./media/summary-dashboard/customers-widget.png#lightbox)
+Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
+
+[![Illustrates the customers widget on the summary dashboard.](./media/summary-dashboard/customers-widget-ellipsis.png)](./media/summary-dashboard/customers-widget-ellipsis.png#lightbox)
-You can also go to the detailed Customers report by selecting the **Customers dashboard** link in the lower-left corner of the widget.
+You can also go to the detailed Customers report by selecting the [Customer retention dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/customerretention) link in the lower-left corner of the widget.
### Usage widget
The usage summary table displays the customer usage hours for all offers they ha
The percentage value below the total usage hours represents the amount of growth in usage hours during the selected computation period.
-[![Illustrates the usage widget on the summary dashboard.](./media/summary-dashboard/usage-widget.png)](./media/summary-dashboard/usage-widget.png#lightbox)
+Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
+
+[![Illustrates the usage widget on the summary dashboard.](./media/summary-dashboard/usage-widget-ellipsis.png)](./media/summary-dashboard/usage-widget-ellipsis.png#lightbox)
-You can also go to the Usage report by selecting the **Usage dashboard** link in the lower-left corner of the widget.
+You can also go to the Usage report by selecting the [Usage dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/usage) link in the lower-left corner of the widget.
### Marketplace insights

Marketplace Insights show the number of users who have visited your offers' pages in Azure Marketplace and AppSource. **Page visit count** shows a summary of commercial marketplace web analytics that enables publishers to measure customer engagement for their respective product detail pages listed on the commercial marketplace online stores: Microsoft AppSource and Azure Marketplace. This widget displays a count and trend of total page visits during the selected computation period.
-[![Illustrates the Page visit count widget on the summary dashboard.](./media/summary-dashboard/page-visit-count.png)](./media/summary-dashboard/page-visit-count.png#lightbox)
+Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
-You can also go to the Marketplace Insights report by selecting the **Marketplace insights dashboard** link in the lower-left corner of the widget.
+[![Illustrates the Page visit count widget on the summary dashboard.](./media/summary-dashboard/marketplace-insights-elipsis.png)](./media/summary-dashboard/marketplace-insights-elipsis.png#lightbox)
+
+You can also go to the Marketplace Insights report by selecting the [Marketplace insights dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/marketplaceinsights) link in the lower-left corner of the widget.
### Geographical spread
-For the selected computation period, the heatmap displays the total number of customers, orders, and normalized usage hours against geography dimension.
+For the selected computation period, the geographical spread heatmap displays the total number of customers, orders, and normalized usage hours against the geography dimension.
+
+Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
Note the following:
Note the following:
- The heatmap has a supplementary grid to view the details of customer count, order count, and normalized usage hours for the specific location.
- You can search and select a country/region in the grid to zoom to the location in the map. Revert to the original view by selecting the **Home** button in the map.
-> [!TIP]
-> You can use the download icon in the upper-right corner of any widget to download the data. You can provide feedback on each of the widgets by selecting the "thumbs up" or "thumbs down" icon.
## Next steps

- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
-- For information about your orders in a graphical and downloadable format, see [Orders Dashboard in commercial marketplace analytics](orders-dashboard.md).
-- For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage Dashboard in commercial marketplace analytics](usage-dashboard.md).
-- For detailed information about your customers, including growth trends, see [Customer Dashboard in commercial marketplace analytics](customer-dashboard.md).
-- For a list of your download requests over the last 30 days, see [Downloads Dashboard in commercial marketplace analytics](downloads-dashboard.md).
-- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.yml).
marketplace Update Existing Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/update-existing-offer.md
Complete these steps to update available images in a plan for an offer that you'
## Offer a virtual machine plan at a new price
-After a virtual machine plan is published, its price can't be changed. To offer the same plan at a different price, you must hide the plan and create a new one with the updated price. First, hide the plan with the price you want to change:
+See [Changing prices in active commercial marketplace offers](price-changes.md) for details and limitations on changing prices in active transactable offers.
+
+To change the price of a plan that is hidden or in Azure Government, hide the old plan and create a new one with the updated price.
+
+To hide the plan with the old price:
1. With the **Offer overview** page for your existing offer open, choose the plan that you want to change. If the plan isn't accessible from the **Plan overview** list, select **See all plans**.
1. Select the **Hide plan** checkbox. Save the draft before you continue.
-Now that you have hidden the plan with the old price, create a copy of that plan with the updated price:
+Now create a copy of that plan but with the updated price:
1. In Partner Center, go back to **Plan overview**.
-2. Select **Create new plan**. Enter a **Plan ID** and a **Plan name**, then select **Create**.
+1. Select **Create new plan**. Enter a **Plan ID** and a **Plan name**, then select **Create**.
1. To reuse the technical configuration from the plan you've hidden, select the **Reuse technical configuration** checkbox. Read [Create plans for a VM offer](azure-vm-plan-overview.md) to learn more.

    > [!IMPORTANT]
    > If you select **This plan reuses technical configuration from another plan**, you won't be able to stop distribution of the parent plan later. Don't use this option if you want to stop distribution of the parent plan.
-3. Complete all the required sections for the new plan, including the new price.
+1. Complete all the required sections for the new plan, including the new price.
1. Select **Save draft**.
1. After you've completed all the required sections for the new plan, select **Review and publish**. This will submit your offer for review and publication. Read [Review and publish an offer to the commercial marketplace](review-publish-offer.md) for more details.
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/what-is-new.md
Last updated 04/18/2022
# What's new in the Microsoft commercial marketplace
-Learn about important updates in the commercial marketplace program of Partner Center. This page is updated monthly, so be sure to check back!
+Learn about important updates in the commercial marketplace program of Partner Center. This page is updated regularly, so be sure to check back!
## New features

| Category | Description | Date |
| | | |
| Offers | ISVs can now offer custom prices, terms, conditions, and pricing for a specific customer through private offers. See [ISV to customer private offers](isv-customer.md) and the [FAQ](isv-customer-faq.yml). | 2022-04-06 |
+| Offers | Publishers can now [change transactable offer and plan pricing](price-changes.md) without having to discontinue an offer and recreate it with new pricing (also see [this FAQ](price-changes-faq.yml)). | 2022-03-30 |
| Offers | An ISV can now specify time-bound margins for CSP partners to incentivize them to sell it to their customers. When their partner makes a sale to a customer, Microsoft will pay the ISV the wholesale price. See [ISV to CSP Partner private offers](./isv-csp-reseller.md) and [the FAQs](./isv-csp-faq.yml). | 2022-02-15 |
| Analytics | We added a new [Customer Retention Dashboard](./customer-retention-dashboard.md) that provides vital insights into customer retention and engagement. See the [FAQ article](./analytics-faq.yml). | 2022-02-15 |
| Analytics | We added a Quality of Service (QoS) report query to the [List of system queries](./analytics-system-queries.md) used in the Create Report API. | 2022-01-27 |
Learn about important updates in the commercial marketplace program of Partner C
## Documentation updates

| Category | Description | Date |
-| | - | - |
+| | | |
| Policy | We've updated the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). For change history, see [Change history for Microsoft Publisher Agreement version 8.0 – May 2022 update](/legal/marketplace/mpa-change-history-may-2022). | 2022-04-15 |
| Offers | Added new articles to lead you step-by-step through the process of [testing a SaaS offer](test-saas-overview.md). | 2022-03-30 |
| Payouts | We updated the payment schedule for [Payout schedules and processes](/partner-center/payout-policy-details). | 2022-01-19 |
| Analytics | Added questions and answers to the [Commercial marketplace analytics FAQ](./analytics-faq.yml), such as enrolling in the commercial marketplace, where to create a marketplace offer, getting started with programmatic access to commercial marketplace analytics reports, and more. | 2022-01-07 |
| Offers | Added a new article, [Troubleshooting Private Plans in the commercial marketplace](azure-private-plan-troubleshooting.md). | 2021-12-13 |
-| Offers | We have updated the names of [Dynamics 365](./marketplace-dynamics-365.md#licensing-options) offer types: <br><br> - Dynamics 365 for Customer Engagement &amp; PowerApps is now **Dynamics 365 apps on Dataverse and Power Apps** <br> - Dynamics 365 for operations is now **Dynamics 365 Operations Apps** <br> - Dynamics 365 business central is now **Dynamics 365 Business Central** | 2021-12-03 |
+| Offers | We have updated the names of [Dynamics 365](./marketplace-dynamics-365.md#licensing-options) offer types:<br><br>-Dynamics 365 for Customer Engagement &amp; PowerApps is now **Dynamics 365 apps on Dataverse and Power Apps**<br>- Dynamics 365 for operations is now **Dynamics 365 Operations Apps**<br>- Dynamics 365 business central is now **Dynamics 365 Business Central** | 2021-12-03 |
| Policy | We've created an [FAQ topic](/legal/marketplace/mpa-faq) to answer publisher questions about the Microsoft Publisher Agreement. | 2021-09-27 |
| Policy | We've updated the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). For change history, see [Microsoft Publisher Agreement Version 8.0 – October 2021 Update](/legal/marketplace/mpa-change-history-oct-2021). | 2021-09-14 |
| Policy | Updated [certification](/legal/marketplace/certification-policies) policy for September; see [change history](/legal/marketplace/offer-policies-change-history). | 2021-09-10 |
mysql Howto Troubleshoot High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-high-cpu-utilization.md
Previously updated : 4/22/2022 Last updated : 4/27/2022 # Troubleshoot high CPU utilization in Azure Database for MySQL
An analysis of this information, by session, is listed in the following table.
Note that if a session is reported as idle, it's no longer executing any statements. At this point, the session has completed any prior work and is waiting for new statements from the client. However, idle sessions are still responsible for some CPU consumption and memory usage.

## Understanding thread states

Transactions that contribute to higher CPU utilization during execution can have threads in various states, as described in the following sections. Use this information to better understand the query lifecycle and various thread states.
This state usually means the open table operation is consuming a long time. Usua
### Sending data
-While this state can mean that the thread is sending data through the network, it can also indicate that the query is reading data from the disk or memory. This state can be caused by a sequential table scan. You should check the values of the innodb_buffer_pool_reads and innodb_buffer_pool_read_requests to determine whether a large number of pages are being served from the disk into the memory.
+While this state can mean that the thread is sending data through the network, it can also indicate that the query is reading data from the disk or memory. This state can be caused by a sequential table scan. You should check the values of the innodb_buffer_pool_reads and innodb_buffer_pool_read_requests to determine whether a large number of pages are being served from the disk into the memory. For more information, see [Troubleshoot low memory issues in Azure Database for MySQL](howto-troubleshoot-low-memory-issues.md).
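As a quick way to compare the two counters (a minimal check; the status variable names are standard MySQL, but what counts as "a large number" depends on your workload and buffer pool size), you can query the server status directly:

```sql
-- Innodb_buffer_pool_read_requests counts logical reads;
-- Innodb_buffer_pool_reads counts reads that had to go to disk.
-- A persistently high reads/read_requests ratio suggests the
-- buffer pool is too small for the working set.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
```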
### Updating
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md
The table in this article provides information on the Peering Service connectivi
| [NAP Africa](https://www.napafrica.net/technical/microsoft-azure-peering-service/) |Africa|
| [Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/wireline/ip-plus.html) |Europe|
| [CMC Networks](https://www.cmcnetworks.net/products/microsoft-azure-peering-services.html) |Africa|
-| [MainOne](https://www.mainone.net/connectivity-services/microsoft-azure-peering-service/) |Africa|
+| [MainOne](https://www.mainone.net/connectivity-services/) |Africa|
| [BICS](https://www.bics.com/services/capacity-solutions/cloud-connect/microsoft-azure-cloud-connect/) |Europe|
| [Telstra International](https://www.telstra.com.sg/en/products/global-networks/global-internet/global-internet-direct) |Asia |
| [Atman](https://www.atman.pl/en/atman-internet-maps/) |Europe|
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
In Microsoft Purview, there are few options to use as authentication method to s
- Service Principal (using [Key Vault](#create-azure-key-vaults-connections-in-your-microsoft-purview-account))
- Consumer Key (using [Key Vault](#create-azure-key-vaults-connections-in-your-microsoft-purview-account))
-Before creating any credentials, consider your data source types and networking requirements to decide which authentication method you need for your scenario. Review the following decision tree to find which credential is most suitable:
-
- :::image type="content" source="media/manage-credentials/manage-credentials-decision-tree-small.png" alt-text="Manage credentials decision tree" lightbox="media/manage-credentials/manage-credentials-decision-tree.png":::
-
+Before creating any credentials, consider your data source types and networking requirements to decide which authentication method you need for your scenario.
## Use Microsoft Purview system-assigned managed identity to set up scans

If you're using the Microsoft Purview system-assigned managed identity (SAMI) to set up scans, you won't need to create a credential and link your key vault to Microsoft Purview to store them. For detailed instructions on adding the Microsoft Purview SAMI to have access to scan your data sources, refer to the data source-specific authentication sections below:
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Previously updated : 11/10/2021 Last updated : 04/26/2022 # Connect to Azure SQL Database in Microsoft Purview
When setting up scan, you can further scope the scan after providing the databas
* Column level lineage is currently not supported in the lineage tab. However, the columnMapping attribute in the properties tab of Azure SQL Stored Procedure Run captures column lineage in plain text.
* Stored procedures with dynamic SQL, running from remote data integration tools like Azure Data Factory, are currently not supported.
* Data lineage extraction is currently not supported for Functions, Triggers.
-* Lineage extraction scan is scheduled and defaulted to run every six hours. Frequency can't be changed
-* If sql views are referenced in stored procedures, they're captured as sql tables currently
+* Lineage extraction scan is scheduled and defaulted to run every six hours. Frequency can't be changed.
+* If sql views are referenced in stored procedures, they're captured as sql tables currently.
+* Lineage extraction is currently not supported if Azure SQL Server is configured behind a private endpoint.
## Prerequisites
The following options are supported:
* **Service Principal** - A service principal is an application that can be assigned permissions like any other group or user, without being associated directly with a person. Their authentication has an expiration date, and so can be useful for temporary projects. For more information, see the [service principal documentation](/active-directory/develop/app-objects-and-service-principals).
-* **SQL Authentication** - connect to the SQL database with a username and password. For more information about SQL Authentication, you can [follow the SQL authentication documenation](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication).If you need to create a login, follow this [guide to query an Azure SQL database](../azure-sql/database/connect-query-portal.md), and use [this guide to create a login using T-SQL.](/sql/t-sql/statements/create-login-transact-sql)
+* **SQL Authentication** - connect to the SQL database with a username and password. For more information about SQL Authentication, you can [follow the SQL authentication documentation](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). If you need to create a login, follow this [guide to query an Azure SQL database](../azure-sql/database/connect-query-portal.md), and use [this guide to create a login using T-SQL](/sql/t-sql/statements/create-login-transact-sql).
> [!NOTE]
> Be sure to select the Azure SQL Database option on the page.
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Click the resource provider name in the following table to see the list of opera
| [Microsoft.PolicyInsights](#microsoftpolicyinsights) |
| [Microsoft.Portal](#microsoftportal) |
| [Microsoft.RecoveryServices](#microsoftrecoveryservices) |
+| [Microsoft.ResourceGraph](#microsoftresourcegraph) |
| [Microsoft.Resources](#microsoftresources) | | [Microsoft.Solutions](#microsoftsolutions) | | [Microsoft.Subscription](#microsoftsubscription) |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/usages/read | Read any Vault Usages | > | Microsoft.RecoveryServices/Vaults/vaultTokens/read | The Vault Token operation can be used to get Vault Token for vault level backend operations. |
+### Microsoft.ResourceGraph
+
+Azure service: [Azure Resource Graph](../governance/resource-graph/index.yml)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.ResourceGraph/operations/read | Gets the list of supported operations |
+> | Microsoft.ResourceGraph/queries/read | Gets the specified graph query |
+> | Microsoft.ResourceGraph/queries/delete | Deletes the specified graph query |
+> | Microsoft.ResourceGraph/queries/write | Creates/Updates the specified graph query |
+> | Microsoft.ResourceGraph/resourceChangeDetails/read | Gets the details of the specified resource change |
+> | Microsoft.ResourceGraph/resourceChanges/read | Lists changes to a resource for a given time interval |
+> | Microsoft.ResourceGraph/resources/read | Submits a query on resources within specified subscriptions, management groups or tenant scope |
+> | Microsoft.ResourceGraph/resourcesHistory/read | List all snapshots of resources history within specified subscriptions, management groups or tenant scope |
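+
+As a rough illustration of what the `Microsoft.ResourceGraph/resources/read` action corresponds to in practice, the following C# sketch submits a query through the Resource Graph REST endpoint. This is a minimal sketch only: the subscription ID and access token are placeholders, and the query text is just an example.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Text;
+
+// Placeholders: supply a real subscription ID and an Azure AD access token
+// for https://management.azure.com/ (for example, obtained via Azure.Identity).
+string subscriptionId = "<SUBSCRIPTION-ID>";
+string accessToken = "<ACCESS-TOKEN>";
+
+using var http = new HttpClient();
+http.DefaultRequestHeaders.Authorization =
+    new AuthenticationHeaderValue("Bearer", accessToken);
+
+// Calling this endpoint exercises the Microsoft.ResourceGraph/resources/read action.
+string url = "https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01";
+string body = "{ \"subscriptions\": [\"" + subscriptionId + "\"], " +
+              "\"query\": \"Resources | project name, type | limit 5\" }";
+
+HttpResponseMessage response = await http.PostAsync(
+    url, new StringContent(body, Encoding.UTF8, "application/json"));
+Console.WriteLine(await response.Content.ReadAsStringAsync());
+```
+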
+ ### Microsoft.Resources Azure service: [Azure Resource Manager](../azure-resource-manager/index.yml)
search Search Dotnet Sdk Migration Version 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-11.md
ms.devlang: csharp Previously updated : 03/21/2022 Last updated : 04/25/2022
Field definitions are streamlined: [SearchableField](/dotnet/api/azure.search.do
||--| | [IndexAction](/dotnet/api/microsoft.azure.search.models.indexaction) | [IndexDocumentsAction](/dotnet/api/azure.search.documents.models.indexdocumentsaction) | | [IndexBatch](/dotnet/api/microsoft.azure.search.models.indexbatch) | [IndexDocumentsBatch](/dotnet/api/azure.search.documents.models.indexdocumentsbatch) |
+| [IndexBatchException.FindFailedActionsToRetry()](/dotnet/api/microsoft.azure.search.indexbatchexception.findfailedactionstoretry) | [SearchIndexingBufferedSender](/dotnet/api/azure.search.documents.searchindexingbufferedsender-1) |
### Query requests and responses
search Search Howto Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-dotnet-sdk.md
ms.devlang: csharp Previously updated : 12/10/2021 Last updated : 04/26/2022 # How to use Azure.Search.Documents in a C# .NET Application
This article explains how to create and manage search objects using C# and the [
## About version 11
-Azure SDK for .NET includes a [**Azure.Search.Documents**](/dotnet/api/overview/azure/search) client library from the Azure SDK team that is functionally equivalent to the previous client library, [Microsoft.Azure.Search](/dotnet/api/overview/azure/search/client10), but utilizes common approaches and conventions where applicable. Some examples include [`AzureKeyCredential`](/dotnet/api/azure.azurekeycredential) key authentication, and [System.Text.Json.Serialization](/dotnet/api/system.text.json.serialization) for JSON serialization.
+Azure SDK for .NET includes an [**Azure.Search.Documents**](/dotnet/api/overview/azure/search) client library from the Azure SDK team that is functionally equivalent to the previous client library, [Microsoft.Azure.Search](/dotnet/api/overview/azure/search/client10). Version 11 is more consistent with other Azure client libraries in its programming conventions. Some examples include [`AzureKeyCredential`](/dotnet/api/azure.azurekeycredential) key authentication, and [System.Text.Json.Serialization](/dotnet/api/system.text.json.serialization) for JSON serialization.
As with previous versions, you can use this library to: + Create and manage search indexes, data sources, indexers, skillsets, and synonym maps + Load and manage search documents in an index + Execute queries, all without having to deal with the details of HTTP and JSON + Invoke and manage AI enrichment (skillsets) and outputs. The library is distributed as a single [Azure.Search.Documents NuGet package](https://www.nuget.org/packages/Azure.Search.Documents/), which includes all APIs used for programmatic access to a search service.
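
To see those conventions in practice, here's a minimal sketch of creating the version 11 clients with `AzureKeyCredential`. The service endpoint, admin key, and the "hotels" index name are placeholders:

```csharp
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Indexes;

// Placeholders: replace with your search service endpoint and admin API key.
Uri endpoint = new Uri("https://<YOUR-SERVICE-NAME>.search.windows.net");
AzureKeyCredential credential = new AzureKeyCredential("<YOUR-ADMIN-API-KEY>");

// SearchIndexClient manages indexes; SearchClient loads and queries documents.
SearchIndexClient indexClient = new SearchIndexClient(endpoint, credential);
SearchClient searchClient = new SearchClient(endpoint, "hotels", credential);
```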
The `JsonIgnore` attribute on this property tells the `FieldBuilder` to not seri
## Load an index
-The next step in `Main` populates the newly-created "hotels" index. This index population is done in the following method:
+The next step in `Main` populates the newly created "hotels" index. This index population is done in the following method:
(Some code replaced with "..." for illustration purposes. See the full sample solution for the full data population code.) ```csharp
private static void UploadDocuments(SearchClient searchClient)
Thread.Sleep(2000); ```
-This method has four parts. The first creates an array of 3 `Hotel` objects each with 3 `Room` objects that will serve as our input data to upload to the index. This data is hard-coded for simplicity. In an actual application, data will likely come from an external data source such as a SQL database.
+This method has four parts. The first creates an array of three `Hotel` objects, each with three `Room` objects, that will serve as our input data to upload to the index. This data is hard-coded for simplicity. In an actual application, data will likely come from an external data source such as an SQL database.
The second part creates an [`IndexDocumentsBatch`](/dotnet/api/azure.search.documents.models.indexdocumentsbatch) containing the documents. You specify the operation you want to apply to the batch at the time you create it, in this case by calling [`IndexDocumentsAction.Upload`](/dotnet/api/azure.search.documents.models.indexdocumentsaction.upload). The batch is then uploaded to the Azure Cognitive Search index by the [`IndexDocuments`](/dotnet/api/azure.search.documents.searchclient.indexdocuments) method.
The second part creates an [`IndexDocumentsBatch`](/dotnet/api/azure.search.docu
> In this example, we are just uploading documents. If you wanted to merge changes into existing documents or delete documents, you could create batches by calling `IndexDocumentsAction.Merge`, `IndexDocumentsAction.MergeOrUpload`, or `IndexDocumentsAction.Delete` instead. You can also mix different operations in a single batch by calling `IndexDocumentsBatch.Create`, which takes a collection of `IndexDocumentsAction` objects, each of which tells Azure Cognitive Search to perform a particular operation on a document. You can create each `IndexDocumentsAction` with its own operation by calling the corresponding method such as `IndexDocumentsAction.Merge`, `IndexDocumentsAction.Upload`, and so on. >
-The third part of this method is a catch block that handles an important error case for indexing. If your search service fails to index some of the documents in the batch, an `IndexBatchException` is thrown by `IndexDocuments`. This exception can happen if you are indexing documents while your service is under heavy load. **We strongly recommend explicitly handling this case in your code.** You can delay and then retry indexing the documents that failed, or you can log and continue like the sample does, or you can do something else depending on your application's data consistency requirements.
+The third part of this method is a catch block that handles an important error case for indexing. If your search service fails to index some of the documents in the batch, a `RequestFailedException` is thrown. An exception can happen if you are indexing documents while your service is under heavy load. **We strongly recommend explicitly handling this case in your code.** You can delay and then retry indexing the documents that failed, or you can log and continue like the sample does, or you can do something else depending on your application's data consistency requirements. An alternative is to use [SearchIndexingBufferedSender](/dotnet/api/azure.search.documents.searchindexingbufferedsender-1) for intelligent batching, automatic flushing, and retries for failed indexing actions. See [this example](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md#searchindexingbufferedsender) for more context.
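
A minimal sketch of that alternative, assuming the `searchClient` and the `hotels` array from the surrounding sample, might look like this:

```csharp
using Azure.Search.Documents;

// The buffered sender batches actions, flushes them automatically,
// and retries indexing actions that fail.
await using var bufferedSender = new SearchIndexingBufferedSender<Hotel>(searchClient);

// Queue the documents for indexing.
await bufferedSender.UploadDocumentsAsync(hotels);

// Force any remaining buffered actions to be sent before disposal.
await bufferedSender.FlushAsync();
```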
Finally, the `UploadDocuments` method delays for two seconds. Indexing happens asynchronously in your search service, so the sample application needs to wait a short time to ensure that the documents are available for searching. Delays like this are typically only necessary in demos, tests, and sample applications.
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Previously updated : 04/22/2022 Last updated : 04/26/2022
These steps create a custom role that augments search query rights to include li
1. Right-click **Search Index Data Reader** (or another role) and select **Clone** to open the **Create a custom role** wizard.
-1. On the Basics tab, provide a name for the custom role, such as "Search Index Explorer", and then click **Next**.
+1. On the Basics tab, provide a name for the custom role, such as "Search Index Data Explorer", and then click **Next**.
1. On the Permissions tab, select **Add permission**. 1. On the Add permissions tab, search for and then select the **Microsoft Search** tile.
-1. Set the permissions for your custom role:
+1. Set the permissions for your custom role. At the top of the page, using the default **Actions** selection:
+ Under Microsoft.Search/operations, select **Read : List all available operations**. + Under Microsoft.Search/searchServices/indexes, select **Read : Read Index**.
+1. On the same page, switch to **Data actions** and under Microsoft.Search/searchServices/indexes/documents, select **Read : Read Documents**.
+ The JSON definition looks like the following example: ```json { "properties": {
- "roleName": "search index explorer",
+ "roleName": "search index data explorer",
"description": "", "assignableScopes": [ "/subscriptions/a5b1ca8b-bab3-4c26-aebe-4cf7ec4791a0/resourceGroups/heidist-free-search-svc/providers/Microsoft.Search/searchServices/demo-search-svc"
These steps create a custom role that augments search query rights to include li
### [**Azure PowerShell**](#tab/custom-role-ps)
-The PowerShell example shows the JSON syntax for creating a custom role.
+The PowerShell example shows the JSON syntax for creating a custom role that's a clone of **Search Index Data Reader**, but with the ability to list all indexes by name.
-1. Review the [list of atomic permissions](../role-based-access-control/resource-provider-operations.md#microsoftsearch) to determine which ones you need.
+1. Review the [list of atomic permissions](../role-based-access-control/resource-provider-operations.md#microsoftsearch) to determine which ones you need. For this example, you'll need the following permissions:
+
+ ```json
+ "Microsoft.Search/operations/read",
+ "Microsoft.Search/searchServices/read",
+ "Microsoft.Search/searchServices/indexes/read"
+ ```
1. Set up a PowerShell session to create the custom role. For detailed instructions, see [Azure PowerShell](../role-based-access-control/custom-roles-powershell.md)
The PowerShell example shows the JSON syntax for creating a custom role.
```json {
- "Name": "Search Index Manager",
+ "Name": "Search Index Data Explorer",
"Id": "88888888-8888-8888-8888-888888888888", "IsCustom": true,
- "Description": "Can manage search indexes and read or write to them",
+ "Description": "List all indexes on the service and query them.",
"Actions": [
- "Microsoft.Search/searchServices/indexes/*",
-
+ "Microsoft.Search/operations/read",
+ "Microsoft.Search/searchServices/read"
], "NotActions": [], "DataActions": [
- "Microsoft.Search/searchServices/indexes/documents/*"
+ "Microsoft.Search/searchServices/indexes/read"
], "NotDataActions": [], "AssignableScopes": [
The PowerShell example shows the JSON syntax for creating a custom role.
} ```
+> [!NOTE]
+> If the assignable scope is at the index level, the data action should be `"Microsoft.Search/searchServices/indexes/documents/read"`.
+ ### [**REST API**](#tab/custom-role-rest) 1. Review the [list of atomic permissions](../role-based-access-control/resource-provider-operations.md#microsoftsearch) to determine which ones you need.
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md
Previously updated : 01/13/2022 Last updated : 04/26/2022 # Semantic search in Azure Cognitive Search
Semantic search and spell check are available on services that meet the criteria
| Feature | Tier | Region | Sign up | Pricing | |||--||-|
-| Semantic search (rank, captions, highlights, answers) | Standard tier (S1, S2, S3) | Australia East, East US, East US 2, North Central US, South Central US, West US, West US 2, North Europe, UK South, West Europe | Required | [Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/) |
-| Spell check | Basic<sup>1</sup> and above | All | None | None (free) |
+| Semantic search | Standard tier (S1, S2, S3) | [Region availability](https://azure.microsoft.com/global-infrastructure/services/?products=search)| Required | [Pricing](https://azure.microsoft.com/pricing/details/search/) <sup>1</sup>|
+| Spell check | Basic<sup>2</sup> and above | All | None | None (free) |
-<sup>1</sup> Due to the provisioning mechanisms and lifespan of shared (free) search services, a small number of services happen to have spell check on the free tier. However, spell check availability on free tier services is not guaranteed and should not be expected.
+<sup>1</sup> At lower query volumes (under 1,000 queries monthly), semantic search is free. To go above that limit, you can opt in to the semantic search standard pricing plan. The pricing page shows the semantic query billing rate for different currencies and intervals.
+
+<sup>2</sup> Due to the provisioning mechanisms and lifespan of shared (free) search services, a small number of services happen to have spell check on the free tier. However, spell check availability on free tier services is not guaranteed and should not be expected.
Charges for semantic search are levied when query requests include "queryType=semantic" and the search string is not empty (for example, "search=pet friendly hotels in New York"). If your search string is empty ("search=*"), you won't be charged, even if the queryType is set to "semantic".
By default, semantic search is disabled on all services. To enable semantic sear
1. Open the [Azure portal](https://portal.azure.com). 1. Navigate to your Standard tier search service.
+1. Determine whether the service region supports semantic search. The search service region is noted on the overview page. Semantic search regions are noted on the [Products Available by Region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page.
1. On the left-nav pane, select **Semantic Search (Preview)**. 1. Select either the **Free plan** or the **Standard plan**. You can switch between the free plan and the standard plan at any time. :::image type="content" source="media/semantic-search-overview/semantic-search-billing.png" alt-text="Screenshot of enabling semantic search in the Azure portal" border="true":::
- Semantic Search's free plan is capped at 1,000 queries per month. After the first 1,000 queries in the free plan, you'll receive an error message letting you know you've exhausted your quota whenever you issue a semantic query. When this happens, you'll need to upgrade to the standard plan to continue using semantic search.
+Semantic Search's free plan is capped at 1,000 queries per month. After the first 1,000 queries in the free plan, you'll receive an error message letting you know you've exhausted your quota whenever you issue a semantic query. When this happens, you'll need to upgrade to the standard plan to continue using semantic search.
Alternatively, you can also enable semantic search using the [Create or Update Service API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch) that's described in the next section. ## Disable semantic search
-For full protection against accidental usage and charges, you can [disable semantic search](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch) using the Create or Update Service API on your search service. After the feature is disabled, any requests that include the semantic query type will be rejected.
+To reverse feature enablement, or for full protection against accidental usage and charges, you can [disable semantic search](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch) using the Create or Update Service API on your search service. After the feature is disabled, any requests that include the semantic query type will be rejected.
* Management REST API version 2021-04-01-Preview provides this option
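
As a rough sketch of that call, assuming the 2021-04-01-preview management API's `semanticSearch` property and placeholder values for the subscription, resource group, service name, region, and ARM access token:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Placeholders throughout; the PUT body follows the Create or Update Service API shape.
string url = "https://management.azure.com/subscriptions/<SUBSCRIPTION-ID>" +
             "/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.Search" +
             "/searchServices/<SERVICE-NAME>?api-version=2021-04-01-preview";

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<ACCESS-TOKEN>");

// "disabled" rejects semantic queries; "free" and "standard" select a plan.
string body = "{ \"location\": \"<REGION>\", \"properties\": { \"semanticSearch\": \"disabled\" } }";

HttpResponseMessage response = await http.PutAsync(
    url, new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(response.StatusCode);
```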
sentinel Customize Alert Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customize-alert-details.md
Title: Customize alert details in Microsoft Sentinel | Microsoft Docs
description: Customize how alerts are named and described, along with their severity and assigned tactics, based on the alerts' content. Previously updated : 11/09/2021 Last updated : 04/26/2022
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-> [!IMPORTANT]
->
-> - The alert details feature is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Introduction When you define a name and description for your scheduled analytics rules, and you assign them severities and MITRE ATT&CK tactics, all alerts generated by a particular rule - and all incidents created as a result - will be displayed with the same name, description, and so on, without regard to the particular content of a specific instance of the alert.
The procedure detailed below is part of the analytics rule creation wizard. It's
1. In the **Alert Name Format** field, enter the text you want to appear as the name of the alert (the alert text), and include, in double curly brackets, any parameters you want to be part of the alert text.
- Example: `Alert from {{ProviderName}}: {{AccountName}} failed to log on to computer {{ComputerName}} with IP address {{IPAddress}}.`
+ Example: `Alert from {{ProviderName}}: {{AccountName}} failed to log on to computer {{ComputerName}}.`
1. Do the same with the **Alert Description Format** field.
-
+
+ > [!NOTE]
+ > You are currently limited to **three parameters each** in the **Alert Name Format** and **Alert Description Format** fields.
+ 1. Use the **Tactic Column** and **Severity Column** fields only if your query results contain columns with this information in them. For each one, choose the column that contains the corresponding information. If you change your mind, or if you made a mistake, you can remove an alert detail by clicking the trash can icon next to the **Tactic/Severity Column** fields or delete the free text from the **Alert Name/Description Format** fields.
The procedure detailed below is part of the analytics rule creation wizard. It's
1. When you have finished customizing your alert details, continue to the next tab in the wizard. If you're editing an existing rule, click the **Review and create** tab. Once the rule validation is successful, click **Save**. ## Next steps In this document, you learned how to customize alert details in Microsoft Sentinel analytics rules. To learn more about Microsoft Sentinel, see the following articles: - Get the complete picture on [scheduled query analytics rules](detect-threats-custom.md). - Learn more about [entities in Microsoft Sentinel](entities.md).
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
Microsoft Sentinel calculates and ranks a user's peers, based on the user's Az
:::image type="content" source="./media/identify-threats-with-entity-behavior-analytics/user-peers-metadata.png" alt-text="Screen shot of user peers metadata table":::
-You can use the [Jupyter notebook](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/BehaviorAnalytics/UserSecurityMetadata) provided in the Microsoft Sentinel GitHub repository to visualize the user peers metadata. For detailed instructions on how to use the notebook, see the [Guided Analysis - User Security Metadata](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/BehaviorAnalytics/UserSecurityMetadata/Guided%20Analysis%20-%20User%20Security%20Metadata.ipynb) notebook.
+You can use the [Jupyter notebook](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/scenario-notebooks/UserSecurityMetadata) provided in the Microsoft Sentinel GitHub repository to visualize the user peers metadata. For detailed instructions on how to use the notebook, see the [Guided Analysis - User Security Metadata](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/scenario-notebooks/UserSecurityMetadata/Guided%20Analysis%20-%20User%20Security%20Metadata.ipynb) notebook.
### Permission analytics - table and notebook
Microsoft Sentinel determines the direct and transitive access rights held by a
:::image type="content" source="./media/identify-threats-with-entity-behavior-analytics/user-access-analytics.png" alt-text="Screen shot of user access analytics table":::
-You can use the [Jupyter notebook](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/BehaviorAnalytics/UserSecurityMetadata) (the same notebook mentioned above) from the Microsoft Sentinel GitHub repository to visualize the permission analytics data. For detailed instructions on how to use the notebook, see the [Guided Analysis - User Security Metadata](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/BehaviorAnalytics/UserSecurityMetadata/Guided%20Analysis%20-%20User%20Security%20Metadata.ipynb) notebook.
+You can use the [Jupyter notebook](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/scenario-notebooks/UserSecurityMetadata) (the same notebook mentioned above) from the Microsoft Sentinel GitHub repository to visualize the permission analytics data. For detailed instructions on how to use the notebook, see the [Guided Analysis - User Security Metadata](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/scenario-notebooks/UserSecurityMetadata/Guided%20Analysis%20-%20User%20Security%20Metadata.ipynb) notebook.
### Hunting queries and exploration queries
sentinel Multiple Workspace View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/multiple-workspace-view.md
To take full advantage of Microsoft Sentinel's capabilities, Microsoft recomme
When you open Microsoft Sentinel, you are presented with a list of all the workspaces to which you have access rights, across all selected tenants and subscriptions. To the left of each workspace name is a checkbox. Selecting the name of a single workspace will bring you into that workspace. To choose multiple workspaces, select all the corresponding checkboxes, and then select the **View incidents** button at the top of the page. > [!IMPORTANT]
-> Multiple Workspace View currently supports a maximum of 30 concurrently displayed workspaces.
+> Multiple Workspace View now supports a maximum of 100 concurrently displayed workspaces.
> Note that in the list of workspaces, you can see the directory, subscription, location, and resource group associated with each workspace. The directory corresponds to the tenant.
service-bus-messaging Transport Layer Security Audit Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-audit-minimum-version.md
+
+ Title: Use Azure Policy to audit for compliance of minimum TLS version for an Azure Service Bus namespace
+
+description: Configure Azure Policy to audit compliance of Azure Service Bus for using a minimum version of Transport Layer Security (TLS).
+++++ Last updated : 04/22/2022+++
+# Use Azure Policy to audit for compliance of minimum TLS version for an Azure Service Bus namespace (Preview)
+
+If you have a large number of Microsoft Azure Service Bus namespaces, you may want to perform an audit to make sure that all namespaces are configured for the minimum version of TLS that your organization requires. To audit a set of Service Bus namespaces for their compliance, use Azure Policy. Azure Policy is a service that you can use to create, assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep those resources compliant with your corporate standards and service level agreements. For more information, see [Overview of Azure Policy](../governance/policy/overview.md).
+
+## Create a policy with an audit effect
+
+Azure Policy supports effects that determine what happens when a policy rule is evaluated against a resource. The audit effect creates a warning when a resource is not in compliance, but does not stop the request. For more information about effects, see [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
+
+To create a policy with an audit effect for the minimum TLS version with the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the Azure Policy service.
+2. Under the **Authoring** section, select **Definitions**.
+3. Select **Add policy definition** to create a new policy definition.
+4. For the **Definition location** field, select the **More** button to specify where the audit policy resource is located.
+5. Specify a name for the policy. You can optionally specify a description and category.
+6. Under **Policy rule**, add the following policy definition to the **policyRule** section.
+
+ ```json
+ {
+ "policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.ServiceBus/namespaces"
+ },
+ {
+ "not": {
+ "field": " Microsoft.ServiceBus/namespaces/minimumTlsVersion",
+ "equals": "1.2"
+ }
+ }
+ ]
+ },
+ "then": {
+ "effect": "audit"
+ }
+ }
+ }
+ ```
+
+7. Save the policy.
+
+### Assign the policy
+
+Next, assign the policy to a resource. The scope of the policy corresponds to that resource and any resources beneath it. For more information on policy assignment, see [Azure Policy assignment structure](../governance/policy/concepts/assignment-structure.md).
+
+To assign the policy with the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the Azure Policy service.
+2. Under the **Authoring** section, select **Assignments**.
+3. Select **Assign policy** to create a new policy assignment.
+4. For the **Scope** field, select the scope of the policy assignment.
+5. For the **Policy definition** field, select the **More** button, then select the policy you defined in the previous section from the list.
+6. Provide a name for the policy assignment. The description is optional.
+7. Leave **Policy enforcement** set to _Enabled_. This setting has no effect on the audit policy.
+8. Select **Review + create** to create the assignment.
+
+### View compliance report
+
+After you have assigned the policy, you can view the compliance report. The compliance report for an audit policy provides information on which Service Bus namespaces are not in compliance with the policy. For more information, see [Get policy compliance data](../governance/policy/how-to/get-compliance-data.md).
+
+It may take several minutes for the compliance report to become available after the policy assignment is created.
+
+To view the compliance report in the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the Azure Policy service.
+2. Select **Compliance**.
+3. Filter the results for the name of the policy assignment that you created in the previous step. The report shows how many resources are not in compliance with the policy.
+4. You can drill down into the report for additional details, including a list of Service Bus namespaces that are not in compliance.
+
+## Use Azure Policy to enforce the minimum TLS version
+
+Azure Policy supports cloud governance by ensuring that Azure resources adhere to requirements and standards. To enforce a minimum TLS version requirement for the Service Bus namespaces in your organization, you can create a policy that prevents the creation of a new Service Bus namespace whose minimum TLS requirement is set to an older version of TLS than the policy dictates. This policy will also prevent all configuration changes to an existing namespace if the minimum TLS version setting for that namespace is not compliant with the policy.
+
+The enforcement policy uses the deny effect to prevent a request that would create or modify a Service Bus namespace so that the minimum TLS version no longer adheres to your organization's standards. For more information about effects, see [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
+
+To create a policy with a deny effect for a minimum TLS version that is less than TLS 1.2, provide the following JSON in the **policyRule** section of the policy definition:
+
+```json
+{
+ "policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": " Microsoft.ServiceBus/namespaces"
+ },
+ {
+ "not": {
+ "field": " Microsoft.ServiceBus/namespaces/minimumTlsVersion",
+ "equals": "1.2"
+ }
+ }
+ ]
+ },
+ "then": {
+ "effect": "deny"
+ }
+ }
+}
+```
+
+After you create the policy with the deny effect and assign it to a scope, a user cannot create a Service Bus namespace with a minimum TLS version that is older than 1.2. Nor can a user make any configuration changes to an existing Service Bus namespace that currently requires a minimum TLS version that is older than 1.2. Attempting to do so results in an error. The required minimum TLS version for the Service Bus namespace must be set to 1.2 to proceed with namespace creation or configuration.
+
+An error will be shown if you try to create a Service Bus namespace with the minimum TLS version set to TLS 1.0 when a policy with a deny effect requires that the minimum TLS version be set to TLS 1.2.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace](transport-layer-security-enforce-minimum-version.md)
+- [Configure the minimum TLS version for a Service Bus namespace](transport-layer-security-configure-minimum-version.md)
+- [Configure Transport Layer Security (TLS) for a Service Bus client application](transport-layer-security-configure-client-version.md)
service-bus-messaging Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-client-version.md
+
+ Title: Configure Transport Layer Security (TLS) for a Service Bus client application
+
+description: Configure a client application to communicate with Azure Service Bus using a minimum version of Transport Layer Security (TLS).
+++++ Last updated : 04/22/2022+++
+# Configure Transport Layer Security (TLS) for a Service Bus client application (Preview)
+
+For security purposes, an Azure Service Bus namespace may require that clients use a minimum version of Transport Layer Security (TLS) to send requests. Calls to Azure Service Bus will fail if the client is using a version of TLS that is lower than the minimum required version. For example, if a namespace requires TLS 1.2, then a request sent by a client that is using TLS 1.1 will fail.
+
+This article describes how to configure a client application to use a particular version of TLS. For information about how to configure a minimum required version of TLS for an Azure Service Bus namespace, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace](transport-layer-security-configure-minimum-version.md).
+
+## Configure the client TLS version
+
+In order for a client to send a request with a particular version of TLS, the operating system must support that version.
+
+The following example shows how to set the client's TLS version to 1.2 from .NET. The .NET Framework used by the client must support TLS 1.2. For more information, see [Support for TLS 1.2](/dotnet/framework/network-programming/tls#support-for-tls-12).
+
+# [.NET](#tab/dotnet)
+
+The following sample shows how to enable TLS 1.2 in a .NET client using the Azure.Messaging.ServiceBus client library:
+
+```csharp
+// Requires the Azure.Messaging.ServiceBus NuGet package
+using System;
+using System.Net;
+using Azure.Messaging.ServiceBus;
+
+// Enable TLS 1.2 before connecting to Service Bus
+ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
+
+// Connection string to your Service Bus namespace
+string connectionString = "<NAMESPACE CONNECTION STRING>";
+
+// Name of your Service Bus queue
+string queueName = "<QUEUE NAME>";
+
+// The client that owns the connection and can be used to create senders and receivers
+await using ServiceBusClient client = new ServiceBusClient(connectionString);
+
+// The sender used to publish messages to the queue
+ServiceBusSender sender = client.CreateSender(queueName);
+
+// Use the sender to send a message to the Service Bus queue
+await sender.SendMessageAsync(new ServiceBusMessage("Message for TLS check"));
+```
+++
+## Verify the TLS version used by a client
+
+To verify that the specified version of TLS was used by the client to send a request, you can use [Fiddler](https://www.telerik.com/fiddler) or a similar tool. Open Fiddler to start capturing client network traffic, then execute one of the examples in the previous section. Look at the Fiddler trace to confirm that the correct version of TLS was used to send the request.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace](transport-layer-security-enforce-minimum-version.md)
+- [Configure the minimum TLS version for a Service Bus namespace](transport-layer-security-configure-minimum-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for a Service Bus namespace](transport-layer-security-audit-minimum-version.md)
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
+
+ Title: Configure the minimum TLS version for a Service Bus namespace using ARM
+
+description: Configure an Azure Service Bus namespace to use a minimum version of Transport Layer Security (TLS).
+++++ Last updated : 04/22/2022+++
+# Configure the minimum TLS version for a Service Bus namespace using ARM (Preview)
+
+To configure the minimum TLS version for a Service Bus namespace, set the `MinimumTlsVersion` property. When you create a Service Bus namespace with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version.
+
+> [!NOTE]
+> Namespaces created using an api-version prior to 2022-01-01-preview will have 1.0 as the value for `MinimumTlsVersion`. This was the prior default behavior, and is retained for backward compatibility.
+
+## Create a template to configure the minimum TLS version
+
+To configure the minimum TLS version for a Service Bus namespace with a template, create a template with the `MinimumTlsVersion` property set to 1.0, 1.1, or 1.2. The following steps describe how to create a template in the Azure portal.
+
+1. In the Azure portal, choose **Create a resource**.
+2. In **Search the Marketplace**, type **custom deployment**, and then press **ENTER**.
+3. Choose **Custom deployment (deploy using custom templates) (preview)**, choose **Create**, and then choose **Build your own template in the editor**.
+4. In the template editor, paste in the following JSON to create a new namespace and set the minimum TLS version to TLS 1.2. Remember to replace the placeholders in angle brackets with your own values.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {
+ "serviceBusNamespaceName": "[concat(uniqueString(subscription().subscriptionId), 'tls')]"
+ },
+ "resources": [
+ {
+ "name": "[variables('serviceBusNamespaceName')]",
+ "type": "Microsoft.ServiceBus/namespaces",
+ "apiVersion": "2022-01-01-preview",
+ "location": "westeurope",
+ "properties": {
+ "minimumTlsVersion": "1.2"
+ },
+ "dependsOn": [],
+ "tags": {}
+ }
+ ]
+ }
+ ```
+
+5. Save the template.
+6. Specify the resource group parameter, and then choose the **Review + create** button to deploy the template and create a namespace with the `MinimumTlsVersion` property configured.
+
+> [!NOTE]
+> After you update the minimum TLS version for the Service Bus namespace, it may take up to 30 seconds before the change is fully propagated.
+
+Configuring the minimum TLS version requires api-version 2022-01-01-preview or later of the Azure Service Bus resource provider.
+
+## Check the minimum required TLS version for multiple namespaces
+
+To check the minimum required TLS version across a set of Service Bus namespaces with optimal performance, you can use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph Explorer, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md).
+
+Running the following query in the Resource Graph Explorer returns a list of Service Bus namespaces and displays the minimum TLS version for each namespace:
+
+```kusto
+resources
+| where type =~ 'Microsoft.ServiceBus/namespaces'
+| extend minimumTlsVersion = parse_json(properties).minimumTlsVersion
+| project subscriptionId, resourceGroup, name, minimumTlsVersion
+```
+
+## Test the minimum TLS version from a client
+
+To test that the minimum required TLS version for a Service Bus namespace forbids calls made with an older version, you can configure a client to use an older version of TLS. For more information about configuring a client to use a specific version of TLS, see [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md).
+
+When a client accesses a Service Bus namespace using a TLS version that does not meet the minimum TLS version configured for the namespace, Azure Service Bus returns a 400 (Bad Request) error code and a message indicating that the TLS version that was used is not permitted for making requests against this Service Bus namespace.
+
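+The following sketch shows one way a .NET client might surface that failure. This is illustrative only; the exact exception type and message can vary by client library and version:
+
+```csharp
+using System;
+using Azure.Messaging.ServiceBus;
+
+// Placeholders: a namespace connection string and a queue name.
+await using var client = new ServiceBusClient("<NAMESPACE CONNECTION STRING>");
+ServiceBusSender sender = client.CreateSender("<QUEUE NAME>");
+
+try
+{
+    await sender.SendMessageAsync(new ServiceBusMessage("TLS version probe"));
+}
+catch (ServiceBusException ex)
+{
+    // If the namespace's minimum TLS version rejects the client's TLS version,
+    // the send fails; log the details to confirm the cause.
+    Console.WriteLine($"Send failed: {ex.Reason} - {ex.Message}");
+}
+```
+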
+> [!NOTE]
+> When you configure a minimum TLS version for a Service Bus namespace, that minimum version is enforced at the application layer. Tools that attempt to determine TLS support at the protocol layer may return TLS versions in addition to the minimum required version when run directly against the Service Bus namespace endpoint.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace](transport-layer-security-enforce-minimum-version.md)
+- [Configure Transport Layer Security (TLS) for a Service Bus client application](transport-layer-security-configure-client-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for a Service Bus namespace](transport-layer-security-audit-minimum-version.md)
service-bus-messaging Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-enforce-minimum-version.md
+
+ Title: Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace
+
+description: Configure a service bus namespace to require a minimum version of Transport Layer Security (TLS) for clients making requests against Azure Service Bus.
+++++ Last updated : 04/12/2022+++
+# Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace (Preview)
+
+Communication between a client application and an Azure Service Bus namespace is encrypted using Transport Layer Security (TLS). TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients and services over the Internet. For more information about TLS, see [Transport Layer Security](https://datatracker.ietf.org/wg/tls/about/).
+
+Azure Service Bus supports choosing a specific TLS version for namespaces. Currently Azure Service Bus uses TLS 1.2 on public endpoints by default, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
+
+Azure Service Bus namespaces permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Service Bus namespace to require that clients send and receive data with a newer version of TLS. If a Service Bus namespace requires a minimum version of TLS, then any requests made with an older version will fail.
+
+> [!IMPORTANT]
+> If you are using a service that connects to Azure Service Bus, make sure that the service is using the appropriate version of TLS to send requests to Azure Service Bus before you set the required minimum version for a Service Bus namespace.
+
+## Permissions necessary to require a minimum version of TLS
+
+To set the `MinimumTlsVersion` property for the Service Bus namespace, a user must have permissions to create and manage Service Bus namespaces. These permissions are granted through the **Microsoft.ServiceBus/namespaces/write** or **Microsoft.ServiceBus/namespaces/\*** action. Built-in roles with this action include:
+
+- The Azure Resource Manager [Owner](../role-based-access-control/built-in-roles.md#owner) role
+- The Azure Resource Manager [Contributor](../role-based-access-control/built-in-roles.md#contributor) role
+- The [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner) role
+
+Role assignments must be scoped to the level of the Service Bus namespace or higher to permit a user to require a minimum version of TLS for the Service Bus namespace. For more information about role scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md).
+
+Be careful to restrict assignment of these roles only to those who require the ability to create a Service Bus namespace or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../role-based-access-control/best-practices.md).
+
+> [!NOTE]
+> The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [**Owner**](../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create and manage Service Bus namespaces. For more information, see [**Classic subscription administrator roles, Azure roles, and Azure AD administrator roles**](../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
+
+## Network considerations
+
+When a client sends a request to Service Bus namespace, the client establishes a connection with the public endpoint of the Service Bus namespace first, before processing any requests. The minimum TLS version setting is checked after the connection is established. If the request uses an earlier version of TLS than that specified by the setting, the connection will continue to succeed, but the request will eventually fail.
+
+> [!NOTE]
+> For backward compatibility, if a namespace doesn't have the `MinimumTlsVersion` setting specified, or has it specified as 1.0, no TLS checks are performed when connecting via the SBMP protocol.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Configure the minimum TLS version for a Service Bus namespace](transport-layer-security-configure-minimum-version.md)
+- [Configure Transport Layer Security (TLS) for a Service Bus client application](transport-layer-security-configure-client-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for a Service Bus namespace](transport-layer-security-audit-minimum-version.md)
service-fabric Cluster Security Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/cluster-security-certificate-management.md
Title: Certificate management in a Service Fabric cluster
-description: Learn about managing certificates in a Service Fabric cluster secured with X.509 certificates.
+ Title: Manage certificates in a Service Fabric cluster
+description: Learn about managing certificates in a Service Fabric cluster that's secured with X.509 certificates.
Last updated 04/10/2020
-# Certificate management in Service Fabric clusters
+# Manage certificates in Service Fabric clusters
-This article addresses the management aspects of certificates used to secure communication in Azure Service Fabric clusters, and complements the introduction to [Service Fabric cluster security](service-fabric-cluster-security.md) as well as the explainer on [X.509 certificate-based authentication in Service Fabric](cluster-security-certificates.md). We assume the reader is familiar with fundamental security concepts, and also with the controls that Service Fabric exposes to configure the security of a cluster.
+This article addresses the management aspects of certificates that are used to secure communication in Azure Service Fabric clusters. It complements the introduction to [Service Fabric cluster security](service-fabric-cluster-security.md) and the explainer on [X.509 certificate-based authentication in Service Fabric](cluster-security-certificates.md).
-Aspects covered under this Title:
+## Prerequisites
+Before you begin, you should be familiar with fundamental security concepts and the controls that Service Fabric exposes to configure the security of a cluster.
-* What exactly is 'certificate management'?
-* Roles and entities involved in certificate management
-* The journey of a certificate
-* Deep dive into an example
-* Troubleshooting and Frequently Asked Questions
+## Disclaimer
-But first - a disclaimer: this article attempts to pair its theoretical side with hands-on examples, which require, well, specifics of services, technologies, and so on. Since a sizeable part of the audience is Microsoft-internal, we'll refer to services, technologies, and products specific to Microsoft Azure. Feel free to ask in the comments section for clarifications or guidance, where the Microsoft-specific details do not apply in your case.
+This article pairs theoretical aspects of certificate management with hands-on examples that cover the specifics of services, technologies, and so on. Because much of the audience here is Microsoft-internal, the article refers to services, technologies, and products that are specific to Azure. If certain Microsoft-specific details don't apply to you, feel free to ask for clarification or guidance in the comments section at the end.
## Defining certificate management
-As we've seen in the [companion article](cluster-security-certificates.md), a certificate is a cryptographic object essentially binding an asymmetric key pair with attributes describing the entity it represents. However, it is also a 'perishable' object, in that its lifetime is finite and is susceptible to compromises - accidental disclosure or a successful exploit can render a certificate useless from a security standpoint. This implies the need to change certificates - either routinely or in response to a security incident. Another aspect of management (and is an entire topic on its own) is the safeguarding of certificate private keys, or of secrets protecting the identities of the entities involved in procuring and provisioning certificates. We refer to the processes and procedures used to obtain certificates and to safely (and securely) transport them to where they are needed as 'certificate management'. Some of the management operations - such as enrollment, policy setting, and authorization controls - are beyond the scope of this article. Still others - such as provisioning, renewal, re-keying, or revocation - are only incidentally related to Service Fabric; nonetheless, we'll address them here to some degree, as understanding these operations can help with properly securing one's cluster.
-The goal is to automate certificate management as much as possible to ensure uninterrupted availability of the cluster and offer security assurances, given that the process is user-touch-free. This goal is attainable currently in Azure Service Fabric clusters; the remainder of the article will first deconstruct certificate management, and later will focus on enabling autorollover.
+As you learn in companion article [X.509 Certificate-based authentication in Service Fabric clusters](cluster-security-certificates.md), a certificate is a cryptographic object that essentially binds an asymmetric key pair with attributes that describe the entity it represents.
-Specifically, the topics in scope are:
- - Assumptions related to the separation of attributions between owner and platform, in the context of managing certificates
- - The long pipeline of certificates from issuance to consumption
- - Certificate rotation - why, how and when
- - What could possibly go wrong?
+However, a certificate is also a *perishable* object, because its lifetime is finite and it's susceptible to compromise. Accidental disclosure or a successful exploit can render a certificate useless from a security standpoint. This characteristic implies the need to change certificates either routinely or in response to a security incident.
-Aspects such as securing/managing domain names, enrolling into certificates, or setting up authorization controls to enforce certificate issuance are beyond the scope of this article. Refer to the Registration Authority (RA) of your favorite Public Key Infrastructure (PKI) service. Microsoft-internal consumers: please reach out to Azure Security.
+Another aspect of certificate management, and an entirely separate topic, is the safeguarding of certificate private keys or secrets that protect the identities of the entities involved in procuring and provisioning certificates.
+
+We describe *certificate management* as the processes and procedures that are used to obtain certificates and to transport them safely and securely to where they're needed.
+
+Some management operations, such as enrollment, policy setting, and authorization controls, are beyond the scope of this article. Other operations, such as provisioning, renewal, re-keying, or revocation, are related only incidentally to Service Fabric. Nonetheless, the article addresses them somewhat, because understanding these operations can help you secure your cluster properly.
+
+Your immediate goal is likely to be to automate certificate management as much as possible to ensure uninterrupted availability of the cluster. Because the process is user-touch-free, you'll also want to offer security assurances. With Service Fabric clusters, this goal is attainable.
+
+The rest of the article first deconstructs certificate management, and later focuses on enabling autorollover.
+
+Specifically, it covers the following topics:
+
+- Assumptions about the separation of attributions between owner and platform
+- The long pipeline from certificate issuance to consumption
+- Certificate rotation: Why, how, and when
+- What could possibly go wrong?
+
+The article does not cover these topics:
+
+- Securing and managing domain names
+- Enrolling into certificates
+- Setting up authorization controls to enforce certificate issuance
+
+For information about these topics, refer to the registration authority (RA) of your favorite public key infrastructure (PKI) service. If you're a Microsoft-internal reader, you can reach out to Azure Security.
## Roles and entities involved in certificate management
-The security approach in a Service Fabric cluster is a case of "cluster owner declares it, Service Fabric runtime enforces it". By that we mean that almost none of the certificates, keys, or other credentials of identities participating in a cluster's functioning come from the service itself; they are all declared by the cluster owner. Furthermore, the cluster owner is also responsible for provisioning the certificates into the cluster, renewing them as needed, and ensuring the security of the certificates at all times. More specifically, the cluster owner must ensure that:
- - Certificates declared in the NodeType section of the cluster manifest can be found on each node of that type, according to the [presentation rules](cluster-security-certificates.md#presentation-rules)
- - Certificates declared above are installed with their corresponding private keys included.
- - Certificates declared in the presentation rules should pass the [validation rules](cluster-security-certificates.md#validation-rules)
+
+The security approach in a Service Fabric cluster is a case of "cluster owner declares it, Service Fabric runtime enforces it." This means that almost none of the certificates, keys, or other credentials of identities that participate in a cluster's functioning come from the service itself. They're all declared by the cluster owner. The cluster owner is also responsible for provisioning the certificates into the cluster, renewing them as needed, and helping ensure their security at all times.
+
+More specifically, the cluster owner must ensure that:
+ - Certificates that are declared in the NodeType section of the cluster manifest can be found on each node of that type, according to the [presentation rules](cluster-security-certificates.md#presentation-rules).
+ - Certificates that are declared as described in the preceding bullet point are installed with their corresponding private keys included.
+ - Certificates that are declared in the presentation rules should pass the [validation rules](cluster-security-certificates.md#validation-rules).
Service Fabric, for its part, assumes the following responsibilities: - Locating certificates that match the declarations in the cluster definition
- - Granting access to the corresponding private keys to Service Fabric-controlled entities on a 'need' basis
+ - Granting access to the corresponding private keys to Service Fabric-controlled entities on a *need* basis
- Validating certificates in strict accordance with established security best-practices and the cluster definition - Raising alerts on impending expiration of certificates, or failures to perform the basic steps of certificate validation - Validating (to some degree) that the certificate-related aspects of the cluster definition are met by the underlying configuration of the hosts
-It follows that the certificate management burden (as active operations) falls solely on the cluster owner. In the following sections, we'll take a closer look at each of the management operations, with available mechanisms and their impact on the cluster.
+It follows that the certificate management burden (as active operations) falls solely on the cluster owner. The next sections offer a closer look at each of the management operations, including available mechanisms and their impact on the cluster.
## The journey of a certificate
-Let us quickly revisit the progression of a certificate from issuance to consumption in the context of a Service Fabric cluster:
-
- 1. A domain owner registers with the RA of a PKI a domain or subject that they'd like to associate with ensuing certificates; the certificates will, in turn, constitute proofs of ownership of said domain or subject.
- 2. The domain owner also designates in the RA the identities of authorized requesters - entities that are entitled to request the enrollment of certificates with the specified domain or subject; in Microsoft Azure, the default identity provider is Azure Active Directory, and authorized requesters are designated by their corresponding AAD identity (or via security groups)
- 3. An authorized requester then enrolls into a certificate via a Secret Management Service; in Microsoft Azure, the SMS of choice is Azure Key Vault (AKV), which securely stores and allows the retrieval of secrets and certificates by authorized entities. AKV also renews/re-keys the certificate as configured in the associated certificate policy (AKV uses AAD as the identity provider).
- 4. An authorized retriever - which we'll refer to as a 'provisioning agent' - retrieves the certificate, inclusive of its private key, from the vault, and installs it on the machines hosting the cluster.
- 5. The Service Fabric service (running elevated on each node) grants access to the certificate to allowed Service Fabric entities; these are designated by local groups, and split between ServiceFabricAdministrators and ServiceFabricAllowedUsers
- 6. The Service Fabric runtime accesses and uses the certificate to establish federation, or to authenticate to inbound requests from authorized clients
- 7. The provisioning agent monitors the vault certificate, and triggers the provisioning flow upon detecting renewal; subsequently, the cluster owner updates the cluster definition, if needed, to indicate the intent to roll over the certificate.
- 8. The provisioning agent or the cluster owner is also responsible for cleaning up/deleting unused certificates
+
+Let's quickly outline the progression of a certificate from issuance to consumption in the context of a Service Fabric cluster:
+
+1. A domain owner registers with the RA of a PKI a domain or subject that they want to associate with ensuing certificates. The certificates, in turn, constitute proof of ownership of the domain or subject.
+
+1. The domain owner also designates in the RA the identities of authorized requesters, entities that are entitled to request the enrollment of certificates with the specified domain or subject.
+
+1. An authorized requester then enrolls into a certificate via a secret-management service. In Azure, the secret-management service of choice is Azure Key Vault, which securely stores and allows the retrieval of secrets and certificates by authorized entities. Key Vault also renews and re-keys the certificate as configured in the associated certificate policy. Key Vault uses Azure Active Directory as the identity provider.
+
+1. An authorized retriever, or *provisioning agent*, retrieves the certificate from the key vault, including its private key, and installs it on the machines that host the cluster.
+
+1. The Service Fabric service (running elevated on each node) grants access to the certificate to the allowed Service Fabric entities, which are designated by local groups and split between ServiceFabricAdministrators and ServiceFabricAllowedUsers.
+
+1. The Service Fabric runtime accesses and uses the certificate to establish federation, or to authenticate to inbound requests from authorized clients.
+
+1. The provisioning agent monitors the key vault certificate and, when it detects renewal, triggers the provisioning flow. The cluster owner then updates the cluster definition, if needed, to indicate an intent to roll over the certificate.
+
+1. The provisioning agent or the cluster owner is also responsible for cleaning up and deleting unused certificates.
-For our purposes, the first two steps in the sequence above are largely unrelated; the only connection is that the subject common name of the certificate is the DNS name declared in the cluster definition.
+For the purposes of this article, the first two steps in the preceding sequence are mostly unrelated. Their only connection is that the subject common name of the certificate is the DNS name that's declared in the cluster definition.
+
+The certificate issuance and provisioning flow is illustrated in the following diagrams:
+
+**For certificates that are declared by thumbprint**
-These steps are illustrated below; note the differences in provisioning between certificates declared by thumbprint and common name, respectively.
+![Diagram of provisioning certificates that are declared by thumbprint.][Image1]
-*Fig. 1.* Issuance and provisioning flow of certificates declared by thumbprint.
-![Provisioning certificates declared by thumbprint][Image1]
+**For certificates that are declared by subject common name**
-*Fig. 2.* Issuance and provisioning flow of certificates declared by subject common name.
-![Provisioning certificates declared by subject common name][Image2]
+![Diagram of provisioning certificates that are declared by subject common name.][Image2]
### Certificate enrollment
-This topic is covered in detail in the [Key Vault documentation](../key-vault/certificates/create-certificate.md); we're including a synopsis here for continuity and easier reference. Continuing with Azure as the context, and using Azure Key Vault as the secret management service, an authorized certificate requester must have at least certificate management permissions on the vault, granted by the vault owner; the requester would then enroll into a certificate as follows:
+The topic of certificate enrollment is covered in detail in the [Key Vault documentation](../key-vault/certificates/create-certificate.md). A synopsis is included here for continuity and easier reference.
+
+Continuing with Azure as the context, and using Key Vault as the secret-management service, an authorized certificate requester must have at least certificate management permissions on the key vault, granted by the key vault owner. The requester then enrolls into a certificate as follows:
+
+- The requester creates a certificate policy in Key Vault, which specifies the domain/subject of the certificate, the desired issuer, key type and length, intended key usage, and more. For more information, see [Certificates in Azure Key Vault](../key-vault/certificates/certificate-scenarios.md).
- - Under `{vaultUri}/certificates/{name}`: The certificate including the public key and metadata.
- - Under `{vaultUri}/keys/{name}`: The certificate's private key, available for cryptographic operations (wrap/unwrap, sign/verify).
- - Under `{vaultUri}/secrets/{name}`: The certificate inclusive of its private key, available for downloading as an unprotected pfx or pem file.
+- The requester creates a certificate in the same vault with the policy that's specified in the preceding step. This, in turn, generates a key pair as vault objects and a certificate signing request that's signed with the private key, which is then forwarded to the designated issuer for signing.
+
+- After the issuer, or certificate authority (CA), replies with the signed certificate, the result is merged into the key vault, and the certificate data is available as follows:
+ - Under `{vaultUri}/certificates/{name}`: The certificate, including the public key and metadata
+ - Under `{vaultUri}/keys/{name}`: The certificate's private key, available for cryptographic operations (wrap/unwrap, sign/verify)
+ - Under `{vaultUri}/secrets/{name}`: The certificate, including its private key, available for downloading as an unprotected PFX or PEM file
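
As an illustration of this flow, here's a minimal PowerShell sketch of enrolling into a Key Vault-managed certificate by using the Az modules. The vault name, certificate name, and subject are hypothetical placeholders, and `Self` merely stands in for whichever CA-backed issuer your PKI provides:

```PowerShell
# A sketch of enrolling into a Key Vault-managed certificate. All names are
# hypothetical placeholders; replace 'Self' with your CA-backed issuer.
$vaultName = "myClusterVault"
$certName  = "myClusterCert"

# Define the policy: subject/domain, issuer, lifetime, and the renewal trigger.
$policy = New-AzKeyVaultCertificatePolicy `
    -SubjectName "CN=mycluster.westus.cloudapp.azure.com" `
    -DnsName "mycluster.westus.cloudapp.azure.com" `
    -IssuerName "Self" `
    -SecretContentType "application/x-pkcs12" `
    -ValidityInMonths 12 `
    -RenewAtPercentageLifetime 80

# Create the certificate: Key Vault generates the key pair and the CSR, and
# merges the issuer's reply when it arrives.
Add-AzKeyVaultCertificate -VaultName $vaultName -Name $certName -CertificatePolicy $policy

# After issuance completes, the three addresses listed above are populated.
Get-AzKeyVaultCertificate -VaultName $vaultName -Name $certName   # public part + metadata
Get-AzKeyVaultKey         -VaultName $vaultName -Name $certName   # key for cryptographic operations
Get-AzKeyVaultSecret      -VaultName $vaultName -Name $certName   # downloadable PFX/PEM
```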
-Recall that a certificate in the vault contains a chronological list of certificate instances that share a policy. Certificate versions will be created according to the lifetime and renewal attributes of this policy. It is highly recommended that vault certificates not share subjects or domains/DNS names, as it can be disruptive in a cluster to provision certificate instances from different vault certificates, with identical subjects but substantially different other attributes, such as issuer, key usages etc.
+Recall that a certificate in the key vault contains a chronological list of certificate instances that share a policy. Certificate versions will be created according to the lifetime and renewal attributes of this policy. We highly recommend that vault certificates not share subjects or domains or DNS names, because it can be disruptive in a cluster to provision certificate instances from different vault certificates, with identical subjects but substantially different other attributes, such as issuer, key usages, and so on.
-At this point, a certificate exists in the vault, ready for consumption. Onward to:
+At this point, a certificate exists in the key vault, ready for consumption. Now let's explore the rest of the process.
### Certificate provisioning
-We mentioned a 'provisioning agent', which is the entity that retrieves the certificate, inclusive of its private key, from the vault and installs it on to each of the hosts of the cluster. (Recall that Service Fabric does not provision certificates.) In our context, the cluster will be hosted on a collection of Azure VMs and/or virtual machine scale sets. In Azure, provisioning a certificate from a vault to a VM/VMSS can be achieved with the following mechanisms - assuming, as above, that the provisioning agent was previously granted 'get' permissions on the vault by the vault owner:
+We mentioned a *provisioning agent*, which is the entity that retrieves the certificate, including its private key, from the key vault and installs it on each of the hosts of the cluster. (Recall that Service Fabric doesn't provision certificates.)
+
+In the context of this article, the cluster will be hosted on a collection of Azure virtual machines (VMs) or virtual machine scale sets (VMSS). In Azure, you can provision a certificate from a vault to a VM/VMSS by using the following mechanisms. This assumes, as before, that the provisioning agent was previously granted *secret get* permissions on the key vault by the key vault owner.
+
+- Ad-hoc: An operator retrieves the certificate from the key vault (as PFX/PKCS #12 or PEM) and installs it on each node.
+
+ The ad-hoc mechanism isn't recommended, for multiple reasons, ranging from security to availability, and it won't be discussed here further. For more information, see [FAQ for Azure virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml).
- - Ad-hoc: an operator retrieves the certificate from the vault (as pfx/PKCS #12 or pem) and installs it on each node
- - As a virtual machine scale set 'secret' during deployment: Using its first party identity on behalf of the operator, the Compute service retrieves the certificate from a template-deployment-enabled vault and installs it on each node of the virtual machine scale set ([like so](../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml)); note this allows the provisioning of versioned secrets only
- - Using the [Key Vault VM extension](../virtual-machines/extensions/key-vault-windows.md); this allows the provisioning of certificates using version-less declarations, with periodic refreshing of observed certificates. In this case, the VM/VMSS is expected to have a [managed identity](../virtual-machines/security-policy.md#managed-identities-for-azure-resources), an identity that has been granted access to the vault(s) containing the observed certificates.
+- As a virtual machine scale set *secret* during deployment: By using its first-party identity on behalf of the operator, the compute service retrieves the certificate from a template-deployment-enabled vault and installs it on each node of the virtual machine scale set, as described in [FAQ for Azure virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml).
-The ad-hoc mechanism is not recommended for multiple reasons, ranging from security to availability, and won't be discussed here further; for details, refer to [certificates in virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml).
+ >[!NOTE]
+ > This approach allows the provisioning of versioned secrets only.
+
+- By using the [Key Vault VM extension](../virtual-machines/extensions/key-vault-windows.md). This lets you provision certificates by using version-less declarations, with periodic refreshing of observed certificates. In this case, the VM/VMSS is expected to have a [managed identity](../virtual-machines/security-policy.md#managed-identities-for-azure-resources), an identity that has been granted access to the key vaults that contain the observed certificates.
-The VMSS-/Compute-based provisioning presents security and availability advantages, but it also presents restrictions. It requires - by design - declaring certificates as versioned secrets, which makes it suitable only for clusters secured with certificates declared by thumbprint. In contrast, the Key Vault VM extension-based provisioning will always install the latest version of each observed certificate, which makes it suitable only for clusters secured with certificates declared by subject common name. To emphasize, do not use an autorefresh provisioning mechanism (such as the KVVM extension) for certificates declared by instance (that is, by thumbprint) - the risk of losing availability is considerable.
+VMSS/compute-based provisioning presents security and availability advantages, but it also presents restrictions. It requires, by design, that you declare certificates as versioned secrets, which makes it suitable only for clusters secured with certificates declared by thumbprint.
-Other provisioning mechanisms may exist; the above are currently accepted for Azure Service Fabric clusters.
+In contrast, Key Vault VM extension-based provisioning always installs the latest version of each observed certificate, which makes it suitable only for clusters secured with certificates declared by subject common name. To emphasize, do not use an autorefresh provisioning mechanism (such as the Key Vault VM extension) for certificates that are declared by instance (that is, by thumbprint). The risk of losing availability is considerable.
+
+Other provisioning mechanisms exist, but the approaches mentioned here are the currently accepted options for Azure Service Fabric clusters.
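
To make the Key Vault VM extension mechanism concrete, here's a rough sketch, not a definitive deployment, of attaching the extension to an existing scale set model by using the Az PowerShell module. The resource names and the observed-certificate URI are placeholder assumptions; the settings keys follow the extension's published schema:

```PowerShell
# A sketch of attaching the Key Vault VM extension to an existing scale set
# model. Resource names and the observed-certificate URI are placeholders.
$vmss = Get-AzVmss -ResourceGroupName "myRG" -VMScaleSetName "myVmss"

$settings = @{
    secretsManagementSettings = @{
        pollingIntervalInS       = "3600"        # refresh cadence for observed certificates
        certificateStoreName     = "MY"
        certificateStoreLocation = "LocalMachine"
        linkOnRenewal            = $false        # see the linking discussion later in this article
        requireInitialSync       = $true         # block until all observed certificates are retrieved
        observedCertificates     = @(
            "https://myClusterVault.vault.azure.net/secrets/myClusterCert"   # version-less URI
        )
    }
}

Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
    -Name "KVVMExtension" `
    -Publisher "Microsoft.Azure.KeyVault" `
    -Type "KeyVaultForWindows" `
    -TypeHandlerVersion "1.0" `
    -Setting $settings

# Push the updated model to the scale set.
Update-AzVmss -ResourceGroupName "myRG" -VMScaleSetName "myVmss" -VirtualMachineScaleSet $vmss
```

Note the version-less secret URI: it's what allows the extension to pick up each rotated version without a model change.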
### Certificate consumption and monitoring
-As mentioned earlier, the Service Fabric runtime is responsible for locating and using the certificates declared in the cluster definition. The article on [certificate-based authentication](cluster-security-certificates.md) explained in detail how Service Fabric implements the presentation and validation rules, respectively, and won't be revisited here. We are going to look at access and permission granting, as well as monitoring.
-Recall that certificates in Service Fabric are used for a multitude of purposes, from mutual authentication in the federation layer to TLS authentication for the management endpoints; this requires various components or system services to have access to the certificate's private key. The Service Fabric runtime regularly scans the certificate store, looking for matches for each of the known presentation rules; for each of the matching certificates, the corresponding private key is located, and its discretionary access control list is updated to include permissions - typically Read and Execute - granted to the respective identity that requires them. (This process is informally referred to as 'ACLing'.) The process runs on a 1-minute cadence, and also covers 'application' certificates, such as those used to encrypt settings or as endpoint certificates. ACLing follows the presentation rules, so it's important to keep in mind that certificates declared by thumbprint and which are autorefreshed without the ensuing cluster configuration update will not be accessible.
+As mentioned earlier, the Service Fabric runtime is responsible for locating and using the certificates that are declared in the cluster definition. The [X.509 Certificate-based authentication in Service Fabric clusters](cluster-security-certificates.md) article explains in detail how Service Fabric implements the presentation and validation rules, and that material isn't revisited here. This section looks at access and permission granting, as well as monitoring.
+
+Recall that certificates in Service Fabric are used for a multitude of purposes, from mutual authentication in the federation layer to Transport Layer Security (TLS) authentication for the management endpoints. This requires various components or system services to have access to the certificate's private key. The Service Fabric runtime regularly scans the certificate store, looking for matches for each of the known presentation rules.
+
+For each matching certificate, the corresponding private key is located, and its discretionary access control list is updated to include permissions (Read and Execute, ordinarily) that are granted to the identity that requires them.
+
+This process is informally referred to as *ACLing*. The process runs on a one-minute cadence and also covers *application* certificates, such as those used to encrypt settings or as endpoint certificates. ACLing follows the presentation rules, so it's important to keep in mind that certificates declared by thumbprint and which are autorefreshed without the ensuing cluster configuration update will be inaccessible.
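
To see the result of ACLing on a node, here's a small diagnostic sketch of our own, not a Service Fabric tool, that locates a certificate's private key file and prints its access control list. It assumes a CSP (RSA) machine key; CNG keys are stored under a different path, and the thumbprint is a placeholder:

```PowerShell
# A diagnostic sketch that shows the result of ACLing: locate a certificate's
# private key file and print its access control list. Assumes a CSP (RSA)
# machine key; CNG keys live under a different path. Thumbprint is a placeholder.
$thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567"
$cert = Get-Item "Cert:\LocalMachine\My\$thumbprint"

$keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
$keyPath = Join-Path $env:ProgramData "Microsoft\Crypto\RSA\MachineKeys\$keyName"

# Expect Read/Execute grants for the identities Service Fabric runs as.
(Get-Acl $keyPath).Access | Format-Table IdentityReference, FileSystemRights
```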
### Certificate rotation
-As a side note: IETF [RFC 3647](https://tools.ietf.org/html/rfc3647) formally defines [renewal](https://tools.ietf.org/html/rfc3647#section-4.4.6) as the issuance of a certificate with the same attributes as the certificate being replaced - the issuer, the subject's public key and information is preserved, and [re-keying](https://tools.ietf.org/html/rfc3647#section-4.4.7) as the issuance of a certificate with a new key pair, and no restrictions on whether or not the issuer may change. Given the distinction may be important (consider the case of certificates declared by subject common name with issuer pinning), we'll opt for the neutral term 'rotation' to cover both scenarios. (Do keep in mind that, when informally used, 'renewal' refers, in fact, to re-keying.)
-We've seen earlier that Azure Key Vault supports automatic certificate rotation: the associate certificate policy defines the point in time, whether by days before expiration or percentage of total lifetime, when the certificate is rotated in the vault. The provisioning agent must be invoked after this point in time, and prior to the expiration of the now-previous certificate, to distribute this new certificate to all of the nodes of the cluster. Service Fabric will assist by raising health warnings when the expiration date of a certificate (and which is currently in use in the cluster) occurs sooner than a predetermined interval. An automatic provisioning agent (i.e. the KeyVault VM extension), configured to observe the vault certificate, will periodically poll the vault, detect the rotation, and retrieve and install the new certificate. Provisioning done via VM/VMSS 'secrets' feature will require an authorized operator to update the VM/VMSS with the versioned KeyVault URI corresponding to the new certificate.
+> [!NOTE]
+> The Internet Engineering Task Force (IETF) [RFC 3647](https://tools.ietf.org/html/rfc3647) formally defines [*renewal*](https://tools.ietf.org/html/rfc3647#section-4.4.6) as the issuance of a certificate with the same attributes as the certificate that's being replaced. The issuer, the subject's public key, and the information are preserved. [*Re-keying*](https://tools.ietf.org/html/rfc3647#section-4.4.7) is the issuance of a certificate with a new key pair, without restrictions as to whether the issuer can change. Because the distinction might be important (consider the case of certificates that are declared by subject common name with issuer pinning), this article uses the neutral term *rotation* to cover both scenarios. Do keep in mind that, when *renewal* is used informally, it refers to *re-keying*.
+
+As mentioned earlier, Key Vault supports automatic certificate rotation. That is, the associated certificate policy defines the point in time, whether by days before expiration or percentage of total lifetime, when the certificate is rotated in the key vault. The provisioning agent must be invoked after this point in time, and prior to the expiration of the now-previous certificate, to distribute this new certificate to all nodes of the cluster.
+
+Service Fabric assists in this process by raising health warnings when the expiration date of a certificate, which is currently in use in the cluster, occurs sooner than a predetermined interval. An automatic provisioning agent, the Key Vault VM extension, which is configured to observe the key vault certificate, periodically polls the key vault, detects the rotation, and retrieves and installs the new certificate. Provisioning that takes place via the VM/VMSS *secrets* feature requires an authorized operator to update the VM/VMSS with the versioned Key Vault URI that corresponds to the new certificate.
+
+The rotated certificate is now provisioned to all nodes. Now, assuming that the rotation applied to the cluster certificate was declared by subject common name, let's examine what happens next:
+
+ - For new connections within, as well as into, the cluster, the Service Fabric runtime finds and selects the most recently issued matching certificate (the greatest value of the *NotBefore* property). This is a change from earlier versions of the Service Fabric runtime.
-In either case, the rotated certificate is now provisioned to all of the nodes, and we have described the mechanism Service Fabric employs to detect rotations; let us examine what happens next - assuming the rotation applied to the cluster certificate declared by subject common name
- - for new connections within, as well as into the cluster, the Service Fabric runtime will find and select the most recently issued matching certificate (largest value of the 'NotBefore' property). Note this is a change from previous versions of the Service Fabric runtime.
- - existing connections will be kept alive/allowed to naturally expire or otherwise terminate; an internal handler will have been notified that a new match exists
+ - Existing connections are kept alive or allowed to naturally expire or otherwise terminate, and an internal handler will have been notified that a new match exists.
> [!NOTE]
-> Currently (7.2 CU4+), Service Fabric selects the cert with the largest 'NotBefore' property value (most recently issued). Prior to 7.2CU4 Service Fabric picked the valid cert with the largest NotAfter (furthest expiring).
+> Currently, as of version 7.2 CU4+, Service Fabric selects the certificate with the greatest (most recently issued) *NotBefore* property value. Prior to 7.2 CU4, Service Fabric picked the valid certificate with the greatest (latest expiring) *NotAfter* value.
This translates into the following important observations:
- - The renewal certificate may be ignored if its expiration date is sooner than that of the certificate currently in use.
- - The availability of the cluster, or of the hosted applications, takes precedence over the directive to rotate the certificate; the cluster will converge on the new certificate eventually, but without timing guarantees. It follows that:
- - It may not be immediately obvious to an observer that the rotated certificate completely replaced its predecessor; the only way to ensure that is (for cluster certificates) to reboot the host machines. Note it is not sufficient to restart the Service Fabric nodes, as kernel mode components which form lease connections in a cluster will not be affected. Also note that restarting the VM/VMSS may cause temporary loss of availability. (For application certificates, it is sufficient to restart the respective application instances only.)
- - Introducing a re-keyed certificate that does not meet the validation rules can effectively break the cluster. The most common example of this is the case of an unexpected issuer: the cluster certificates are declared by subject common name with issuer pinning, but the rotated certificate was issued by a new/undeclared issuer.
+
+- The availability of the cluster, or of the hosted applications, takes precedence over the directive to rotate the certificate. The cluster will converge on the new certificate eventually, but without timing guarantees. It follows that:
+
+ - It might not be immediately obvious to an observer that the rotated certificate completely replaced its predecessor. The only way to force the immediate replacement of the certificate currently in use is to reboot the host machines. It's not sufficient to restart the Service Fabric nodes, because the kernel mode components that form lease connections in a cluster will be unaffected. Also, restarting the VM/VMSS might cause temporary loss of availability. For application certificates, it's sufficient to restart only the respective application instances.
+
+ - Introducing a re-keyed certificate that doesn't meet the validation rules can effectively break the cluster. The most common example of this is the case of an unexpected issuer, where the cluster certificates are declared by subject common name with issuer pinning, but the rotated certificate was issued by a new or undeclared issuer.
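
As an illustration of the selection rule described earlier, the following sketch mirrors what the runtime (7.2 CU4+) does for a common name declaration: among the installed matches, it picks the most recently issued certificate. The subject is a placeholder:

```PowerShell
# A sketch that mirrors the runtime's selection rule for certificates declared
# by common name: among installed matches, pick the most recently issued one
# (greatest NotBefore). The subject is a placeholder.
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -eq "CN=mycluster.westus.cloudapp.azure.com" } |
    Sort-Object NotBefore -Descending |
    Select-Object -First 1 Thumbprint, NotBefore, NotAfter
```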
### Certificate cleanup
-At this time, there are no provisions in Azure for explicit removal of certificates. It is often a non-trivial task to determine whether or not a given certificate is being used at a given time. This is more difficult for application certificates than for cluster certificates. Service Fabric itself, not being the provisioning agent, will not delete a certificate declared by the user under any circumstance. For the standard provisioning mechanisms:
- - Certificates declared as VM/VMSS secrets will be provisioned as long as they are referenced in the VM/VMSS definition, and they are retrievable from the vault (deleting a vault secret/certificate will fail subsequent VM/VMSS deployments; similarly, disabling a secret version in the vault will also fail VM/VMSS deployments, which reference that secret version)
- - Previous versions of certificates provisioned via the KeyVault VM extension may or may not be present on a VM/VMSS node. The agent only retrieves and installs the current version, and does not remove any certificates. Reimaging a node (which typically occurs every month) will reset the certificate store to the content of the OS image, and so previous versions will implicitly be removed. Consider, though, that scaling up a virtual machine scale set will result in only the current version of observed certificates being installed; do not assume homogeneity of nodes with regard to installed certificates.
-## Simplifying management - an autorollover example
-We've described mechanisms, restrictions, outlined intricate rules and definitions, and made dire predictions of outages. It is, perhaps, time to show how to set up automatic certificate management to avoid all of these concerns. We're doing so in the context of an Azure Service Fabric cluster running on an PaaSv2 virtual machine scale set, using Azure Key Vault for secrets management and leveraging managed identities, as follows:
- - Validation of certificates is changed from thumbprint-pinning to subject + issuer pinning: any certificate with a given subject from a given issuer is equally trusted
- - Certificates are enrolled into and obtained from a trusted store (Key Vault), and refreshed by an agent - in this case, the KeyVault VM extension
- - Provisioning of certificates is changed from deployment-time and version-based (as done by ComputeRP) to post-deployment and using version-less KeyVault URIs
- - Access to KeyVault is granted via user-assigned managed identities; the UA identity is created and assigned to the virtual machine scale set during deployment
- - After deployment, the agent (the KV VM extension) will poll and refresh observed certificates on each node of the virtual machine scale set; certificate rotation is thus fully automated, as SF will automatically pick up the farthest valid certificate
+At this time, there are no provisions in Azure for explicit removal of certificates. It's often a non-trivial task to determine whether a specific certificate is being used at a specific time. This is more difficult for application certificates than for cluster certificates. Service Fabric itself, not being the provisioning agent, won't delete a certificate that's declared by the user under any circumstance. For the standard provisioning mechanisms:
+
+ - Certificates that are declared as VM/VMSS secrets are provisioned as long as they're referenced in the VM/VMSS definition and are retrievable from the key vault. Deleting a key vault secret or certificate will fail subsequent VM/VMSS deployments. Similarly, disabling a secret version in the key vault will also fail VM/VMSS deployments that reference the secret version.
+
+ - Earlier versions of certificates that are provisioned via the Key Vault VM extension might or might not be present on a VM/VMSS node. The agent retrieves and installs only the current version, and it doesn't remove any certificates. Re-imaging a node, which ordinarily occurs every month, resets the certificate store to the content of the OS image, and so earlier versions will implicitly be removed. Consider, though, that scaling up a virtual machine scale set will result in only the current version of observed certificates being installed. Don't assume the homogeneity of nodes with regard to installed certificates.
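
To audit a node for leftover certificate versions, as described in the preceding bullet point, a sketch like the following can help. It lists every installed certificate that matches a given subject, oldest first; the subject is a placeholder, and the removal snippet is deliberately left commented out:

```PowerShell
# A sketch for auditing leftover certificate versions on a node: list every
# installed certificate that matches a given subject, oldest first. The subject
# is a placeholder; the removal lines are left commented out on purpose.
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -eq "CN=mycluster.westus.cloudapp.azure.com" } |
    Sort-Object NotBefore |
    Format-Table Thumbprint, NotBefore, NotAfter

# To delete expired instances only - use with extreme care, per the caveats above:
# Get-ChildItem Cert:\LocalMachine\My |
#     Where-Object { $_.Subject -eq "CN=mycluster.westus.cloudapp.azure.com" -and $_.NotAfter -lt (Get-Date) } |
#     Remove-Item
```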
-The sequence is fully scriptable/automated and allows a user-touch-free initial deployment of a cluster configured for certificate autorollover. Detailed steps are provided below. We'll use a mix of PowerShell cmdlets and fragments of json templates. The same functionality is achievable with all supported means of interacting with Azure.
+## Simplifying management: An autorollover example
+
+So far, this article has described mechanisms and restrictions, outlined intricate rules and definitions, and made dire predictions of outages. Now it's time to set up automatic certificate management to avoid all these concerns. Let's do so in the context of an Azure Service Fabric cluster running on a platform as a service (PaaS) v2 virtual machine scale set, using Key Vault for secrets management and leveraging managed identities, as follows:
+
+- Validation of certificates is changed from thumbprint-pinning to subject + issuer-pinning. Any certificate with a specific subject from a specific issuer is equally trusted.
+- Certificates are enrolled into and obtained from a trusted store (Key Vault), and refreshed by an agent (here, the Key Vault VM extension).
+- Provisioning of certificates is changed from deployment-time and version-based (as done by Azure Compute Resource Provider) to post-deployment by using version-less Key Vault URIs.
+- Access to the key vault is granted via user-assigned managed identities, which are created and assigned to the virtual machine scale set during deployment.
+- After deployment, the agent (the Key Vault VM extension) polls and refreshes observed certificates on each node of the virtual machine scale set. Certificate rotation is thus fully automated, because Service Fabric automatically picks up the latest valid certificate.
+
+The sequence is fully scriptable and automated, and it allows a user-touch-free initial deployment of a cluster that's configured for certificate autorollover. The next sections provide detailed steps, which contain a mix of PowerShell cmdlets and fragments of JSON templates. The same functionality is achievable with all supported means of interacting with Azure.
> [!NOTE]
-> This example assumes a certificate exists already in the vault; enrolling and renewing a KeyVault-managed certificate requires prerequisite manual steps as described earlier in this article. For production environments, use KeyVault-managed certificates - a sample script specific to a Microsoft-internal PKI is included below.
+> This example assumes that a certificate exists already in your key vault. Enrolling and renewing a Key Vault-managed certificate requires prerequisite manual steps, as described earlier in this article. For production environments, use Key Vault-managed certificates. We've included a sample script that's specific to a Microsoft-internal PKI.
> [!NOTE]
-> Certificate autorollover only makes sense for CA-issued certificates; using self-signed certificates, including those generated when deploying a Service Fabric cluster in the Azure portal, is nonsensical, but still possible for local/developer-hosted deployments, by declaring the issuer thumbprint to be the same as of the leaf certificate.
+> Certificate autorollover makes sense only for CA-issued certificates. Using self-signed certificates, including those generated during deployment of a Service Fabric cluster in the Azure portal, is nonsensical, but still possible for local or developer-hosted deployments if you declare the issuer thumbprint to be the same as that of the leaf certificate.
### Starting point
-For brevity, we will assume the following starting state:
- - The Service Fabric cluster exists, and is secured with a CA-issued certificate declared by thumbprint.
- - The certificate is stored in a vault, and provisioned as a virtual machine scale set secret
- - The same certificate will be used to convert the cluster to common name-based certificate declarations, and so can be validated by subject and issuer; if this is not the case, obtain the CA-issued certificate intended for this purpose, and add it to the cluster definition by thumbprint as explained [here](service-fabric-cluster-security-update-certs-azure.md)
-Here is a json excerpt from a template corresponding to such a state - note this omits many required settings, and only illustrates the certificate-related aspects:
+For brevity, let's assume the following starting state:
+
+- The Service Fabric cluster exists, and is secured with a CA-issued certificate declared by thumbprint.
+- The certificate is stored in a key vault and provisioned as a virtual machine scale set secret.
+- The same certificate will be used to convert the cluster to common name-based certificate declarations, and so it can be validated by subject and issuer. If this isn't the case, obtain the CA-issued certificate that's intended for this purpose, and add it to the cluster definition by thumbprint. This process is explained in [Add or remove certificates for a Service Fabric cluster in Azure](service-fabric-cluster-security-update-certs-azure.md).
+
+Here's a JSON excerpt from a template that corresponds to such a state. The excerpt omits many required settings and illustrates only the certificate-related aspects.
+ ```json "resources": [ { ## VMSS definition
Here is a json excerpt from a template corresponding to such a state - note this
] ```
-The above essentially says that certificate with thumbprint ```json [parameters('primaryClusterCertificateTP')] ``` and found at KeyVault URI ```json [parameters('clusterCertificateUrlValue')] ``` is declared as the cluster's sole certificate, by thumbprint. Next we'll set up the additional resources needed to ensure the autorollover of the certificate.
+The preceding code essentially says that the certificate with thumbprint ```json [parameters('primaryClusterCertificateTP')] ``` and found at Key Vault URI ```json [parameters('clusterCertificateUrlValue')] ``` is declared as the cluster's sole certificate, by thumbprint.
+
+Next, let's set up the additional resources that are needed to ensure the autorollover of the certificate.
+
+### Set up the prerequisite resources
+
+As mentioned earlier, a certificate that's provisioned as a virtual machine scale set secret is retrieved from the key vault by the Microsoft.Compute Resource Provider service. It does so by using its first-party identity on behalf of the deployment operator. For autorollover, that will change. You'll switch to using a managed identity that's assigned to the virtual machine scale set and that has been granted GET permissions on the secrets in that vault.
-### Setting up prerequisite resources
-As mentioned before, a certificate provisioned as a virtual machine scale set secret is retrieved from the vault by the Microsoft.Compute Resource Provider service, using its first-party identity and on behalf of the deployment operator. For autorollover, that will change - we'll switch to using a managed identity, assigned to the virtual machine scale set, and which is granted permissions to the vault's secrets.
+You should deploy the next excerpts at the same time. They're listed individually only for play-by-play analysis and explanation.
-All of the subsequent excerpts should be deployed concomitantly - they are listed individually for play-by-play analysis and explanations.
+First, define a user-assigned identity (default values are included as examples). For more information, see the [official documentation](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-arm.md#create-a-user-assigned-managed-identity).
-First define a user assigned identity (default values are included as examples) - refer to the [official documentation](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-arm.md#create-a-user-assigned-managed-identity) for up-to-date information:
```json { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
First define a user assigned identity (default values are included as examples)
]} ```
-Then grant this identity access to the vault secrets - refer to the [official documentation](/rest/api/keyvault/keyvault/vaults/update-access-policy) for current information:
+Next, grant this identity access to the key vault secrets. For the most current information, see the [official documentation](/rest/api/keyvault/keyvault/vaults/update-access-policy).
```json "resources": [{
Then grant this identity access to the vault secrets - refer to the [official do
]}}]}}] ```
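
For reference, here's a rough imperative equivalent of the preceding two template fragments, using the Az modules; the resource names and location are placeholders:

```PowerShell
# A rough imperative equivalent of the preceding two template fragments.
# Resource names and location are placeholders.
$identity = New-AzUserAssignedIdentity -ResourceGroupName "myRG" `
    -Name "myClusterIdentity" -Location "westus"

# Grant the identity 'get' on secrets - the permission the Key Vault VM
# extension needs to retrieve observed certificates.
Set-AzKeyVaultAccessPolicy -VaultName "myClusterVault" `
    -ObjectId $identity.PrincipalId `
    -PermissionsToSecrets get
```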
-In the next step, we'll:
- - assign the user-assigned identity to the virtual machine scale set
- - declare the virtual machine scale set' dependency on the creation of the managed identity, and on the result of granting it access to the vault
- - declare the KeyVault VM extension, requiring that it retrieves observed certificates on startup ([official documentation](../virtual-machines/extensions/key-vault-windows.md))
- - update the definition of the Service Fabric VM extension to depend on the KVVM extension, and to convert the cluster cert to common name
-(We're making these changes in a single step since they fall under the scope of the same resource.)
+In the next step, you'll do the following:
+
+- Assign the user-assigned identity to the virtual machine scale set.
+- Declare the virtual machine scale set dependency on the creation of the managed identity, and on the result of granting it access to the key vault.
+- Declare the Key Vault VM extension and require it to retrieve observed certificates on startup. For more information, see the [Key Vault VM extension for Windows](../virtual-machines/extensions/key-vault-windows.md) official documentation.
+- Update the definition of the Service Fabric VM extension to depend on the Key Vault VM extension, and to convert the cluster certificate declaration from thumbprint to common name.
+
+> [!NOTE]
+> These changes are being made as a single step because they fall within the scope of the same resource.
```json "parameters": {
In the next step, we'll:
} ] ```
-Note, although not explicitly listed above, that we removed the vault certificate URL from the 'OsProfile' section of the virtual machine scale set.
-The last step is to update the cluster definition to change the certificate declaration from thumbprint to common name - here we are also pinning the issuer thumbprints:
+
+Although it's not explicitly listed in the preceding code, note that the key vault certificate URL has been removed from the `OsProfile` section of the virtual machine scale set.
+
+The final step is to update the cluster definition to change the certificate declaration from thumbprint to common name. We're also pinning the issuer thumbprints:
```json "parameters": {
The last step is to update the cluster definition to change the certificate decl
] ```
-At this point, you can run the updates mentioned above in a single deployment; for its part, the Service Fabric Resource Provider service will split the cluster upgrade in several steps, as described in the segment on [converting cluster certificates from thumbprint to common name](cluster-security-certificates.md#converting-a-cluster-from-thumbprint--to-common-name-based-certificate-declarations).
+At this point, you can run the previously mentioned updates in a single deployment. For its part, the Service Fabric Resource Provider service splits the cluster upgrade in several steps, as described in the segment on [converting cluster certificates from thumbprint to common name](cluster-security-certificates.md#converting-a-cluster-from-thumbprint--to-common-name-based-certificate-declarations).
### Analysis and observations
-This section is a catch-all for explaining steps detailed above, as well as drawing attention to important aspects.
-#### Certificate provisioning, explained
-The KVVM extension, as a provisioning agent, runs continuously on a predetermined frequency. On failing to retrieve an observed certificate, it would continue to the next in line, and then hibernate until the next cycle. The SFVM extension, as the cluster bootstrapping agent, will require the declared certificates before the cluster can form. This, in turn, means that the SFVM extension can only run after the successful retrieval of the cluster certificate(s), denoted here by the ```json "provisionAfterExtensions" : [ "KVVMExtension" ]"``` clause, and by the KeyVaultVM extension's ```json "requireInitialSync": true``` setting. This indicates to the KVVM extension that on the first run (after deployment or a reboot) it must cycle through its observed certificates until all are downloaded successfully. Setting this parameter to false, coupled with a failure to retrieve the cluster certificate(s) would result in a failure of the cluster deployment. Conversely, requiring an initial sync with an incorrect/invalid list of observed certificates would result in a failure of the KVVM extension, and so, again, a failure to deploy the cluster.
+This section is a catch-all for explaining concepts and processes that have been presented throughout this article, as well as drawing attention to certain other important aspects.
+
+#### About certificate provisioning
+
+The Key Vault VM extension, as a provisioning agent, runs continuously on a predetermined frequency. If it fails to retrieve an observed certificate, it continues to the next in line, and then hibernates until the next cycle. The Service Fabric VM extension, as the cluster bootstrapping agent, requires the declared certificates before the cluster can form. This, in turn, means that the Service Fabric VM extension can run only after the successful retrieval of the cluster certificates, denoted here by the ```json "provisionAfterExtensions": ["KVVMExtension"] ``` clause, and by the Key Vault VM extension's ```json "requireInitialSync": true ``` setting.
+
+This indicates to the Key Vault VM extension that, on the first run (after deployment or a reboot), it must cycle through its observed certificates until all are downloaded successfully. Setting this parameter to false, coupled with a failure to retrieve the cluster certificates, would result in a failure of the cluster deployment. Conversely, requiring an initial sync with an incorrect or invalid list of observed certificates would result in a failure of the Key Vault VM extension and, again, a failure to deploy the cluster.
#### Certificate linking, explained
-You may have noticed the KVVM extension's 'linkOnRenewal' flag, and the fact that it is set to false. We're addressing here in depth the behavior controlled by this flag and its implications on the functioning of a cluster. Note this behavior is specific to Windows.
+
+You might have noticed the Key Vault VM extension's `linkOnRenewal` flag, and the fact that it's set to false. This section takes an in-depth look at the behavior that the flag controls and its implications for the functioning of a cluster. The behavior is specific to Windows.
According to its [definition](../virtual-machines/extensions/key-vault-windows.md#extension-schema): ```json
-"linkOnRenewal": <Only Windows. This feature enables auto-rotation of SSL certificates, without necessitating a re-deployment or binding. e.g.: false>,
+"linkOnRenewal": <Only Windows. This feature enables auto-rotation of SSL certificates, without necessitating a re-deployment or binding. e.g.: false>,
```
-Certificates used to establish a TLS connection are typically [acquired as a handle](/windows/win32/api/sspi/nf-sspi-acquirecredentialshandlea) via the S-channel Security Support Provider – that is, the client does not directly access the private key of the certificate itself. S-channel supports redirection (linking) of credentials in the form of a certificate extension ([CERT_RENEWAL_PROP_ID](/windows/win32/api/wincrypt/nf-wincrypt-certsetcertificatecontextproperty#cert_renewal_prop_id)): if this property is set, its value represents the thumbprint of the 'renewal' certificate, and so S-channel will instead attempt to load the linked certificate. In fact, it will traverse this linked (and hopefully acyclic) list until it ends up with the 'final' certificate – one without a renewal mark. This feature, when used judiciously, is a great mitigation against loss of availability caused by expired certificates (for instance). In other cases, it can be the cause of outages that are difficult to diagnose and mitigate. S-channel executes the traversal of certificates on their renewal properties unconditionally - irrespective of subject, issuers, or any other specific attributes that participate in the validation of the resulting certificate by the client. It is possible, indeed, that the resulting certificate has no associated private key, or the key has not been ACLed to its prospective consumer.
+Certificates used to establish a TLS connection are ordinarily [acquired as a handle](/windows/win32/api/sspi/nf-sspi-acquirecredentialshandlea) via the S-channel Security Support Provider. That is, the client doesn't directly access the private key of the certificate itself. S-channel supports redirection, or linking, of credentials in the form of a certificate extension, [CERT_RENEWAL_PROP_ID](/windows/win32/api/wincrypt/nf-wincrypt-certsetcertificatecontextproperty#cert_renewal_prop_id).
+
+If this property is set, its value represents the thumbprint of the *renewal* certificate, and so S-channel will instead attempt to load the linked certificate. In fact, the S-channel will traverse this linked and, hopefully, acyclic list until it ends up with the *final* certificate, one without a renewal mark. This feature, when used judiciously, is a great mitigation against a loss of availability that's caused by, for example, expired certificates.
+
+In other cases, it can be the cause of outages that are difficult to diagnose and mitigate. S-channel executes the traversal of certificates on their renewal properties unconditionally, irrespective of subject, issuers, or any other specific attributes that participate in the validation of the resulting certificate by the client. It's possible that the resulting certificate has no associated private key, or that the key hasn't been ACLed to its prospective consumer.
-If linking is enabled, the KeyVault VM extension, upon retrieving an observed certificate from the vault, will attempt to find matching, existing certificates in order to link them via the renewal extension property. The matching is (exclusively) based on Subject Alternative Name (SAN), and works as exemplified below.
-Assume two existing certificates, as follows:
 A: CN = "Alice's accessories", SAN = {"alice.universalexports.com"}, renewal = ''
+If linking is enabled, the Key Vault VM extension, when it retrieves an observed certificate from the key vault, attempts to find matching, existing certificates so that it can link them via the renewal extension property. The matching is based exclusively on the subject alternative name (SAN), and it works as shown in the following example. Assume two existing certificates:
+ A: Common name (CN) = "Alice's accessories", SAN = {"alice.universalexports.com"}, renewal = ''
 B: CN = "Bob's bits", SAN = {"bob.universalexports.com", "bob.universalexports.net"}, renewal = ''
-Assume a certificate C is retrieved by the KVVM ext: CN = "Mallory's malware", SAN = {"alice.universalexports.com", "bob.universalexports.com", "mallory.universalexports.com"}
+Assume that a certificate C is retrieved by the Key Vault VM extension: CN = "Mallory's malware", SAN = {"alice.universalexports.com", "bob.universalexports.com", "mallory.universalexports.com"}
-A's SAN list is fully included in C's, and so A.renewal = C.thumbprint; B's SAN list has a common intersection with C's, but is not fully included in it, so B.renewal remains empty.
+Certificate A's SAN list is fully included in C's, and so A.renewal = C.thumbprint. Certificate B's SAN list has a common intersection with C's, but is not fully included in it, so B.renewal remains empty.
-Any attempt to invoke AcquireCredentialsHandle (S-channel) in this state on certificate A will actually end up sending C to the remote party. In the case of Service Fabric, the [Transport subsystem](service-fabric-architecture.md#transport-subsystem) of a cluster uses S-channel for mutual authentication, and so the behavior described above affects the cluster's fundamental communication directly. Continuing the example above, and assuming A is the cluster certificate, what happens next depends:
 - if C's private key is not ACLd to the account that Fabric is running as, we'll see failures to acquire the private key (SEC_E_UNKNOWN_CREDENTIALS or similar)
 - if C's private key is accessible, then we'll see authorization failures returned by the other nodes (CertificateNotMatched, unauthorized etc.)
+Any attempt to invoke AcquireCredentialsHandle (S-channel) in this state on certificate A actually ends up sending C to the remote party. In the case of Service Fabric, the [Transport subsystem](service-fabric-architecture.md#transport-subsystem) of a cluster uses S-channel for mutual authentication, and so the previously described behavior affects the cluster's fundamental communication directly. Continuing with the preceding example, and assuming that A is the cluster certificate, what happens next depends on the following:
+
+- If C's private key is not ACLed to the account that Service Fabric is running as, you'll see failures to acquire the private key (SEC_E_UNKNOWN_CREDENTIALS or similar).
+- If C's private key is accessible, you'll see authorization failures returned by the other nodes (CertificateNotMatched, unauthorized, and so on).
-In either case, transport fails and the cluster may go down; the symptoms vary. To make things worse, the linking depends on the order of renewal – which is dictated by the order of the list of observed certificates of the KVVM extension, the renewal schedule in the vault or even transient errors that would alter the order of retrieval.
+In either case, transport fails and the cluster might go down. The symptoms vary. To make things worse, the linking depends on the order of renewal, which is dictated by the order of the list of observed certificates of the Key Vault VM extension, the renewal schedule in the key vault, or even transient errors that would alter the order of retrieval.
+
+To mitigate against such incidents, we recommend the following:
-To mitigate against such incidents, we recommend:
- - do not mix the SANs of different vault certificates; each vault certificate should serve a distinct purpose, and their subject and SAN should reflect that with specificity
- - include the subject common name in the SAN list (as, literally, `CN=<subject common name>`)
- - if unsure, disable linking on renewal for certificates provisioned with the KVVM extension
+- Don't mix the subject alternative names of different vault certificates. Each vault certificate should serve a distinct purpose, and its subject and SAN should reflect that with specificity.
+- Include the subject common name in the SAN list (as, literally, `CN=<subject common name>`).
+- If you're unsure about how to proceed, disable linking on renewal for certificates that are provisioned with the Key Vault VM extension.
-#### Why use a user-assigned managed identity? What are the implications of using it?
-As it emerged from the json snippets above, a specific sequencing of the operations/updates is required to guarantee the success of the conversion, and to maintain the availability of the cluster. Specifically, the virtual machine scale set resource declares and uses its identity to retrieve secrets in a single (from the user's perspective) update. The Service Fabric VM extension (which bootstraps the cluster) depends on the completion of KeyVault VM extension, which depends on the successful retrieval of observed certificates. The KVVM extension uses the virtual machine scale set's identity to access the vault, which means that the access policy on the vault must have been already updated prior to the deployment of the virtual machine scale set.
+ > [!NOTE]
+ > Disabling linking is a top-level property of the Key Vault VM extension and can't be set for individual observed certificates.
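
To check a node for the SAN overlap described in the first mitigation, a quick sketch such as the following can flag DNS names that are claimed by more than one installed certificate:

```PowerShell
# A sketch that flags DNS names claimed by more than one installed certificate,
# which is the overlap that makes renewal linking hazardous.
Get-ChildItem Cert:\LocalMachine\My | ForEach-Object {
    $cert = $_
    $cert.DnsNameList | ForEach-Object {
        [pscustomobject]@{ DnsName = $_.Unicode; Thumbprint = $cert.Thumbprint }
    }
} |
Group-Object DnsName |
Where-Object Count -gt 1    # any result here is a potential linking hazard
```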
-To dispose the creation of a managed identity, or to assign it to another resource, the deployment operator must have the required role (ManagedIdentityOperator) in the subscription or the resource group, in addition to the roles required to manage the other resources referenced in the template.
+#### Why should I use a user-assigned managed identity? What are the implications of using it?
-From a security standpoint, recall that the virtual machine (scale set) is considered a security boundary with regard to its Azure identity. That means that any application hosted on the VM could, in principle, obtain an access token representing the VM - managed identity access tokens are obtained from the unauthenticated IMDS endpoint. If you consider the VM to be a shared, or multi-tenant environment, then perhaps this method of retrieving cluster certificates is not indicated. It is, however, the only provisioning mechanism suitable for certificate autorollover.
+As is evident from the preceding JSON snippets, a specific sequencing of the operations and updates is required to both guarantee the success of the conversion and maintain the availability of the cluster. Specifically, the virtual machine scale set resource declares and uses its identity to retrieve secrets in a single update (from the user's perspective).
+
+The Service Fabric VM extension, which bootstraps the cluster, depends on the completion of the Key Vault VM extension, which in turn depends on the successful retrieval of observed certificates.
+
+The Key Vault VM extension uses the virtual machine scale set's identity to access the key vault, which means that the access policy on the key vault must have been already updated prior to the deployment of the virtual machine scale set.
+
+To create a managed identity, or to assign one to another resource, the deployment operator must have the required role (ManagedIdentityOperator) in the subscription or the resource group, in addition to the roles that are required to manage the other resources referenced in the template.
+
+From a security standpoint, recall that the virtual machine scale set is considered a security boundary with regard to its Azure identity. That means that any application that's hosted on the VM could, in principle, obtain an access token representing the VM. Managed identity access tokens are obtained from the unauthenticated Instance Metadata Service endpoint. If you consider the VM to be a shared, or multi-tenant environment, this method of retrieving cluster certificates might not be indicated. It is, however, the only provisioning mechanism suitable for certificate autorollover.
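
To see why the VM is the security boundary, consider that the following call, runnable by any process on the node and requiring no secret, returns a managed identity token for Key Vault; the endpoint and API version are the documented Instance Metadata Service values:

```PowerShell
# A sketch that illustrates the boundary: any process on the node can request a
# managed identity token for Key Vault from the unauthenticated IMDS endpoint.
$uri = "http://169.254.169.254/metadata/identity/oauth2/token" +
       "?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"
$resp = Invoke-RestMethod -Method GET -Headers @{ Metadata = "true" } -Uri $uri

# Avoid logging full tokens; show only a prefix.
$resp.access_token.Substring(0, 16) + "..."
```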
## Troubleshooting and frequently asked questions
-*Q*: How to programmatically enroll into a KeyVault-managed certificate?
-*A*: Find out the name of the issuer from the KeyVault documentation, then replace it in the script below.
+**Q: How can I programmatically enroll into a Key Vault-managed certificate?**
+
+Find out the name of the issuer from the Key Vault documentation, and then replace it in the following script:
+ ```PowerShell
+ $issuerName=<depends on your PKI of choice>
+ $clusterVault="sftestcus"
From a security standpoint, recall that the virtual machine (scale set) is consi
Get-AzKeyVaultCertificateOperation -VaultName $clusterVault -Name $clusterCertVaultName
```
-*Q*: What happens when a certificate is issued by an undeclared/unspecified issuer? Where can I obtain the exhaustive list of active issuers of a given PKI?
-*A*: If the certificate declaration specifies issuer thumbprints, and the direct issuer of the certificate is not included in the list of pinned issuers, the certificate will be considered invalid - irrespective of whether or not its root is trusted by the client. Therefore it is critical to ensure the list of issuers is current/up to date. The introduction of a new issuer is a rare event, and should be widely publicized prior to it beginning to issue certificates.
+**Q: What happens when a certificate is issued by an undeclared or unspecified issuer? Where can I obtain an exhaustive list of active issuers of a specific PKI?**
+
+If the certificate declaration specifies issuer thumbprints, and the direct issuer of the certificate isn't included in the list of pinned issuers, the certificate will be considered invalid, whether or not its root is trusted by the client. Therefore, it's critical to ensure that the list of issuers is current. The introduction of a new issuer is a rare event, and it should be widely publicized before it begins to issue certificates.
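For illustration, a common-name certificate declaration that pins issuer thumbprints might look like the following sketch in the cluster resource; the common name and thumbprint values are placeholders:

```json
"certificateCommonNames": {
  "commonNames": [
    {
      "certificateCommonName": "mycluster.contoso.com",
      "certificateIssuerThumbprint": "<issuer 1 thumbprint>,<issuer 2 thumbprint>"
    }
  ],
  "x509StoreName": "My"
}
```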
+
+In general, a PKI publishes and maintains a certification practice statement, in accordance with IETF [RFC 7382](https://tools.ietf.org/html/rfc7382). Besides other information, the statement includes all active issuers. Retrieving this list programmatically might differ from one PKI to another.
+
+For Microsoft-internal PKIs, be sure to consult the internal documentation on the endpoints and SDKs that are used to retrieve the authorized issuers. It is the cluster owner's responsibility to review this list periodically to ensure that their cluster definition includes *all* expected issuers.
-In general, a PKI will publish and maintain a certification practice statement, in accordance with IETF [RFC 7382](https://tools.ietf.org/html/rfc7382). Among other information, it will include all active issuers. Retrieving this list programmatically may differ from a PKI to another.
+**Q: Are multiple PKIs supported?**
-For Microsoft-internal PKIs, please consult the internal documentation on the endpoints/SDKs used to retrieve the authorized issuers; it is the cluster owner's responsibility to probe this list periodically, and ensure their cluster definition includes *all* expected issuers.
+Yes. You may not declare multiple CN entries in the cluster manifest with the same value, but you can list issuers from multiple PKIs that correspond to the same CN. It's not a recommended practice, and certificate transparency practices might prevent such certificates from being issued. However, as a means to migrate from one PKI to another, this is an acceptable mechanism.
-*Q*: Are multiple PKIs supported?
-*A*: Yes; you may not declare multiple CN entries in the cluster manifest with the same value, but can list issuers from multiple PKIs corresponding to the same CN. It is not a recommended practice, and certificate transparency practices may prevent such certificates from being issued. However, as means to migrate from one PKI to another, this is an acceptable mechanism.
+**Q: What if the current cluster certificate is not CA-issued, or doesn't have the intended subject?**
-*Q*: What if the current cluster certificate is not CA-issued, or does not have the intended subject?
-*A*: Obtain a certificate with the intended subject, and add it to the cluster's definition as a secondary, by thumbprint. Once the upgrade completed successfully, initiate another cluster configuration upgrade to convert the certificate declaration to common name.
+Obtain a certificate with the intended subject, and add it to the cluster's definition as a secondary, by thumbprint. After the upgrade finishes successfully, initiate another cluster configuration upgrade to convert the certificate declaration to common name.
[Image1]:./media/security-cluster-certificate-mgmt/certificate-journey-thumbprint.png [Image2]:./media/security-cluster-certificate-mgmt/certificate-journey-common-name.png
service-fabric How To Patch Cluster Nodes Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-patch-cluster-nodes-windows.md
Title: Patch the Windows operating system in your Service Fabric cluster description: Here's how to enable automatic OS image upgrades to patch Service Fabric cluster nodes running on Windows. Previously updated : 10/19/2021 Last updated : 04/26/2022 # Patch the Windows operating system in your Service Fabric cluster
When enabling automatic OS updates, you'll also need to disable Windows Update i
1. Enable automatic OS image upgrades and disable Windows Updates in the deployment template:
-
+ ```json
- "virtualMachineProfile": {
- "properties": {
- "upgradePolicy": {
- "automaticOSUpgradePolicy": {
- "enableAutomaticOSUpgrade": true
- }
+ "properties": {
+ "upgradePolicy": {
+ "mode": "Automatic",
+ "automaticOSUpgradePolicy": {
+ "enableAutomaticOSUpgrade": true
}
}
- }
+ }
```
+
```json
- "virtualMachineProfile": {
- "osProfile": {
- "windowsConfiguration": {
- "enableAutomaticUpdates": false
- }
+ "osProfile": {
+ "windowsConfiguration": {
+ "enableAutomaticUpdates": false
}
}
```
service-fabric Service Fabric Dnsservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-dnsservice.md
public class ValuesController : Controller
}
}
```
+## Recursive Queries
+
+For DNS names that the DNS service can't resolve on its own (for example, a public DNS name), it will forward the query to pre-existing recursive DNS servers on the nodes.
+
+Prior to Service Fabric 9.0, these servers were queried serially, with a fixed timeout of 5 seconds between attempts. If a server didn't respond within the timeout period, the next server (if available) was queried. If these DNS servers were experiencing issues, DNS queries could take longer than 5 seconds to complete, which is not ideal.
+
+Beginning in Service Fabric 9.0, support for parallel recursive queries was added. With parallel queries, all recursive DNS servers are contacted at once, and the first response wins. This results in quicker responses in the scenario described above.
+
+Fine-grained options were also introduced in Service Fabric 9.0 to control the behavior of recursive queries, including the timeout periods and query attempts. These options can be set in the cluster config, under **DnsService** (a sketch follows the list):
+
+- **RecursiveQuerySerialMaxAttempts** - The maximum number of serial queries that will be attempted. If this number is higher than the number of forwarding DNS servers, querying stops once every server has been attempted exactly once.
+- **RecursiveQuerySerialTimeout** - The timeout value in seconds for each attempted serial query.
+- **RecursiveQueryParallelMaxAttempts** - The number of times parallel queries will be attempted. Parallel queries are executed after the max attempts for serial queries have been exhausted.
+- **RecursiveQueryParallelTimeout** - The timeout value in seconds for each attempted parallel query.
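Here's a sketch of what these settings might look like in the cluster's `fabricSettings`; the values shown are illustrative, not defaults:

```json
"fabricSettings": [
  {
    "name": "DnsService",
    "parameters": [
      { "name": "RecursiveQuerySerialMaxAttempts", "value": "2" },
      { "name": "RecursiveQuerySerialTimeout", "value": "5" },
      { "name": "RecursiveQueryParallelMaxAttempts", "value": "3" },
      { "name": "RecursiveQueryParallelTimeout", "value": "5" }
    ]
  }
]
```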
## Known Issues

* For Service Fabric versions 6.3 and higher, there is a problem with DNS lookups for service names containing a hyphen in the DNS name. For more information on this issue, please track the following [GitHub Issue](https://github.com/Azure/service-fabric-issues/issues/1197). A fix for this is coming in the next 6.3 update.
service-fabric Service Fabric Environment Variables Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-environment-variables-reference.md
Internal Environment Variables Used by Service Fabric Runtime:
- Fabric_ApplicationId
- Fabric_CodePackageInstanceId
- Fabric_CodePackageInstanceSeqNum
+- Fabric_InstanceId
+- Fabric_ReplicaId
- Fabric_RuntimeConnectionAddress
- Fabric_ServicePackageActivationGuid
- Fabric_ServicePackageInstanceId
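As a quick sketch (not part of the article), a service process can read these variables with standard environment APIs; which of them are populated depends on the service kind:

```csharp
using System;

// Sketch: probing Service Fabric runtime environment variables from inside a service process.
class EnvProbe
{
    static void Main()
    {
        // GetEnvironmentVariable returns null when a variable isn't set for this service kind.
        string instanceId = Environment.GetEnvironmentVariable("Fabric_InstanceId");
        string replicaId = Environment.GetEnvironmentVariable("Fabric_ReplicaId");

        Console.WriteLine($"Fabric_InstanceId: {instanceId ?? "<not set>"}");
        Console.WriteLine($"Fabric_ReplicaId:  {replicaId ?? "<not set>"}");
    }
}
```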
service-fabric Service Fabric Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-linux.md
Start a container-based [Service Fabric Onebox](https://hub.docker.com/_/microso
``` 3. Start the cluster.<br/>
+ <b>Ubuntu 20.04 LTS:</b>
+ ```bash
+ docker run --name sftestcluster -d -v /var/run/docker.sock:/var/run/docker.sock -p 19080:19080 -p 19000:19000 -p 25100-25200:25100-25200 mcr.microsoft.com/service-fabric/onebox:u20
+ ```
<b>Ubuntu 18.04 LTS:</b> ```bash docker run --name sftestcluster -d -v /var/run/docker.sock:/var/run/docker.sock -p 19080:19080 -p 19000:19000 -p 25100-25200:25100-25200 mcr.microsoft.com/service-fabric/onebox:u18
service-fabric Service Fabric Keyvault References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-keyvault-references.md
Title: Azure Service Fabric - Using Service Fabric application KeyVault references
-description: This article explains how to use service-fabric KeyVaultReference support for application secrets.
+description: This article explains how to use Service Fabric KeyVaultReference support for application secrets.
Last updated 09/20/2019
Last updated 09/20/2019
# KeyVaultReference support for Azure-deployed Service Fabric Applications
-A common challenge when building cloud applications is how to securely distribute secrets to your applications. For example, you might want to deploy a database key to your application without exposing the key during the pipeline or to the operator. Service Fabric KeyVaultReference support makes it easy to deploy secrets to your applications simply by referencing the URL of the secret that is stored in Key Vault. Service Fabric will handle fetching that secret on behalf of your application's Managed Identity, and activating the application with the secret.
+A common challenge when building cloud applications is figuring out how to securely distribute secrets to your applications and manage them. Service Fabric KeyVaultReference support makes this easy. Once configured, you can reference the URL of a secret stored in Key Vault in your application definition, and Service Fabric handles fetching that secret and activating the application with it. When using the "SF-managed" version of the feature, Service Fabric can also monitor your Key Vault and automatically trigger rolling application parameter upgrades as your secrets rotate in the vault.
-> [!NOTE]
-> KeyVaultReference support for Service Fabric Applications is GA (out-of-preview) starting with Service Fabric version 7.2 CU5. It is recommended that you upgrade to this version before using this feature.
+## Options for delivering secrets to applications in Service Fabric
-> [!NOTE]
-> KeyVaultReference support for Service Fabric Applications supports only [versioned](../key-vault/general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning) secrets. Versionless secrets are not supported. The Key Vault needs to be in the same subscription as your service fabric cluster.
+The classic way of delivering secrets to a Service Fabric application was to declare [Encrypted Parameters](service-fabric-application-secret-management.md). This involved encrypting secrets against an encryption certificate and passing those encrypted secrets to your application. This method has a few downsides: the encryption certificate must be managed, secrets are exposed in the deployment pipeline, and there's no visibility into the metadata of the secrets attached to a deployed application. In addition, rotating secrets requires an application deployment. Unless you're running a standalone cluster, we no longer recommend using encrypted parameters.
+
+Another option is the use of [Secret Store References](service-fabric-application-secret-store.md#use-the-secret-in-your-application). This experience allows for central management of your application secrets, better visibility into the metadata of deployed secrets, and allows for central management of the encryption certificate. Some may prefer this style of secret management when running standalone Service Fabric clusters.
+
+The recommendation today is to reduce the reliance on secrets wherever possible by using [Managed Identities for Service Fabric applications](concepts-managed-identity.md). Managed identities can be used to authenticate directly to Azure Storage, Azure SQL, and more. That means there's no need to manage a separate credential when accessing [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md).
+
+When it isn't possible to use Managed Identity as a client, we recommend using KeyVaultReferences rather than using Managed Identity to go directly to Key Vault. KeyVaultReferences help increase the availability of your application because they ensure that secret changes happen during rolling upgrades. They also scale better, because secrets are cached and served from within the cluster. If your application uses Encrypted Parameters today, only minimal changes to your application code are needed to use KeyVaultReferences. Your application can continue to expect a single secret at startup, and for that secret to remain the same for the lifetime of the process.
## Prerequisites - Managed Identity for Service Fabric Applications
- Service Fabric KeyVaultReference support uses an application's Managed Identity to fetch secrets on behalf of the application, so your application must be deployed via
- and assigned a managed identity. Follow this [document](concepts-managed-identity.md) to enable managed identity for your application.
+ Service Fabric KeyVaultReference support uses an application's Managed Identity to fetch secrets on behalf of the application. You must deploy your application via ARM and assign it a managed identity. Follow this [document](concepts-managed-identity.md) to enable managed identity for your application.
- Central Secrets Store (CSS).
- Central Secrets Store (CSS) is Service Fabric's encrypted local secrets cache. This feature uses CSS to protect and persist secrets after they are fetched from Key Vault. Enabling this optional system service is also required to consume this feature. Follow this [document](service-fabric-application-secret-store.md) to enable and configure CSS.
+ Central Secrets Store (CSS) is Service Fabric's encrypted local secrets cache. This feature uses CSS to protect and persist secrets after they're fetched from Key Vault. Enabling this system service is required to use KeyVaultReferences. Follow this [document](service-fabric-application-secret-store.md) to enable and configure CSS.
-- Grant application's managed identity access permission to the keyvault
+- Grant the application's managed identity access to the Key Vault
+
+ Reference this [document](how-to-grant-access-other-resources.md) to see how to grant managed identity access to Key Vault; a CLI sketch follows below. Also note that if you're using a system-assigned managed identity, the identity is created only after application deployment. This can create race conditions in which the application tries to access the secret before the identity has been granted access to the vault. The system-assigned identity's name will be `{cluster name}/{application name}/{service name}`.
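A minimal sketch of granting that access with Azure CLI, assuming an access-policy-based vault; the vault name and object ID are placeholders:

```bash
# Allow the application's managed identity to read secrets from the vault.
az keyvault set-policy \
  --name my-vault \
  --object-id <managed-identity-principal-id> \
  --secret-permissions get list
```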
+
+## KeyVaultReferences vs. Managed KeyVaultReferences
+
+The basic idea of KeyVaultReferences is that rather than setting the value of your application parameter to the secret itself, you set it to the Key Vault URL, which is resolved to the secret value when your application is activated. In Key Vault, a single secret, for example, `https://my.vault.azure.net/secrets/MySecret/`, can have multiple versions, for example, `https://my.vault.azure.net/secrets/MySecret/<oid1>` and `<oid2>`.
+
+When you use a KeyVaultReference, the value should be a versioned reference (`https://my.vault.azure.net/secrets/MySecret/<oid1>`). If you rotate that secret in the vault, for example, to `<oid2>`, you should trigger an application upgrade to the new reference. When you use a ManagedKeyVaultReference, the value should be a version-less reference (`https://my.vault.azure.net/secrets/MySecret/`). Service Fabric resolves the latest instance `<oid1>` and activates the application with that secret. If you rotate the secret in the vault to `<oid2>`, Service Fabric automatically triggers an application parameter upgrade to move to `<oid2>` on your behalf.
+
+> [!NOTE]
+> KeyVaultReference (versioned secrets) support for Service Fabric Applications is Generally Available starting with Service Fabric version 7.2 CU5. It is recommended that you upgrade to this version before using this feature.
+
+> [!NOTE]
+> Managed KeyVaultReference (version-less secrets) support for Service Fabric Applications is Generally Available starting with Service Fabric version 9.0.
- Reference this [document](how-to-grant-access-other-resources.md) to see how to grant managed identity access to keyvault. Also note if you are using system assigned managed identity, the managed identity is created only after application deployment. This can create race conditions where the application tries to access the secret before the identity can be given access to the vault. The system assigned identity's name will be `{cluster name}/{application name}/{service name}`.
-
## Use KeyVaultReferences in your application
-KeyVaultReferences can be consumed in a number of ways
+
+KeyVaultReferences can be consumed:
+ - [As an environment variable](#as-an-environment-variable)
 - [Mounted as a file into your container](#mounted-as-a-file-into-your-container)
 - [As a reference to a container repository password](#as-a-reference-to-a-container-repository-password)
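For example, a versioned reference consumed as an environment variable might be declared like the following sketch; the vault URL, names, and version are illustrative:

```xml
<EnvironmentVariables>
  <!-- Service Fabric resolves the versioned Key Vault URL to the secret value at activation. -->
  <EnvironmentVariable Name="MySecret" Type="KeyVaultReference" Value="https://my.vault.azure.net/secrets/MySecret/<oid1>" />
</EnvironmentVariables>
```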
string secret = Environment.GetEnvironmentVariable("MySecret");
string secret = sr.ReadToEnd();
}
```
- > [!NOTE]
+
+ > [!NOTE]
> MountPoint controls the folder where the files containing secret values will be mounted. ### As a reference to a container repository password
string secret = Environment.GetEnvironmentVariable("MySecret");
</ContainerHostPolicies>
```
+## Use Managed KeyVaultReferences in your application
+
+First, you must enable secret monitoring by upgrading your cluster definition:
+
+```json
+"fabricSettings": [
+ {
+ "name": "CentralSecretService",
+ "parameters": [
+ {
+ "name": "EnableSecretMonitoring",
+ "value": "true"
+ }
+ ]
+ }
+],
+```
+
+> [!NOTE]
> The default may become `true` in the future.
+
+After the cluster upgrade has finished, your user application can be upgraded. Anywhere a KeyVaultReference can be used, a ManagedKeyVaultReference can also be used, for example,
+
+```xml
+ <Section Name="MySecrets">
+ <Parameter Name="MySecret" Type="ManagedKeyVaultReference" Value="[MySecretReference]"/>
+ </Section>
+```
+
+The primary difference in specifying ManagedKeyVaultReferences is that they *can't* be hardcoded in your application type manifest. They must be declared as application-level parameters, and they must also be overridden in your ARM application definition.
+
+Here's an excerpt from a well-formed manifest:
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<ApplicationManifest ApplicationTypeName="MyAppType" ApplicationTypeVersion="1.0.0">
+ <Parameters>
+ <Parameter Name="MySecretReference" DefaultValue="" />
+ </Parameters>
+ <ServiceManifestImport>
+ <EnvironmentOverrides CodePackageRef="Code">
+ <EnvironmentVariable Name="MySecret" Value="[MySecretReference]" Type="ManagedKeyVaultReference" />
+ </EnvironmentOverrides>
+ <Policies>
+ <IdentityBindingPolicy ServiceIdentityRef="MySvcIdentity" ApplicationIdentityRef="MyAppIdentity" />
+ </Policies>
+ </ServiceManifestImport>
+ <Principals>
+ <ManagedIdentities>
+ <ManagedIdentity Name="MyAppIdentity" />
+ </ManagedIdentities>
+ </Principals>
+</ApplicationManifest>
+```
+
+And here's an excerpt of the application resource definition:
+
+```json
+{
+ "type": "Microsoft.ServiceFabric/clusters/applications",
+ "name": "MyApp",
+ "identity": {
+ "type" : "userAssigned",
+ "userAssignedIdentities": {
+ "[variables('userAssignedIdentityResourceId')]": {}
+ }
+ },
+ "properties": {
+ "parameters": {
+ "MySecretReference": "https://my.vault.azure.net/secrets/MySecret/"
+ },
+ "managedIdentities": [
+ {
+ "name" : "MyAppIdentity",
+ "principalId" : "<guid>"
+ }
+ ]
+ }
+}
+```
+
+Both declaring the ManagedKeyVaultReference as an application parameter and overriding that parameter at deployment time are needed for Service Fabric to successfully manage the lifecycle of the deployed secret.
+ ## Next steps
-* [Azure KeyVault Documentation](../key-vault/index.yml)
-* [Learn about Central Secret Store](service-fabric-application-secret-store.md)
-* [Learn about Managed identity for Service Fabric Applications](concepts-managed-identity.md)
+- [Azure KeyVault Documentation](../key-vault/index.yml)
+- [Learn about Central Secret Store](service-fabric-application-secret-store.md)
+- [Learn about Managed identity for Service Fabric Applications](concepts-managed-identity.md)
service-fabric Service Fabric Reliable Services Communication Aspnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-communication-aspnetcore.md
In this configuration, `KestrelCommunicationListener` will automatically select
For HTTPS, it should have the Endpoint configured with HTTPS protocol without a port specified in ServiceManifest.xml and pass the endpoint name to KestrelCommunicationListener constructor.
+## IHost and Minimal Hosting integration
+In addition to IWebHost/IWebHostBuilder, `KestrelCommunicationListener` and `HttpSysCommunicationListener` support building ASP.NET Core services using IHost/IHostBuilder.
+This is available starting with v5.2.1363 of the `Microsoft.ServiceFabric.AspNetCore.Kestrel` and `Microsoft.ServiceFabric.AspNetCore.HttpSys` packages.
+
+```csharp
+// Stateless Service
+protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
+{
+ return new ServiceInstanceListener[]
+ {
+ new ServiceInstanceListener(serviceContext =>
+ new KestrelCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
+ {
+ return Host.CreateDefaultBuilder()
+ .ConfigureWebHostDefaults(webBuilder =>
+ {
+ webBuilder.UseKestrel()
+ .UseStartup<Startup>()
+ .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
+ .UseContentRoot(Directory.GetCurrentDirectory())
+ .UseUrls(url);
+ })
+ .ConfigureServices(services => services.AddSingleton<StatelessServiceContext>(serviceContext))
+ .Build();
+ }))
+ };
+}
+
+```
+
+```csharp
+// Stateful Service
+protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
+{
+ return new ServiceReplicaListener[]
+ {
+ new ServiceReplicaListener(serviceContext =>
+ new KestrelCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
+ {
+ return Host.CreateDefaultBuilder()
+ .ConfigureWebHostDefaults(webBuilder =>
+ {
+ webBuilder.UseKestrel()
+ .UseStartup<Startup>()
+ .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.UseUniqueServiceUrl)
+ .UseContentRoot(Directory.GetCurrentDirectory())
+ .UseUrls(url);
+ })
+ .ConfigureServices(services =>
+ {
+ services.AddSingleton<StatefulServiceContext>(serviceContext);
+ services.AddSingleton<IReliableStateManager>(this.StateManager);
+ })
+ .Build();
+ }))
+ };
+}
+```
+
+>[!NOTE]
+> As KestrelCommunicationListener and HttpSysCommunicationListener are meant for web services, a web server must be registered and configured (using the [ConfigureWebHostDefaults](/dotnet/api/microsoft.extensions.hosting.generichostbuilderextensions.configurewebhostdefaults) or [ConfigureWebHost](/dotnet/api/microsoft.extensions.hosting.generichostwebhostbuilderextensions.configurewebhost) method) over the IHost.
+
+ASP.NET Core 6 introduced the minimal hosting model, which is a simplified and streamlined way of creating web applications. The minimal hosting model can also be used with KestrelCommunicationListener and HttpSysCommunicationListener.
+
+```csharp
+// Stateless Service
+protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
+{
+ return new ServiceInstanceListener[]
+ {
+ new ServiceInstanceListener(serviceContext =>
+ new KestrelCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
+ {
+ var builder = WebApplication.CreateBuilder();
+
+ builder.Services.AddSingleton<StatelessServiceContext>(serviceContext);
+ builder.WebHost
+ .UseKestrel()
+ .UseContentRoot(Directory.GetCurrentDirectory())
+ .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
+ .UseUrls(url);
+
+ builder.Services.AddControllersWithViews();
+
+ var app = builder.Build();
+
+ if (!app.Environment.IsDevelopment())
+ {
+ app.UseExceptionHandler("/Home/Error");
+ }
+
+ app.UseHttpsRedirection();
+ app.UseStaticFiles();
+ app.UseRouting();
+ app.UseAuthorization();
+ app.MapControllerRoute(
+ name: "default",
+ pattern: "{controller=Home}/{action=Index}/{id?}");
+
+ return app;
+ }))
+ };
+}
+```
+
+```csharp
+// Stateful Service
+protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
+{
+ return new ServiceReplicaListener[]
+ {
+ new ServiceReplicaListener(serviceContext =>
+ new KestrelCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
+ {
+ var builder = WebApplication.CreateBuilder();
+
+ builder.Services
+ .AddSingleton<StatefulServiceContext>(serviceContext)
+ .AddSingleton<IReliableStateManager>(this.StateManager);
+ builder.WebHost
+ .UseKestrel()
+ .UseContentRoot(Directory.GetCurrentDirectory())
+ .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.UseUniqueServiceUrl)
+ .UseUrls(url);
+
+ builder.Services.AddControllersWithViews();
+
+ var app = builder.Build();
+
+ if (!app.Environment.IsDevelopment())
+ {
+ app.UseExceptionHandler("/Home/Error");
+ }
+ app.UseStaticFiles();
+ app.UseRouting();
+ app.UseAuthorization();
+ app.MapControllerRoute(
+ name: "default",
+ pattern: "{controller=Home}/{action=Index}/{id?}");
+
+ return app;
+ }))
+ };
+}
+```
+
## Service Fabric configuration provider

App configuration in ASP.NET Core is based on key-value pairs established by the configuration provider. Read [Configuration in ASP.NET Core](/aspnet/core/fundamentals/configuration/) to understand more about general ASP.NET Core configuration support.
service-fabric Service Fabric Reliable Services Lifecycle Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-lifecycle-java.md
Finally, you have to think about error or failure conditions.
The lifecycle of a stateless service is fairly straightforward. Here's the order of events:

1. The service is constructed.
-2. These events occur in parallel:
- - `StatelessService.createServiceInstanceListeners()` is invoked, and any returned listeners are opened. `CommunicationListener.openAsync()` is called on each listener.
+2. `StatelessService.createServiceInstanceListeners()` is invoked, and any returned listeners are opened. `CommunicationListener.openAsync()` is called on each listener.
+3. Then in parallel:
- The service's `runAsync` method (`StatelessService.runAsync()`) is called.
-3. If present, the service's own `onOpenAsync` method is called. Specifically, `StatelessService.onOpenAsync()` is called. This is an uncommon override, but it is available.
-
-It's important to note that there is no ordering between the call to create and open the listeners and the call to `runAsync`. The listeners might open before `runAsync` is started. Similarly, `runAsync` might be invoked before the communication listeners are open, or before they have even been constructed. If any synchronization is required, it must be done by the implementer. Here are some common solutions:
-
-* Sometimes listeners can't function until other information is created or other work is done. For stateless services, that work usually can be done in the service's constructor. It can be done during the `createServiceInstanceListeners()` call, or as part of the construction of the listener itself.
-* Sometimes the code in `runAsync` won't start until the listeners are open. In this case, additional coordination is necessary. A common solution is to add a flag in the listeners. The flag indicates when the listeners have finished. The `runAsync` method checks this before continuing the actual work.
+ - If present, the service's own `onOpenAsync` method is called. Specifically, `StatelessService.onOpenAsync()` is called. This is an uncommon override, but it is available.
## Stateless service shutdown

When shutting down a stateless service, the same pattern is followed, but in reverse:
-1. These events occur in parallel:
- - Any open listeners are closed. `CommunicationListener.closeAsync()` is called on each listener.
- - The cancellation token that was passed to `runAsync()` is canceled. Checking the cancellation token's `isCancelled` property returns `true`, and if called, the token's `throwIfCancellationRequested` method throws a `CancellationException`.
-2. When `closeAsync()` finishes on each listener and `runAsync()` also finishes, the service's `StatelessService.onCloseAsync()` method is called, if it's present. Again, this is not a common override, but it can be used to safely close resources, stop background processing, finish saving external state, or close down existing connections.
-3. After `StatelessService.onCloseAsync()` finishes, the service object is destructed.
+1. Any open listeners are closed. `CommunicationListener.closeAsync()` is called on each listener.
+2. The cancellation token that was passed to `runAsync()` is canceled. Checking the cancellation token's `isCancelled` property returns `true`, and if called, the token's `throwIfCancellationRequested` method throws a `CancellationException`.
+3. When `runAsync()` finishes, the service's `StatelessService.onCloseAsync()` method is called, if it's present. Again, this is not a common override, but it can be used to safely close resources, stop background processing, finish saving external state, or close down existing connections.
+4. After `StatelessService.onCloseAsync()` finishes, the service object is destructed.
## Stateful service startup

Stateful services have a pattern that is similar to stateless services, with a few changes. Here's the order of events for starting a stateful service:

1. The service is constructed.
2. `StatefulServiceBase.onOpenAsync()` is called. This call is not commonly overridden in the service.
-3. These events occur in parallel:
- - `StatefulServiceBase.createServiceReplicaListeners()` is invoked.
+3. `StatefulServiceBase.createServiceReplicaListeners()` is invoked.
 - If the service is a primary service, all returned listeners are opened. `CommunicationListener.openAsync()` is called on each listener.
 - If the service is a secondary service, only listeners marked as `listenOnSecondary = true` are opened. Having listeners that are open on secondaries is less common.
+4. Then in parallel:
- If the service is currently a primary, the service's `StatefulServiceBase.runAsync()` method is called.
-4. After all the replica listener's `openAsync()` calls finish and `runAsync()` is called, `StatefulServiceBase.onChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+ - `StatefulServiceBase.onChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+
-Similar to stateless services, in stateful service, there's no coordination between the order in which the listeners are created and opened and when `runAsync` is called. If you need coordination, the solutions are much the same. But there's one additional case for stateful service. Say that the calls that arrive at the communication listeners require information kept inside some [Reliable Collections](service-fabric-reliable-services-reliable-collections.md). Because the communication listeners might open before the Reliable Collections are readable or writeable, and before `runAsync` starts, some additional coordination is necessary. The simplest and most common solution is for the communication listeners to return an error code. The client uses the error code to know to retry the request.
+ > [!NOTE]
+ > For a new secondary replica, `StatefulServiceBase.onChangeRoleAsync()` is called twice. Once after step 2, when it becomes an Idle Secondary and again during step 4, when it becomes an Active Secondary. For more information on replica and instance lifecycle, read [Replica and Instance Lifecycle](service-fabric-concepts-replica-lifecycle.md).
## Stateful service shutdown

Like stateless services, the lifecycle events during shutdown are the same as during startup, but reversed. When a stateful service is being shut down, the following events occur:
-1. These events occur in parallel:
- - Any open listeners are closed. `CommunicationListener.closeAsync()` is called on each listener.
- - The cancellation token that was passed to `runAsync()` is canceled. A call to the cancellation token's `isCancelled()` method returns `true`, and if called, the token's `throwIfCancellationRequested()` method throws an `OperationCanceledException`.
-2. After `closeAsync()` finishes on each listener and `runAsync()` also finishes, the service's `StatefulServiceBase.onChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+1. Any open listeners are closed. `CommunicationListener.closeAsync()` is called on each listener.
+2. The cancellation token that was passed to `runAsync()` is canceled. A call to the cancellation token's `isCancelled()` method returns `true`, and if called, the token's `throwIfCancellationRequested()` method throws an `OperationCanceledException`. Service Fabric waits for `runAsync()` to complete.
+> [!NOTE]
+> Waiting for `runAsync` to finish is necessary only if this replica is a primary replica.
- > [!NOTE]
- > Waiting for `runAsync` to finish is necessary only if this replica is a primary replica.
-
-3. After the `StatefulServiceBase.onChangeRoleAsync()` method finishes, the `StatefulServiceBase.onCloseAsync()` method is called. This call is an uncommon override, but it is available.
-3. After `StatefulServiceBase.onCloseAsync()` finishes, the service object is destructed.
+3. After `runAsync()` finishes, the service's `StatefulServiceBase.onCloseAsync()` method is called. This call is an uncommon override, but it is available.
+4. After `StatefulServiceBase.onCloseAsync()` finishes, the service object is destructed.
## Stateful service primary swaps

While a stateful service is running, communication listeners are opened and the `runAsync` method is called only for the primary replicas of that stateful service. Secondary replicas are constructed, but see no further calls. While a stateful service is running, the replica that's currently the primary can change. The lifecycle events that a stateful replica can see depend on whether it is the replica being demoted or promoted during the swap.
While a stateful service is running, communication listeners are opened and the
### For the demoted primary

Service Fabric needs the primary replica that's demoted to stop processing messages and stop any background work. This step is similar to when the service is shut down. One difference is that the service isn't destructed or closed, because it remains as a secondary. The following events occur:
-1. These events occur in parallel:
- - Any open listeners are closed. `CommunicationListener.closeAsync()` is called on each listener.
- - The cancellation token that was passed to `runAsync()` is canceled. A check of the cancellation token's `isCancelled()` method returns `true`. If called, the token's `throwIfCancellationRequested()` method throws an `OperationCanceledException`.
-2. After `closeAsync()` finishes on each listener and `runAsync()` also finishes, the service's `StatefulServiceBase.onChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+1. Any open listeners are closed. `CommunicationListener.closeAsync()` is called on each listener.
+2. The cancellation token that was passed to `runAsync()` is canceled. A check of the cancellation token's `isCancelled()` method returns `true`. If called, the token's `throwIfCancellationRequested()` method throws an `OperationCanceledException`. Service Fabric waits for `runAsync()` to complete.
+3. Listeners marked as `listenOnSecondary = true` are opened.
+4. The service's `StatefulServiceBase.onChangeRoleAsync()` is called. This call is not commonly overridden in the service.
### For the promoted secondary

Similarly, Service Fabric needs the secondary replica that's promoted to start listening for messages on the wire, and to start any background tasks that it needs to complete. This process is similar to when the service is created. The difference is that the replica itself already exists. The following events occur:
-1. These events occur in parallel:
- - `StatefulServiceBase.createServiceReplicaListeners()` is invoked and any returned listeners are opened. `CommunicationListener.openAsync()` is called on each listener.
+1. `CommunicationListener.closeAsync()` is called for all the opened listeners (marked with `listenOnSecondary = true`).
+2. All the communication listeners are opened. `CommunicationListener.openAsync()` is called on each listener.
+3. Then in parallel:
- The service's `StatefulServiceBase.runAsync()` method is called.
-2. After all the replica listener's `openAsync()` calls finish and `runAsync()` is called, `StatefulServiceBase.onChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+ - `StatefulServiceBase.onChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+
+ > [!NOTE]
+ > `createServiceReplicaListeners` is called only once and is not called again during the replica promotion or demotion process; the same `ServiceReplicaListener` instances are used but new `CommunicationListener` instances are created (by calling the `ServiceReplicaListener.createCommunicationListener` method) after the previous instances are closed.
### Common issues during stateful service shutdown and primary demotion

Service Fabric changes the primary of a stateful service for multiple reasons. The most common reasons are [cluster rebalancing](service-fabric-cluster-resource-manager-balancing.md) and [application upgrade](service-fabric-application-upgrade.md). During these operations, it's important that the service respects the `cancellationToken`. This also applies during normal service shutdown, such as if the service was deleted.
service-fabric Service Fabric Reliable Services Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-lifecycle.md
There are details around the exact ordering of these events. The order of events
The lifecycle of a stateless service is straightforward. Here's the order of events (a sketch follows the list):

1. The service is constructed.
-2. Then, in parallel, two things happen:
- - `StatelessService.CreateServiceInstanceListeners()` is invoked and any returned listeners are opened. `ICommunicationListener.OpenAsync()` is called on each listener.
+2. `StatelessService.CreateServiceInstanceListeners()` is invoked and any returned listeners are opened. `ICommunicationListener.OpenAsync()` is called on each listener.
+3. Then, in parallel, two things happen:
- The service's `StatelessService.RunAsync()` method is called.
-3. If present, the service's `StatelessService.OnOpenAsync()` method is called. This call is an uncommon override, but it is available. Extended service initialization tasks can be started at this time.
-
-Keep in mind that there is no ordering between the calls to create and open the listeners and **RunAsync**. The listeners can open before **RunAsync** is started. Similarly, you can invoke **RunAsync** before the communication listeners are open or even constructed. If any synchronization is required, it is left as an exercise to the implementer. Here are some common solutions:
-
- - Sometimes listeners can't function until some other information is created or work is done. For stateless services, that work can usually be done in other locations, such as the following:
- - In the service's constructor.
- - During the `CreateServiceInstanceListeners()` call.
- - As a part of the construction of the listener itself.
- - Sometimes the code in **RunAsync** doesn't start until the listeners are open. In this case, additional coordination is necessary. One common solution is that there is a flag within the listeners that indicates when they have finished. This flag is then checked in **RunAsync** before continuing to actual work.
+ - If present, the service's `StatelessService.OnOpenAsync()` method is called. This call is an uncommon override, but it is available. Extended service initialization tasks can be started at this time.
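As a rough illustration of these hooks (a sketch that follows the ordering above, not the article's sample), a stateless service might override them as follows:

```csharp
using System;
using System.Collections.Generic;
using System.Fabric;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class MyStatelessService : StatelessService
{
    public MyStatelessService(StatelessServiceContext context)
        : base(context) { }

    // Step 2: listeners returned here are opened via ICommunicationListener.OpenAsync().
    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
        => Enumerable.Empty<ServiceInstanceListener>();

    // Step 3 (in parallel): an uncommon override for extended initialization.
    protected override Task OnOpenAsync(CancellationToken cancellationToken)
        => Task.CompletedTask;

    // Step 3 (in parallel): long-running work that must honor the cancellation token.
    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
}
```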
## Stateless service shutdown

For shutting down a stateless service, the same pattern is followed, just in reverse:
-1. In parallel:
- - Any open listeners are closed. `ICommunicationListener.CloseAsync()` is called on each listener.
- - The cancellation token passed to `RunAsync()` is canceled. A check of the cancellation token's `IsCancellationRequested` property returns true, and if called, the token's `ThrowIfCancellationRequested` method throws an `OperationCanceledException`.
-2. After `CloseAsync()` finishes on each listener and `RunAsync()` also finishes, the service's `StatelessService.OnCloseAsync()` method is called, if present. OnCloseAsync is called when the stateless service instance is going to be gracefully shut down. This can occur when the service's code is being upgraded, the service instance is being moved due to load balancing, or a transient fault is detected. It is uncommon to override `StatelessService.OnCloseAsync()`, but it can be used to safely close resources, stop background processing, finish saving external state, or close down existing connections.
-3. After `StatelessService.OnCloseAsync()` finishes, the service object is destructed.
+1. Any open listeners are closed. `ICommunicationListener.CloseAsync()` is called on each listener.
+2. The cancellation token passed to `RunAsync()` is canceled. A check of the cancellation token's `IsCancellationRequested` property returns true, and if called, the token's `ThrowIfCancellationRequested` method throws an `OperationCanceledException`. Service Fabric waits for `RunAsync()` to complete.
+3. After `RunAsync()` finishes, the service's `StatelessService.OnCloseAsync()` method is called, if present. OnCloseAsync is called when the stateless service instance is going to be gracefully shut down. This can occur when the service's code is being upgraded, the service instance is being moved due to load balancing, or a transient fault is detected. It is uncommon to override `StatelessService.OnCloseAsync()`, but it can be used to safely close resources, stop background processing, finish saving external state, or close down existing connections.
+4. After `StatelessService.OnCloseAsync()` finishes, the service object is destructed.
## Stateful service startup

Stateful services have a similar pattern to stateless services, with a few changes. For starting up a stateful service, the order of events is as follows:

1. The service is constructed.
2. `StatefulServiceBase.OnOpenAsync()` is called. This call is not commonly overridden in the service.
-3. The following things happen in parallel:
- - `StatefulServiceBase.CreateServiceReplicaListeners()` is invoked.
+3. `StatefulServiceBase.CreateServiceReplicaListeners()` is invoked.
 - If the service is a Primary service, all returned listeners are opened. `ICommunicationListener.OpenAsync()` is called on each listener.
 - If the service is a Secondary service, only those listeners marked as `ListenOnSecondary = true` are opened. Having listeners that are open on secondaries is less common.
+4. Then in parallel:
- If the service is currently a Primary, the service's `StatefulServiceBase.RunAsync()` method is called.
-4. After all the replica listener's `OpenAsync()` calls finish and `RunAsync()` is called, `StatefulServiceBase.OnChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+ - `StatefulServiceBase.OnChangeRoleAsync()` is called. This call is not commonly overridden in the service.
-Similar to stateless services, there's no coordination between the order in which the listeners are created and opened and when **RunAsync** is called. If you need coordination, the solutions are much the same. There is one additional case for stateful service. Say that the calls that arrive at the communication listeners require information kept inside some [Reliable Collections](service-fabric-reliable-services-reliable-collections.md).
> [!NOTE]
- > Because the communication listeners could open before the reliable collections are readable or writeable, and before **RunAsync** could start, some additional coordination is necessary. The simplest and most common solution is for the communication listeners to return an error code that the client uses to know to retry the request.
+ > For a new secondary replica, `StatefulServiceBase.OnChangeRoleAsync()` is called twice. Once after step 2, when it becomes an Idle Secondary and again during step 4, when it becomes an Active Secondary. For more information on replica and instance lifecycle, read [Replica and Instance Lifecycle](service-fabric-concepts-replica-lifecycle.md).
## Stateful service shutdown

Like stateless services, the lifecycle events during shutdown are the same as during startup, but reversed. When a stateful service is being shut down, the following events occur:
-1. In parallel:
- - Any open listeners are closed. `ICommunicationListener.CloseAsync()` is called on each listener.
- - The cancellation token passed to `RunAsync()` is canceled. A check of the cancellation token's `IsCancellationRequested` property returns true, and if called, the token's `ThrowIfCancellationRequested` method throws an `OperationCanceledException`.
-2. After `CloseAsync()` finishes on each listener and `RunAsync()` also finishes, the service's `StatefulServiceBase.OnChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+1. Any open listeners are closed. `ICommunicationListener.CloseAsync()` is called on each listener.
+2. The `StatefulServiceBase.OnCloseAsync()` method is called. This call is an uncommon override, but it is available.
+3. The cancellation token passed to `RunAsync()` is canceled. A check of the cancellation token's `IsCancellationRequested` property returns true, and if called, the token's `ThrowIfCancellationRequested` method throws an `OperationCanceledException`. Service Fabric waits for `RunAsync()` to complete.
> [!NOTE]
> The need to wait for **RunAsync** to finish is only necessary if this replica is a Primary replica.
-3. After the `StatefulServiceBase.OnChangeRoleAsync()` method finishes, the `StatefulServiceBase.OnCloseAsync()` method is called. This call is an uncommon override, but it is available.
-3. After `StatefulServiceBase.OnCloseAsync()` finishes, the service object is destructed.
+4. After `StatefulServiceBase.RunAsync()` finishes, the service object is destructed.
## Stateful service Primary swaps

While a stateful service is running, only the Primary replicas of that stateful service have their communication listeners opened and their **RunAsync** method called. Secondary replicas are constructed, but see no further calls. While a stateful service is running, the replica that's currently the Primary can change as a result of fault or cluster balancing optimization. What does this mean in terms of the lifecycle events that a replica can see? The behavior the stateful replica sees depends on whether it is the replica being demoted or promoted during the swap.
While a stateful service is running, only the Primary replicas of that stateful
### For the Primary that's demoted For the Primary replica that's demoted, Service Fabric needs this replica to stop processing messages and quit any background work it is doing. As a result, this step looks like it did when the service is shut down. One difference is that the service isn't destructed or closed because it remains as a Secondary. The following APIs are called:
-1. In parallel:
- - Any open listeners are closed. `ICommunicationListener.CloseAsync()` is called on each listener.
- - The cancellation token passed to `RunAsync()` is canceled. A check of the cancellation token's `IsCancellationRequested` property returns true, and if called, the token's `ThrowIfCancellationRequested` method throws an `OperationCanceledException`.
-2. After `CloseAsync()` finishes on each listener and `RunAsync()` also finishes, the service's `StatefulServiceBase.OnChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+1. Any open listeners are closed. `ICommunicationListener.CloseAsync()` is called on each listener.
+2. The cancellation token passed to `RunAsync()` is canceled. A check of the cancellation token's `IsCancellationRequested` property returns true, and if called, the token's `ThrowIfCancellationRequested` method throws an `OperationCanceledException`. Service Fabric waits for `RunAsync()` to complete.
+3. Listeners marked as `ListenOnSecondary = true` are opened.
+4. The service's `StatefulServiceBase.OnChangeRoleAsync()` is called. This call is not commonly overridden in the service.
### For the Secondary that's promoted

Similarly, Service Fabric needs the Secondary replica that's promoted to start listening for messages on the wire and start any background tasks it needs to complete. As a result, this process looks like it did when the service is created, except that the replica itself already exists. The following APIs are called:
-1. In parallel:
- - `StatefulServiceBase.CreateServiceReplicaListeners()` is invoked and any returned listeners are opened. `ICommunicationListener.OpenAsync()` is called on each listener.
+1. `ICommunicationListener.CloseAsync()` is called for all the opened listeners (marked with `ListenOnSecondary = true`).
+2. All the communication listeners are opened. `ICommunicationListener.OpenAsync()` is called on each listener.
+3. Then in parallel:
- The service's `StatefulServiceBase.RunAsync()` method is called.
-2. After all the replica listener's `OpenAsync()` calls finish and `RunAsync()` is called, `StatefulServiceBase.OnChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+ - `StatefulServiceBase.OnChangeRoleAsync()` is called. This call is not commonly overridden in the service.
+
+ > [!NOTE]
+ > `CreateServiceReplicaListeners` is called only once and is not called again during the replica promotion or demotion process; the same `ServiceReplicaListener` instances are used but new `ICommunicationListener` instances are created (by calling the `ServiceReplicaListener.CreateCommunicationListener` method) after the previous instances are closed.
### Common issues during stateful service shutdown and Primary demotion

Service Fabric changes the Primary of a stateful service for a variety of reasons. The most common are [cluster rebalancing](service-fabric-cluster-resource-manager-balancing.md) and [application upgrade](service-fabric-application-upgrade.md). During these operations (as well as during normal service shutdown, like you'd see if the service was deleted), it is important that the service respect the `CancellationToken`, as in the sketch below.
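As a minimal sketch, a **RunAsync** body (inside a `StatefulService` subclass) that respects cancellation promptly might look like this; `DoWorkAsync` is a hypothetical helper:

```csharp
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    while (true)
    {
        // Throws OperationCanceledException as soon as demotion or shutdown is requested,
        // so Service Fabric doesn't wait on stuck work.
        cancellationToken.ThrowIfCancellationRequested();

        await DoWorkAsync(cancellationToken); // hypothetical unit of work
    }
}
```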
service-fabric Service Fabric Tutorial Create Vnet And Linux Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-vnet-and-linux-cluster.md
Title: Create a Linux Service Fabric cluster in Azure
+ Title: Create a Linux Service Fabric cluster in Azure
description: Learn how to deploy a Linux Service Fabric cluster into an existing Azure virtual network using Azure CLI.
The following procedures create a seven-node Service Fabric cluster. To calculat
Download the following Resource Manager template files:

For Ubuntu 16.04 LTS:
-* [AzureDeploy.json][template]
-* [AzureDeploy.Parameters.json][parameters]
+- [AzureDeploy.json][template]
+ - **vmImageSku** attribute is set to "16.04-LTS"
+ - Microsoft.ServiceFabric/clusters resource's
+ - **apiVersion** being set to "2018-02-01"
+ - **vmImage** property being set to "Linux"
+- [AzureDeploy.Parameters.json][parameters]
For Ubuntu 18.04 LTS:
-* [AzureDeploy.json][template2]
-* [AzureDeploy.Parameters.json][parameters2]
-
-For Ubuntu 18.04 LTS the difference between the two templates are
-* the **vmImageSku** attribute being set to "18.04-LTS"
-* each node's **typeHandlerVersion** being set to 1.1
-* Microsoft.ServiceFabric/clusters resource's
- - **apiVersion** being set to "2019-03-01" or higher
- - **vmImage** property being set to "Ubuntu18_04"
-
-This template deploys a secure cluster of seven virtual machines and three node types into a virtual network. Other sample templates can be found on [GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates). The [AzureDeploy.json][template] deploys a number resources, including the following.
+- [AzureDeploy.json][template2]
+ - **vmImageSku** attribute is set to "18.04-LTS"
+ - Microsoft.ServiceFabric/clusters resource's
+ - **apiVersion** being set to "2019-03-01"
+ - **vmImage** property being set to "Ubuntu18_04"
+- [AzureDeploy.Parameters.json][parameters2]
+
+For Ubuntu 20.04 LTS:
+- [AzureDeploy.json][template3]
+ - **vmImageSku** attribute is set to "20.04-LTS"
+ - Microsoft.ServiceFabric/clusters resource's
+ - **apiVersion** being set to "2019-03-01"
+ - **vmImage** property being set to "Ubuntu20_04"
+- [AzureDeploy.Parameters.json][parameters3]
+
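Once you've downloaded a template and its parameter file, deploying it with Azure CLI might look like the following sketch; the resource group name, location, and file paths are placeholders:

```bash
az group create --name sf-linux-cluster-rg --location eastus

az deployment group create \
  --resource-group sf-linux-cluster-rg \
  --template-file AzureDeploy.json \
  --parameters @AzureDeploy.Parameters.json
```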
+These templates deploy a secure cluster of seven virtual machines and three node types into a virtual network. Other sample templates can be found on [GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates). The AzureDeploy.json deploys a number of resources, including the following.
### Service Fabric cluster
The template in this article deploy a cluster that uses the certificate thumbpri
[parameters]:https://github.com/Azure-Samples/service-fabric-cluster-templates/blob/master/7-VM-Ubuntu-3-NodeTypes-Secure/AzureDeploy.Parameters.json [template2]:https://github.com/Azure-Samples/service-fabric-cluster-templates/blob/master/7-VM-Ubuntu-1804-3-NodeTypes-Secure/AzureDeploy.json [parameters2]:https://github.com/Azure-Samples/service-fabric-cluster-templates/blob/master/7-VM-Ubuntu-1804-3-NodeTypes-Secure/AzureDeploy.Parameters.json
+[template3]:https://github.com/Azure-Samples/service-fabric-cluster-templates/blob/master/7-VM-Ubuntu-2004-3-NodeTypes-Secure/AzureDeploy.json
+[parameters3]:https://github.com/Azure-Samples/service-fabric-cluster-templates/blob/master/7-VM-Ubuntu-2004-3-NodeTypes-Secure/AzureDeploy.Parameters.json
service-fabric Service Fabric Tutorial Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-deploy-app.md
+
+ Title: Deploy a Service Fabric app to a cluster in Azure
+description: Learn how to deploy an existing application to a newly created Azure Service Fabric cluster from Visual Studio.
+Last updated : 07/22/2019
+# Tutorial: Deploy a Service Fabric application to a cluster in Azure
+
+This tutorial is part two of a series. It shows you how to deploy an Azure Service Fabric application to a new cluster in Azure.
+
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+> * Create a cluster.
+> * Deploy an application to a remote cluster using Visual Studio.
+
+In this tutorial series, you learn how to:
+> [!div class="checklist"]
+> * [Build a .NET Service Fabric application](service-fabric-tutorial-create-dotnet-app.md).
+> * Deploy the application to a remote cluster.
+> * [Add an HTTPS endpoint to an ASP.NET Core front-end service](service-fabric-tutorial-dotnet-app-enable-https-endpoint.md).
+> * [Configure CI/CD by using Azure Pipelines](service-fabric-tutorial-deploy-app-with-cicd-vsts.md).
+> * [Set up monitoring and diagnostics for the application](service-fabric-tutorial-monitoring-aspnet.md).
+
+## Prerequisites
+
+Before you begin this tutorial:
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* [Install Visual Studio 2019](https://www.visualstudio.com/), and install the **Azure development** and **ASP.NET and web development** workloads.
+* [Install the Service Fabric SDK](service-fabric-get-started.md).
+
+> [!NOTE]
+> A free account may not meet the requirements to create a virtual machine, which will prevent you from completing the tutorial. In addition, a non-work or non-school account may encounter permission issues while creating the certificate on the key vault associated with the cluster. If you experience an error related to certificate creation, use the Azure portal to create the cluster instead.
+
+## Download the voting sample application
+
+If you didn't build the voting sample application in [part one of this tutorial series](service-fabric-tutorial-create-dotnet-app.md), you can download it. In a command window, run the following code to clone the sample application repository to your local machine.
+
+```git
+git clone https://github.com/Azure-Samples/service-fabric-dotnet-quickstart
+```
+
+Open the application in Visual Studio, running as administrator, and build the application.
+
+## Create a cluster
+
+Now that the application is ready, you create a Service Fabric cluster and then deploy the application to the cluster. A [Service Fabric cluster](./service-fabric-deploy-anywhere.md) is a network-connected set of virtual or physical machines into which your microservices are deployed and managed.
+
+In this tutorial, you create a new three node test cluster in the Visual Studio IDE and then publish the application to that cluster. See the [Create and manage a cluster tutorial](service-fabric-tutorial-create-vnet-and-windows-cluster.md) for information on creating a production cluster. You can also deploy the application to an existing cluster that you previously created through the [Azure portal](https://portal.azure.com), by using [PowerShell](./scripts/service-fabric-powershell-create-secure-cluster-cert.md) or [Azure CLI](./scripts/cli-create-cluster.md) scripts, or from an [Azure Resource Manager template](service-fabric-tutorial-create-vnet-and-windows-cluster.md).
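If you'd rather script the test cluster than use the Visual Studio dialog, the following is a rough PowerShell sketch, not part of this tutorial; the resource group, DNS name, output folder, password, and OS choice are all placeholder assumptions:

```azurepowershell
# All names, paths, and the password below are hypothetical; requires the Az.ServiceFabric module.
$password = ConvertTo-SecureString -String "Password#1234" -AsPlainText -Force

# Creates a three-node secure test cluster and writes a self-signed certificate to the output folder.
New-AzServiceFabricCluster -ResourceGroupName "mytestclustergroup" `
    -Location "southcentralus" `
    -ClusterSize 3 `
    -VmPassword $password `
    -CertificateSubjectName "mytestcluster.southcentralus.cloudapp.azure.com" `
    -CertificateOutputFolder "C:\certs" `
    -CertificatePassword $password `
    -OS WindowsServer2019Datacenter
```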
+
+> [!NOTE]
+> The Voting application, and many other applications, use the Service Fabric reverse proxy to communicate between services. Clusters created from Visual Studio have the reverse proxy enabled by default. If you're deploying to an existing cluster, you must [enable the reverse proxy in the cluster](service-fabric-reverseproxy-setup.md) for the Voting application to work.
+
+### Find the VotingWeb service endpoint
+
+The front-end web service of the Voting application is listening on a specific port (8080 if you followed the steps in [part one of this tutorial series](service-fabric-tutorial-create-dotnet-app.md)). When the application deploys to a cluster in Azure, both the cluster and the application run behind an Azure load balancer. The application port must be opened in the Azure load balancer by using a rule. The rule sends inbound traffic through the load balancer to the web service. The port is found in the **VotingWeb/PackageRoot/ServiceManifest.xml** file in the **Endpoint** element.
+
+```xml
+<Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="8080" />
+```
+
+Take note of the service endpoint, which is needed in a later step. If you're deploying to an existing cluster, open this port by creating a load-balancing rule and probe in the Azure load balancer using a [PowerShell script](./scripts/service-fabric-powershell-open-port-in-load-balancer.md) or via the load balancer for this cluster in the [Azure portal](https://portal.azure.com).
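If you're opening the port on an existing cluster with PowerShell, a minimal sketch of the probe-plus-rule pair follows; the resource group and load balancer names are assumptions, not values from this tutorial:

```azurepowershell
# Assumed resource group and load balancer names; requires the Az.Network module.
$lb = Get-AzLoadBalancer -ResourceGroupName "mytestclustergroup" -Name "LB-mytestcluster"

# Health probe on the VotingWeb port.
$lb | Add-AzLoadBalancerProbeConfig -Name "VotingWebProbe" -Protocol Tcp -Port 8080 `
    -IntervalInSeconds 15 -ProbeCount 2 | Out-Null
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "VotingWebProbe"

# Rule that forwards inbound traffic on port 8080 to the back-end pool, gated by the probe.
$lb | Add-AzLoadBalancerRuleConfig -Name "VotingWebRule" -Protocol Tcp `
    -FrontendPort 8080 -BackendPort 8080 `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe $probe | Out-Null

# Persist the configuration change.
$lb | Set-AzLoadBalancer
```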
+
+### Create a test cluster in Azure
+In Solution Explorer, right-click on **Voting** and select **Publish**.
+
+In **Connection Endpoint**, select **Create New Cluster**. If you're deploying to an existing cluster, select the cluster endpoint from the list. The Create Service Fabric Cluster dialog opens.
+
+In the **Cluster** tab, enter the **Cluster name** (for example, "mytestcluster"), select your subscription, select a region for the cluster (such as South Central US), enter the number of cluster nodes (we recommend three nodes for a test cluster), and enter a resource group (such as "mytestclustergroup"). Click **Next**.
+
+![Screenshot shows the Cluster tab of the Create Service Fabric Cluster dialog box.](./media/service-fabric-tutorial-deploy-app-to-party-cluster/create-cluster.png)
+
+In the **Certificate** tab, enter the password and output path for the cluster certificate. A self-signed certificate is created as a PFX file and saved to the specified output path. The certificate is used for both node-to-node and client-to-node security. Don't use a self-signed certificate for production clusters. This certificate is used by Visual Studio to authenticate with the cluster and deploy applications. Select **Import certificate** to install the PFX in the CurrentUser\My certificate store of your computer. Click **Next**.
+
+![Screenshot shows the Certificate tab of the Create Service Fabric Cluster dialog box.](./media/service-fabric-tutorial-deploy-app-to-party-cluster/certificate.png)
+
+In the **VM Detail** tab, enter the **User name** and **Password** for the cluster admin account. Select the **Virtual machine image** for the cluster nodes and the **Virtual machine size** for each cluster node. Click the **Advanced** tab.
+
+![Screenshot shows the V M Detail tab of the Create Service Fabric Cluster dialog box.](./media/service-fabric-tutorial-deploy-app-to-party-cluster/vm-detail.png)
+
+In **Ports**, enter the VotingWeb service endpoint from the previous step (for example, 8080). When the cluster is created, these application ports are opened in the Azure load balancer to forward traffic to the cluster. Click **Create** to create the cluster, which takes several minutes.
+
+![Screenshot shows the Advanced tab of the Create Service Fabric Cluster dialog box.](./media/service-fabric-tutorial-deploy-app-to-party-cluster/advanced.png)
+
+## Publish the application to the cluster
+
+When the new cluster is ready, you can deploy the Voting application directly from Visual Studio.
+
+In Solution Explorer, right-click on **Voting** and select **Publish**. The **Publish** dialog box appears.
+
+In **Connection Endpoint**, select the endpoint for the cluster you created in the previous step. For example, "mytestcluster.southcentralus.cloudapp.azure.com:19000". If you select **Advanced Connection Parameters**, the certificate information should be auto-filled.
+![Publish a Service Fabric application](./media/service-fabric-tutorial-deploy-app-to-party-cluster/publish-app.png)
+
+Select **Publish**.
+
+Once the application is deployed, open a browser and enter the cluster address followed by **:8080**. Or enter another port if one is configured. An example is `http://mytestcluster.southcentralus.cloudapp.azure.com:8080`. You see the application running in the cluster in Azure. In the voting web page, try adding and deleting voting options and voting for one or more of these options.
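As a quick smoke test, you can also confirm the endpoint responds from PowerShell before opening a browser; the cluster address below is the assumed example name used in earlier steps:

```azurepowershell
# Expect an HTTP 200 response if the VotingWeb service is listening on port 8080.
Invoke-WebRequest -Uri "http://mytestcluster.southcentralus.cloudapp.azure.com:8080" -UseBasicParsing
```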
+
+![Service Fabric voting sample](./media/service-fabric-tutorial-deploy-app-to-party-cluster/application-screenshot-new-azure.png)
+
+## Next steps
+In this part of the tutorial, you learned how to:
+
+> [!div class="checklist"]
+> * Create a cluster.
+> * Deploy an application to a remote cluster using Visual Studio.
+
+Advance to the next tutorial:
+> [!div class="nextstepaction"]
+> [Enable HTTPS](service-fabric-tutorial-dotnet-app-enable-https-endpoint.md)
spatial-anchors Setup Unity Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/setup-unity-project.md
When developing Mixed Reality apps on HoloLens, you need to set the XR configuration.
Azure Spatial Anchors SDK versions 2.9.0 or earlier only provide support for the Windows XR plugin (`com.unity.xr.windowsmr`), so the Azure Spatial Anchors windows package has an explicit dependency on the Windows XR Plugin.
-Azure Spatial Anchors SDK versions 2.10.0 or later provide support for both the Mixed Reality OpenXR plugin ([com.microsoft.mixedreality.openxr](https://dev.azure.com/aipmr/MixedReality-Unity-Packages/_packaging?_a=package&feed=Unity-packages&view=overview&package=com.microsoft.mixedreality.openxr&protocolType=Npm)) and the Windows XR plugin ([com.unity.xr.windowsmr](https://docs.unity3d.com/Manual/com.unity.xr.windowsmr.html)). You need to include either the `com.microsoft.mixedreality.openxr` package or the `com.unity.xr.windowsmr` package in your project depending on your choice.
+Azure Spatial Anchors SDK versions 2.10.0 or later provide support for both the Mixed Reality OpenXR plugin ([com.microsoft.mixedreality.openxr](https://dev.azure.com/aipmr/MixedReality-Unity-Packages/_packaging?_a=package&feed=Unity-packages&view=overview&package=com.microsoft.mixedreality.openxr&protocolType=Npm)) and the Windows XR plugin `com.unity.xr.windowsmr`. You need to include either the `com.microsoft.mixedreality.openxr` package or the `com.unity.xr.windowsmr` package in your project depending on your choice.
#### Configure your Unity project capabilities
When it's all done, your `dependencies` section should look something like this:
## Next steps

> [!div class="nextstepaction"]
-> [How To: Create and locate anchors in Unity](./create-locate-anchors-unity.md)
+> [How To: Create and locate anchors in Unity](./create-locate-anchors-unity.md)
storage Object Replication Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-configure.md
Previously updated : 09/02/2021 Last updated : 04/26/2022
This article describes how to configure an object replication policy by using the Azure portal, PowerShell, or Azure CLI.
## Prerequisites
-Before you configure object replication, create the source and destination storage accounts if they do not already exist. The source and destination accounts can be either general-purpose v2 storage accounts or premium block blob accounts (preview). For more information, see [Create an Azure Storage account](../common/storage-account-create.md).
+Before you configure object replication, create the source and destination storage accounts if they do not already exist. The source and destination accounts can be either general-purpose v2 storage accounts or premium block blob accounts. For more information, see [Create an Azure Storage account](../common/storage-account-create.md).
Object replication requires that blob versioning is enabled for both the source and destination account, and that blob change feed is enabled for the source account. To learn more about blob versioning, see [Blob versioning](versioning-overview.md). To learn more about change feed, see [Change feed support in Azure Blob Storage](storage-blob-change-feed.md). Keep in mind that enabling these features can result in additional costs.

To configure an object replication policy for a storage account, you must be assigned the Azure Resource Manager **Contributor** role, scoped to the level of the storage account or higher. For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md) in the Azure role-based access control (Azure RBAC) documentation.
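As a minimal PowerShell sketch of those prerequisites (the resource group and account names are placeholders, not values from this article):

```azurepowershell
# Assumed names; requires the Az.Storage module.
# The source account needs blob versioning and change feed; the destination needs versioning.
Update-AzStorageBlobServiceProperty -ResourceGroupName "myRG" -StorageAccountName "srcaccount" `
    -IsVersioningEnabled $true -EnableChangeFeed $true
Update-AzStorageBlobServiceProperty -ResourceGroupName "myRG" -StorageAccountName "destaccount" `
    -IsVersioningEnabled $true
```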
-> [!IMPORTANT]
-> Object replication for premium block blob accounts is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
## Configure object replication with access to both storage accounts

If you have access to both the source and destination storage accounts, then you can configure the object replication policy on both accounts. The following examples show how to configure object replication with the Azure portal, PowerShell, or Azure CLI.
To create a replication policy in the Azure portal, follow these steps:
1. Under **Data management**, select **Object replication**.
1. Select **Set up replication rules**.
1. Select the destination subscription and storage account.
-1. In the **Container pairs** section, select a source container from the source account, and a destination container from the destination account. You can create up to 10 container pairs per replication policy.
+1. In the **Container pairs** section, select a source container from the source account, and a destination container from the destination account. You can create up to 10 container pairs per replication policy using this method. If you want to configure more than 10 container pairs (up to 1,000), see [Configure object replication using a JSON file](#configure-object-replication-using-a-json-file).
The following image shows a set of replication rules.
az storage account or-policy show \
-## Configure object replication with access to only the destination account
+## Configure object replication using a JSON file
-If you do not have permissions to the source storage account, then you can configure object replication on the destination account and provide a JSON file that contains the policy definition to another user to create the same policy on the source account. For example, if the source account is in a different Azure AD tenant from the destination account, then you can use this approach to configure object replication.
+If you do not have permissions to the source storage account or if you want to use more than 10 container pairs, then you can configure object replication on the destination account and provide a JSON file that contains the policy definition to another user to create the same policy on the source account. For example, if the source account is in a different Azure AD tenant from the destination account, then you can use this approach to configure object replication.
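One way this hand-off can look in PowerShell is sketched below; the resource group and account names are assumptions, and it presumes the policy has already been defined on the destination account:

```azurepowershell
# Assumed names; requires the Az.Storage module.
# Read the policy definition (including the generated policy ID) from the destination account.
$policy = Get-AzStorageObjectReplicationPolicy -ResourceGroupName "destRG" -StorageAccountName "destaccount"

# Re-create the matching policy on the source account from the same definition.
Set-AzStorageObjectReplicationPolicy -ResourceGroupName "srcRG" -StorageAccountName "srcaccount" `
    -InputObject $policy
```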
> [!NOTE]
> Cross-tenant object replication is permitted by default for a storage account. To prevent replication across tenants, you can set the **AllowCrossTenantReplication** property (preview) to disallow cross-tenant object replication for your storage accounts. For more information, see [Prevent object replication across Azure Active Directory tenants](object-replication-prevent-cross-tenant-policies.md).
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Previously updated : 04/19/2022 Last updated : 04/26/2022
Object replication requires that the following Azure Storage features are also enabled:
Enabling change feed and blob versioning may incur additional costs. For more details, refer to the [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage/).
-Object replication is supported for general-purpose v2 storage accounts, and for premium block blob accounts in preview. Both the source and destination accounts must be either general-purpose v2 or premium block blob accounts. Object replication supports block blobs only; append blobs and page blobs are not supported.
-
-> [!IMPORTANT]
-> Object replication for premium block blob accounts is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Object replication is supported for general-purpose v2 storage accounts and premium block blob accounts. Both the source and destination accounts must be either general-purpose v2 or premium block blob accounts. Object replication supports block blobs only; append blobs and page blobs are not supported.
## How object replication works
The source and destination accounts may be in the same region or in different regions.
### Replication rules
-Replication rules specify how Azure Storage will replicate blobs from a source container to a destination container. You can specify up to 10 replication rules for each replication policy. Each replication rule defines a single source and destination container, and each source and destination container can be used in only one rule, meaning that a maximum of 10 source containers and 10 destination containers may participate in a single replication policy.
+Replication rules specify how Azure Storage will replicate blobs from a source container to a destination container. You can specify up to 1,000 replication rules for each replication policy. Each replication rule defines a single source and destination container, and each source and destination container can be used in only one rule, meaning that a maximum of 1,000 source containers and 1,000 destination containers may participate in a single replication policy.
When you create a replication rule, by default only new block blobs that are subsequently added to the source container are copied. You can specify that both new and existing block blobs are copied, or you can define a custom copy scope that copies block blobs created from a specified time onward.
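For example, a single rule with a custom copy scope might be sketched in PowerShell like this (container names and the cutoff time are placeholders):

```azurepowershell
# Assumed container names; -MinCreationTime sets the custom copy scope so that only
# blobs created on or after this time are replicated. Requires the Az.Storage module.
$rule = New-AzStorageObjectReplicationPolicyRule -SourceContainer "source-container" `
    -DestinationContainer "dest-container" `
    -MinCreationTime "2022-01-01T00:00:00Z"
```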
This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
|--|--|--|--|--|
| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
-<sup>2</sup> Feature is supported in preview.
-
## Billing

Object replication incurs additional costs on read and write transactions against the source and destination accounts, as well as egress charges for the replication of data from the source account to the destination account and read charges to process change feed.
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
The HRESULT field contains the result code. The following are the most common error codes.
### [0x80070002](#tab/x80070002)
-This error code means the source file is not in storage.
+This error code means the source file isn't in storage.
-There are reasons why this can happen:
+There are several reasons why this error code can happen:
- The file was deleted by another application.
+  - A common scenario: the query execution starts, it enumerates the files, and the files are found. Later, during the query execution, a file is deleted (for example, by Databricks, Spark, or ADF). The query fails because the file isn't found.
+ - This issue can also occur with delta format. The query might succeed on retry because there's a new version of the table and the deleted file isn't queried again.
- Invalid execution plan cached
  - As a temporary mitigation, run the command `DBCC FREEPROCCACHE`. If the problem persists, create a support ticket.
The error message might also resemble:
```
File {path} cannot be opened because it does not exist or it is used by another process.
```
-- If an Azure AD login has a connection open for more than 1 hour during query execution, any query that relies on Azure AD fails. This includes querying storage using Azure AD pass-through and statements that interact with Azure AD (like CREATE EXTERNAL PROVIDER). This affects tools that keep connections open, like in query editor in SSMS and ADS. Tools that open new connections to execute a query, like Synapse Studio, are not affected.
+- If an Azure AD user has a connection open for more than 1 hour during query execution, any query that relies on Azure AD fails, including queries that access storage using Azure AD pass-through authentication, and statements that interact with Azure AD (like CREATE EXTERNAL PROVIDER). This issue frequently affects tools that keep connections open, like the query editor in SSMS and ADS. Tools that open new connections to execute a query, like Synapse Studio, aren't affected.
-- Azure AD authentication token might be cached by the client applications. For example Power BI caches Azure Active Directory token and reuses the same token for one hour. The long-running queries might fail if the token expires during execution.
+- Azure AD authentication token might be cached by the client applications. For example, Power BI caches Azure Active Directory token and reuses the same token for one hour. The long-running queries might fail if the token expires during execution.
Consider the following mitigations:
This error message can occur when the serverless SQL pool is experiencing resource constraints.
- One serverless SQL pool can concurrently handle 1000 active sessions that are executing lightweight queries, but the numbers will drop if the queries are more complex or scan a larger amount of data. For more information, see [Concurrency limits for Serverless SQL Pool](resources-self-help-sql-on-demand.md#constraints).
- Try reducing the number of queries executing simultaneously or the query complexity.
-If the issue is non-transient or you confirmed the problem is not related to high concurrency or query complexity, create a support ticket.
+If the issue is non-transient or you confirmed the problem isn't related to high concurrency or query complexity, create a support ticket.
### [0x8007000C](#tab/x8007000C)
More information about syntax and usage:
### Parquet files
-When reading Parquet files, the query will not recover automatically. It needs to be retried by the client application.
+When the file format is Parquet, the query won't recover automatically. It needs to be retried by the client application.
### Synapse Link for Dataverse
-This error can occur when reading data from Synapse Link for Dataverse, when Synapse Link is syncing data to the lake and the data is being queried at the same time. The product group has a goal to improve this.
+This error can occur when reading data from Synapse Link for Dataverse, when Synapse Link is syncing data to the lake and the data is being queried at the same time. The product group has a goal to improve this behavior.
### [0x800700A1](#tab/x800700A1)
Confirm the storage account accessed is using the "Archive" access tier.
The `archive access` tier is an offline tier. While a blob is in the `archive access` tier, it can't be read or modified.
-To read or download a blob in the Archive tier, rehydrate it to an online tier: [Archive access tier](/azure/storage/blobs/access-tiers-overview.md#archive-access-tier)
+To read or download a blob in the Archive tier, rehydrate it to an online tier: [Archive access tier](/azure/storage/blobs/access-tiers-overview#archive-access-tier)
### [0x80070057](#tab/x80070057)

This error can occur when the authentication method is User Identity, also known as "Azure AD pass-through", and the Azure Active Directory access token expires.
-The error message might also resemble the following:
+The error message might also resemble the following pattern:
```
File {path} cannot be opened because it does not exist or it is used by another process.
```
-- If an Azure AD login has a connection open for more than 1 hour during query execution, any query that relies on Azure AD fails. This includes querying storage using Azure AD pass-through and statements that interact with Azure AD (like CREATE EXTERNAL PROVIDER). This affects tools that keep connections open, like the query editor in SQL Server Management Studio (SSMS) and ADS. Tools that open new connections to execute a query, like Synapse Studio, are not affected.
+- If an Azure AD user has a connection open for more than 1 hour during query execution, any query that relies on Azure AD fails, including queries that access storage using Azure AD pass-through authentication and statements that interact with Azure AD (like CREATE EXTERNAL PROVIDER). This issue frequently affects tools that keep connections open, like the query editor in SQL Server Management Studio (SSMS) and Azure Data Studio (ADS). Client tools that open new connections to execute a query, like Synapse Studio, aren't affected.
-- Azure AD authentication token might be cached by the client applications. For example Power BI caches an Azure AD token and reuses it for one hour. The long-running queries might fail if the token expires in the middle of execution.
+- Azure AD authentication token might be cached by the client applications. For example, Power BI caches an Azure AD token and reuses it for one hour. The long-running queries might fail if the token expires in the middle of execution.
Consider the following mitigations to resolve the issue:
Consider the following mitigations to resolve the issue:
### [0x80072EE7](#tab/x80072EE7)
-This error code can occur when there is a transient issue in the serverless SQL pool.
+This error code can occur when there's a transient issue in the serverless SQL pool.
It happens infrequently and is temporary by nature. Retry the query. If the issue persists, create a support ticket.
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
The autoscale feature (preview) lets you scale your Azure Virtual Desktop deployment.
> - Autoscale doesn't support Azure Virtual Desktop for Azure Stack HCI.
> - Autoscale doesn't support scaling of ephemeral disks.
> - Autoscale doesn't support scaling of generalized VMs.
-
+> - You can't use the autoscale feature and [scale session hosts using Azure Automation](set-up-scaling-script.md) on the same host pool. You must use one or the other.
For best results, we recommend using autoscale with VMs you deployed with Azure Virtual Desktop Azure Resource Manager templates or first-party tools from Microsoft.
To create and assign the custom role to your subscription with the Azure portal:
4. On the **Permissions** tab, add the following permissions to the subscription you're assigning the role to:
- ```azcopy
- "Microsoft.Insights/eventtypes/values/read"
- "Microsoft.Compute/virtualMachines/deallocate/action"
- "Microsoft.Compute/virtualMachines/restart/action"
- "Microsoft.Compute/virtualMachines/powerOff/action"
- "Microsoft.Compute/virtualMachines/start/action"
- "Microsoft.Compute/virtualMachines/read"
- "Microsoft.DesktopVirtualization/hostpools/read"
- "Microsoft.DesktopVirtualization/hostpools/write"
- "Microsoft.DesktopVirtualization/hostpools/sessionhosts/read"
- "Microsoft.DesktopVirtualization/hostpools/sessionhosts/write"
- "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/delete"
- "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/read"
- "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/sendMessage/action"
+ ```
+ "Microsoft.Insights/eventtypes/values/read"
+ "Microsoft.Compute/virtualMachines/deallocate/action"
+ "Microsoft.Compute/virtualMachines/restart/action"
+ "Microsoft.Compute/virtualMachines/powerOff/action"
+ "Microsoft.Compute/virtualMachines/start/action"
+ "Microsoft.Compute/virtualMachines/read"
+ "Microsoft.DesktopVirtualization/hostpools/read"
+ "Microsoft.DesktopVirtualization/hostpools/write"
+ "Microsoft.DesktopVirtualization/hostpools/sessionhosts/read"
+ "Microsoft.DesktopVirtualization/hostpools/sessionhosts/write"
+ "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/delete"
+ "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/read"
+ "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/sendMessage/action"
```
5. When you're finished, select **Ok**. (A scripted alternative to these portal steps is sketched below.)
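As an alternative to the portal steps above, a rough PowerShell sketch for defining the same custom role follows; the role name and subscription ID are placeholders:

```azurepowershell
# Assumed role name and scope; requires the Az.Resources module.
$actions = @(
    "Microsoft.Insights/eventtypes/values/read",
    "Microsoft.Compute/virtualMachines/deallocate/action",
    "Microsoft.Compute/virtualMachines/restart/action",
    "Microsoft.Compute/virtualMachines/powerOff/action",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.DesktopVirtualization/hostpools/read",
    "Microsoft.DesktopVirtualization/hostpools/write",
    "Microsoft.DesktopVirtualization/hostpools/sessionhosts/read",
    "Microsoft.DesktopVirtualization/hostpools/sessionhosts/write",
    "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/delete",
    "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/read",
    "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/sendMessage/action"
)

# Clone a built-in role as a template, then swap in the autoscale permissions.
$role = Get-AzRoleDefinition -Name "Reader"
$role.Id = $null
$role.Name = "Autoscale (custom)"
$role.Description = "Permissions required by the autoscale feature."
$role.Actions.Clear()
$actions | ForEach-Object { $role.Actions.Add($_) }
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
New-AzRoleDefinition -Role $role
```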
virtual-desktop Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/management.md
Title: Microsoft Endpoint Configuration Manager for Azure Virtual Desktop
+ Title: Microsoft Endpoint Manager for Azure Virtual Desktop
description: Recommended ways for you to manage your Azure Virtual Desktop environment.
Previously updated : 10/14/2021 Last updated : 04/26/2022
-# Microsoft Endpoint Manager and Intune for Azure Virtual Desktop
+# Microsoft Endpoint Manager for Azure Virtual Desktop
-We recommend using [Microsoft Endpoint Manager](https://www.microsoft.com/endpointmanager) to manage your Azure Virtual Desktop environment after deployment. Microsoft Endpoint Manager is a unified management platform that includes Microsoft Endpoint Configuration Manager and Microsoft Intune.
-
-> [!NOTE]
-> Managing Azure Virtual Desktop session hosts using Microsoft Endpoint Manager is currently only supported in the Azure Public cloud.
+We recommend using [Microsoft Endpoint Manager](https://www.microsoft.com/endpointmanager) to manage your Azure Virtual Desktop environment. Microsoft Endpoint Manager is a unified management platform that includes Microsoft Endpoint Configuration Manager and Microsoft Intune.
## Microsoft Endpoint Configuration Manager
-Microsoft Endpoint Configuration Manager versions 1906 and later can manage your Azure Virtual Desktop devices. For more information, see [Supported OS versions for clients and devices for Configuration Manager](/mem/configmgr/core/plan-design/configs/supported-operating-systems-for-clients-and-devices#windows-virtual-desktop).
+Microsoft Endpoint Configuration Manager versions 1906 and later can manage your domain-joined and Hybrid Azure Active Directory (AD)-joined session hosts. For more information, see [Supported OS versions for clients and devices for Configuration Manager](/mem/configmgr/core/plan-design/configs/supported-operating-systems-for-clients-and-devices#azure-virtual-desktop).
## Microsoft Intune
-Intune supports Windows 10 Enterprise virtual machines (VMs) for Azure Virtual Desktop. For more information about support, see [Using Windows 10 Enterprise with Intune](/mem/intune/fundamentals/windows-virtual-desktop).
+Microsoft Intune can manage your Azure AD-joined and Hybrid Azure AD-joined session hosts. To learn more about using Intune to manage Windows 11 and Windows 10 single session hosts, see [Using Azure Virtual Desktop with Intune](/mem/intune/fundamentals/windows-virtual-desktop).
-Intune support for Windows 10 Enterprise multi-session VMs on Azure Virtual Desktop is currently in public preview. To see what the public preview version currently supports, check out [Using Windows 10 Enterprise multi-session with Intune](/mem/intune/fundamentals/windows-virtual-desktop-multi-session).
+For Windows 11 and Windows 10 multi-session hosts, Intune currently supports device-based configurations. To learn more about using Intune to manage multi-session hosts, see [Using Azure Virtual Desktop multi-session with Intune](/mem/intune/fundamentals/windows-virtual-desktop-multi-session).
+
+> [!NOTE]
+> Managing Azure Virtual Desktop session hosts using Intune is currently supported in the Azure Public and Azure Government clouds.
## Licensing
virtual-desktop Set Up Scaling Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-scaling-script.md
You can reduce your total Azure Virtual Desktop deployment cost by scaling your
In this article, you'll learn about the scaling tool, built with Azure Automation and Azure Logic Apps, that automatically scales session host VMs in your Azure Virtual Desktop environment. To learn how to use the scaling tool, skip ahead to [Prerequisites](#prerequisites).
+> [!NOTE]
+> You can't scale session hosts using Azure Automation and use the [autoscale feature](autoscale-scaling-plan.md) on the same host pool. You must use one or the other.
+
## How the scaling tool works

The scaling tool provides a low-cost automation option for customers who want to optimize their session host VM costs.
You can use the scaling tool to:
- Schedule VMs to start and stop based on peak and off-peak business hours.
- Scale out VMs based on number of sessions per CPU core.
-- Scale in VMs during Off-Peak hours, leaving the minimum number of session host VMs running.
+- Scale in VMs during off-peak hours, leaving the minimum number of session host VMs running.
The scaling tool uses a combination of an Azure Automation account, a PowerShell runbook, a webhook, and the Azure Logic App to function. When the tool runs, Azure Logic App calls a webhook to start the Azure Automation runbook. The runbook then creates a job.
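To make the moving parts concrete, here's a hedged sketch of just the webhook piece; the resource group, automation account, runbook, and webhook names are placeholders, not the tool's actual resource names:

```azurepowershell
# Assumed names; requires the Az.Automation module.
# The webhook URI is returned only once, at creation time, so capture it for the logic app.
$webhook = New-AzAutomationWebhook -ResourceGroupName "ScalingToolRG" `
    -AutomationAccountName "ScalingToolAccount" `
    -RunbookName "ScalingToolRunbook" `
    -Name "ScalingToolWebhook" `
    -IsEnabled $true `
    -ExpiryTime (Get-Date).AddYears(1) `
    -Force
$webhook.WebhookURI
```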
virtual-machines Linux Vm Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux-vm-connect.md
Previously updated : 04/18/2022 Last updated : 04/25/2022
This document describes how to connect, via SSH, to a VM that has a public IP. I
1. On the page for the VM, select **Networking** from the left menu.
1. On the **Networking** page, check to see if there is a rule that allows TCP on port 22 from the IP address of the computer you are using to connect to the VM. If the rule exists, you can move to the next section.
+
+ :::image type="content" source="media/linux-vm-connect/check-rule.png" alt-text="Screenshot showing how to check to see if there is already a rule allowing S S H connections.":::
1. If there isn't a rule, add one by selecting **Add inbound port rule**.
1. From the **Service** dropdown, select **SSH**.
- :::image type="content" source="media/linux-vm-connect/create-rule.png" alt-text="Screenshot showing where to choose S S H.":::
+ :::image type="content" source="media/linux-vm-connect/create-rule.png" alt-text="Screenshot showing where to choose S S H when creating a new N S G rule.":::
1. Edit **Priority** and **Source** if necessary.
1. For **Name**, type *SSH*. (A PowerShell alternative to these portal steps is sketched below.)
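A minimal PowerShell sketch of the same inbound rule follows; the resource group, NSG name, and source IP are placeholder assumptions:

```azurepowershell
# Assumed names; requires the Az.Network module.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Name "myNsg"

# Allow TCP 22 only from your client's public IP, then save the change to the NSG.
$nsg | Add-AzNetworkSecurityRuleConfig -Name "SSH" -Access Allow -Protocol Tcp -Direction Inbound `
    -Priority 300 -SourceAddressPrefix "<your-public-ip>" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "22" | Set-AzNetworkSecurityGroup
```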
This document describes how to connect, via SSH, to a VM that has a public IP. I
To learn more about adding a public IP address to an existing VM, see [Associate a public IP address to a virtual machine](../virtual-network/ip-services/associate-public-ip-address-vm.md).

- Verify your VM is running. On the Overview tab, in the **Essentials** section, verify the status of the VM is **Running**. To start the VM, select **Start** at the top of the page.
+
+ :::image type="content" source="media/linux-vm-connect/running.png" alt-text="Screenshot showing how to check to make sure your virtual machine is in the running state.":::
## Connect to the VM
virtual-machines Hbv3 Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/hbv3-performance.md
Performance expectations using common HPC microbenchmarks are as follows:
| Workload | HBv3 |
|-|-|
-| STREAM Triad | 330-350 GB/s (~82-86 GB/s per NUMA) |
+| STREAM Triad | 330-350 GB/s (amplified up to 630 GB/s) |
| High-Performance Linpack (HPL) | 4 TF (Rpeak, FP64), 8 TF (Rpeak, FP32) for 120-core VM size |
| RDMA latency & bandwidth | 1.2 microseconds (1-byte), 192 Gb/s (one-way) |
| FIO on local NVMe SSDs (RAID0) | 7 GB/s reads, 3 GB/s writes; 186k IOPS reads, 201k IOPS writes |
virtual-machines Hbv3 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/hbv3-series-overview.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-An [HBv3-series](../../hbv3-series.md) server features 2 * 64-core EPYC 7V13 CPUs for a total of 128 physical "Zen3" cores. Simultaneous Multithreading (SMT) is disabled on HBv3. These 128 cores are divided into 16 sections (8 per socket), each section containing 8 processor cores with uniform access to a 32 MB L3 cache. Azure HBv3 servers also run the following AMD BIOS settings:
+An [HBv3-series](../../hbv3-series.md) server features 2 * 64-core EPYC 7V73X CPUs for a total of 128 physical "Zen3" cores with AMD 3D V-Cache. Simultaneous Multithreading (SMT) is disabled on HBv3. These 128 cores are divided into 16 sections (8 per socket), each section containing 8 processor cores with uniform access to a 96 MB L3 cache. Azure HBv3 servers also run the following AMD BIOS settings:
```bash
Nodes per Socket (NPS) = 2
Each HBv3 VM size is similar in physical layout, features, and performance of a
| HBv3-series VM size | NUMA domains | Cores per NUMA domain | Similarity with AMD EPYC |
|--|--|--|--|
-Standard_HB120rs_v3 | 4 | 30 | Dual-socket EPYC 7713 |
+Standard_HB120rs_v3 | 4 | 30 | Dual-socket EPYC 7773X |
Standard_HB120r-96s_v3 | 4 | 24 | Dual-socket EPYC 7643 |
-Standard_HB120r-64s_v3 | 4 | 16 | Dual-socket EPYC 7543 |
-Standard_HB120r-32s_v3 | 4 | 8 | Dual-socket EPYC 7313 |
+Standard_HB120r-64s_v3 | 4 | 16 | Dual-socket EPYC 7573X |
+Standard_HB120r-32s_v3 | 4 | 8 | Dual-socket EPYC 7373X |
Standard_HB120r-16s_v3 | 4 | 4 | Dual-socket EPYC 72F3 |

> [!NOTE]
When paired in a striped array, the NVMe SSD provides up to 7 GB/s reads and 3 GB/s writes.
| Hardware specifications | HBv3-series VMs |
|-|-|
| Cores | 120, 96, 64, 32, or 16 (SMT disabled) |
-| CPU | AMD EPYC 7V13 |
-| CPU Frequency (non-AVX) | 3.1 GHz (all cores), 3.675 GHz (up to 10 cores) |
+| CPU | AMD EPYC 7V73X |
+| CPU Frequency (non-AVX) | 3.0 GHz (all cores), 3.5 GHz (up to 10 cores) |
| Memory | 448 GB (RAM per core depends on VM size) |
| Local Disk | 2 * 960 GB NVMe (block), 480 GB SSD (page file) |
| Infiniband | 200 Gb/s Mellanox ConnectX-6 HDR InfiniBand |
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* The address range must be no smaller than a /24 so it will be accepted by Internet Service Providers.
-* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry website. ARIN, RIPE, and APNIC.
+* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry (RIR) website or via their API. The RIR will require the ROA to be digitally signed with the Resource Public Key Infrastructure (RPKI) of your RIR.
For this ROA:
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* The prefix length should exactly match the prefixes that can be advertised by Microsoft. For example, if you plan to bring 1.2.3.0/24 and 2.3.4.0/23 to Microsoft, they should both be named.
- * After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft.
+ * After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft, where it will be verified to determine its authenticity and correctness as part of the provisioning process.
### Certificate readiness
virtual-network Create Custom Ip Address Prefix Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* The address range must be no smaller than a /24 so it will be accepted by Internet Service Providers.
-* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry website. ARIN, RIPE, and APNIC.
+* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry (RIR) website or via their API. The RIR will require the ROA to be digitally signed with the Resource Public Key Infrastructure (RPKI) of your RIR.
For this ROA:
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* The prefix length should exactly match the prefixes that can be advertised by Microsoft. For example, if you plan to bring 1.2.3.0/24 and 2.3.4.0/23 to Microsoft, they should both be named.
- * After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft.
+ * After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft, where it will be verified to determine its authenticity and correctness as part of the provisioning process.
### Certificate readiness
virtual-network Create Custom Ip Address Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* The address range must be no smaller than a /24 so it will be accepted by Internet Service Providers.
-* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry website. ARIN, RIPE, and APNIC.
+* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry (RIR) website or via their API. The RIR will require the ROA to be digitally signed with the Resource Public Key Infrastructure (RPKI) of your RIR.
For this ROA:
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* The prefix length should exactly match the prefixes that can be advertised by Microsoft. For example, if you plan to bring 1.2.3.0/24 and 2.3.4.0/23 to Microsoft, they should both be named.
- * After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft.
+ * After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft, where it will be verified to determine its authenticity and correctness as part of the provisioning process.
### Certificate readiness
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
Yes. Azure reserves 5 IP addresses within each subnet. These are x.x.x.0-x.x.x.3 and the last address of the subnet.
- x.x.x.2, x.x.x.3: Reserved by Azure to map the Azure DNS IPs to the VNet space
- x.x.x.255: Network broadcast address for subnets of size /25 and larger. This will be a different address in smaller subnets.
+For example, for the subnet with addressing 172.16.1.128/26:
+
+- 172.16.1.128: Network address
+- 172.16.1.129: Reserved by Azure for the default gateway
+- 172.16.1.130, 172.16.1.131: Reserved by Azure to map the Azure DNS IPs to the VNet space
+- 172.16.1.191: Network broadcast address
+
### How small and how large can VNets and subnets be?

The smallest supported IPv4 subnet is /29, and the largest is /2 (using CIDR subnet definitions). IPv6 subnets must be exactly /64 in size.
For more information, see [FAQ about classic to Azure Resource Manager migration
### How can I report an issue?
-You can post your questions about your migration issues to the [Microsoft Q&A](/answers/topics/azure-virtual-network.html) page. It's recommended that you post all your questions on this forum. If you have a support contract, you can also file a support request.
+You can post your questions about your migration issues to the [Microsoft Q&A](/answers/topics/azure-virtual-network.html) page. It's recommended that you post all your questions on this forum. If you have a support contract, you can also file a support request.
vpn-gateway Vpn Gateway Connect Different Deployment Models Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md
Title: 'Connect classic virtual networks to Azure Resource Manager VNets: PowerShell'
-description: Learn how to to connect classic VNets to Resource Manager VNets using PowerShell.
-
+description: Learn how to connect classic VNets to Resource Manager VNets using PowerShell.
Previously updated : 02/10/2021 Last updated : 04/26/2022

# Connect virtual networks from different deployment models using PowerShell
-This article helps you connect classic VNets to Resource Manager VNets to allow the resources located in the separate deployment models to communicate with each other. The steps in this article use PowerShell, but you can also create this configuration using the Azure portal by selecting the article from this list.
+This article helps you connect classic VNets to Resource Manager VNets to allow the resources located in the separate deployment models to communicate with each other. The steps in this article use PowerShell.
+
+This article is intended for customers who already have a VNet that was created using the classic (legacy) deployment model, and now want to connect the classic VNet to another VNet that was created using the latest deployment model. If you don't already have a legacy VNet, use the [Create a VNet-to-VNet connection](vpn-gateway-vnet-vnet-rm-ps.md) article instead.
+
+## Architecture
+
+Connecting a classic VNet to a Resource Manager VNet is similar to connecting a VNet to an on-premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE. You can create a connection between VNets that are in different subscriptions and in different regions. You can also connect VNets that already have connections to on-premises networks, as long as the gateway is dynamic or route-based. For more information about VNet-to-VNet connections, see the [VNet-to-VNet FAQ](vpn-gateway-vpn-faq.md).
+
+For this configuration, you create a VPN gateway connection over an IPsec/IKE VPN tunnel between the virtual networks. Make sure that none of your VNet ranges overlap with each other, or with any of the local networks that they connect to.
+
+The following table shows an example of how the example VNets and local sites are defined:
-> [!div class="op_single_selector"]
-> * [Portal](vpn-gateway-connect-different-deployment-models-portal.md)
-> * [PowerShell](vpn-gateway-connect-different-deployment-models-powershell.md)
->
->
+| Virtual Network | Address Space | Region | Connects to local network site |
+|: |: |: |: |
+| ClassicVNet |(10.1.0.0/16) |West US | RMVNetSite (192.168.0.0/16) |
+| RMVNet | (192.168.0.0/16) |East US |ClassicVNetSite (10.1.0.0/16) |
-Connecting a classic VNet to a Resource Manager VNet is similar to connecting a VNet to an on-premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE. You can create a connection between VNets that are in different subscriptions and in different regions. You can also connect VNets that already have connections to on-premises networks, as long as the gateway that they have been configured with is dynamic or route-based. For more information about VNet-to-VNet connections, see the [VNet-to-VNet FAQ](#faq) at the end of this article.
+## <a name="pre"></a>Prerequisites
-If you do not already have a virtual network gateway and do not want to create one, you may want to instead consider connecting your VNets using VNet Peering. VNet peering does not use a VPN gateway. For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md).
+The following steps walk you through the settings necessary to configure a dynamic or route-based gateway for each VNet and create a VPN connection between the gateways. This configuration doesn't support static or policy-based gateways.
-## <a name="before"></a>Before you begin
+These steps assume that you have a legacy classic VNet and a Resource Manager VNet already created.
-The following steps walk you through the settings necessary to configure a dynamic or route-based gateway for each VNet and create a VPN connection between the gateways. This configuration does not support static or policy-based gateways.
+* Verify that the address ranges for the VNets don't overlap with each other, or overlap with any of the ranges for other connections that the gateways may be connected to.
+* In this article, we use PowerShell. Install the latest PowerShell cmdlets to your computer for **both** Resource Manager and Service Management.
-### <a name="pre"></a>Prerequisites
+ While it's possible to perform a few of the PowerShell commands using the Azure Cloud Shell environment, you need to install both versions of the cmdlets to create the connections properly.
-* Both VNets have already been created. If you need to create a resource manager virtual network, see [Create a resource group and a virtual network](../virtual-network/quick-create-powershell.md#create-a-resource-group-and-a-virtual-network). To create a classic virtual network, see [Create a classic VNet](/previous-versions/azure/virtual-network/create-virtual-network-classic).
-* The address ranges for the VNets do not overlap with each other, or overlap with any of the ranges for other connections that the gateways may be connected to.
-* You have installed the latest PowerShell cmdlets. See [How to install and configure Azure PowerShell](/powershell/azure/) for more information. Make sure you install both the Service Management (SM) and the Resource Manager (RM) cmdlets.
+ * [Service Management (classic) PowerShell cmdlets](/powershell/azure/servicemanagement/install-azure-ps?). When you install the Service Management cmdlets, you may need to modify the [Execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies?) in order to install the classic version of the Azure module.
+
+ * [AZ PowerShell cmdlets for Resource Manager](/powershell/azure/install-az-ps?)
+
+ For more information, see [How to install and configure Azure PowerShell](/powershell/azure/).
### <a name="exampleref"></a>Example settings
-You can use these values to create a test environment, or refer to them to better understand the examples in this article.
+You can use these values to better understand the examples.
-**Classic VNet settings**
+**Classic VNet**
VNet Name = ClassicVNet <br>
+Resource Group = ClassicRG <br>
Location = West US <br>
-Virtual Network Address Spaces = 10.0.0.0/24 <br>
-Subnet-1 = 10.0.0.0/27 <br>
-GatewaySubnet = 10.0.0.32/29 <br>
-Local Network Name = RMVNetLocal <br>
-GatewayType = DynamicRouting
+Virtual Network Address Spaces = 10.1.0.0/16 <br>
+Subnet1 = 10.1.0.0/24 <br>
+GatewaySubnet = 10.1.255.0/27 <br>
+Local Network Name = RMVNetSite <br>
+GatewayType = DynamicRouting <br>
-**Resource Manager VNet settings**
+**Resource Manager VNet**
VNet Name = RMVNet <br>
-Resource Group = RG1 <br>
+Resource Group = RMRG <br>
Virtual Network IP Address Spaces = 192.168.0.0/16 <br>
-Subnet-1 = 192.168.1.0/24 <br>
-GatewaySubnet = 192.168.0.0/26 <br>
+Subnet1 = 192.168.1.0/24 <br>
+GatewaySubnet = 192.168.255.0/27 <br>
Location = East US <br>
-Gateway public IP name = gwpip <br>
-Local Network Gateway = ClassicVNetLocal <br>
+Gateway public IP name = rmgwpip <br>
+Local Network Gateway = ClassicVNetSite <br>
Virtual Network Gateway name = RMGateway <br>
Gateway IP addressing configuration = gwipconfig
-## <a name="createsmgw"></a>Section 1 - Configure the classic VNet
+## <a name="createsmgw"></a>Configure the classic VNet
+
+In this section, you configure your already existing classic VNet. If your VNet already has a gateway, verify that the gateway is Route-based, then proceed to the next section. If the gateway isn't Route-based, delete the gateway before moving forward with the next steps. You'll have the opportunity to create a new gateway later.
+ ### 1. Download your network configuration file
-1. Log in to your Azure account in the PowerShell console with elevated rights. The following cmdlet prompts you for the login credentials for your Azure Account. After logging in, it downloads your account settings so that they are available to Azure PowerShell. The classic Service Management (SM) Azure PowerShell cmdlets are used in this section.
+
+1. Sign in to your Azure account in the PowerShell console with elevated rights. The following cmdlet prompts you for the sign-in credentials for your Azure Account. After logging in, it downloads your account settings so that they're available to Azure PowerShell. The classic Service Management (SM) Azure PowerShell cmdlets are used in this section.
```azurepowershell
Add-AzureAccount
```
Gateway IP addressing configuration = gwipconfig
```azurepowershell
Select-AzureSubscription -SubscriptionName "Name of subscription"
```
-2. Export your Azure network configuration file by running the following command. You can change the location of the file to export to a different location if necessary.
+
+1. Create a directory on your computer. For this example, we created **AzureNet**.
+
+1. Export your Azure network configuration file by running the following command. You can change the location of the file to export to a different location if necessary.
```azurepowershell
Get-AzureVNetConfig -ExportToFile C:\AzureNet\NetworkConfig.xml
```
-3. Open the .xml file that you downloaded to edit it. For an example of the network configuration file, see the [Network Configuration Schema](/previous-versions/azure/reference/jj157100(v=azure.100)).
+
+1. Open the .xml file that you downloaded to edit it. For an example of the network configuration file, see the [Network Configuration Schema](../cloud-services/schema-cscfg-networkconfiguration.md).
+
+1. Take note of the `VirtualNetworkSite name=` value. If you created your classic VNet using the portal, the name in the network configuration file follows a format similar to "Group ClassicRG ClassicVNet", rather than the "ClassicVNet" name shown in the portal.
### 2. Verify the gateway subnet
-In the **VirtualNetworkSites** element, add a gateway subnet to your VNet if one has not already been created. When working with the network configuration file, the gateway subnet MUST be named "GatewaySubnet" or Azure cannot recognize and use it as a gateway subnet.
+In the **VirtualNetworkSites** element, add a gateway subnet to your VNet if one hasn't already been created. The gateway subnet MUST be named "GatewaySubnet" or Azure can't recognize and use it as a gateway subnet.
+ **Example:**
In the **VirtualNetworkSites** element, add a gateway subnet to your VNet if one
<VirtualNetworkSites>
  <VirtualNetworkSite name="ClassicVNet" Location="West US">
    <AddressSpace>
- <AddressPrefix>10.0.0.0/24</AddressPrefix>
+ <AddressPrefix>10.1.0.0/16</AddressPrefix>
    </AddressSpace>
    <Subnets>
- <Subnet name="Subnet-1">
- <AddressPrefix>10.0.0.0/27</AddressPrefix>
+ <Subnet name="Subnet1">
+ <AddressPrefix>10.1.0.0/24</AddressPrefix>
      </Subnet>
      <Subnet name="GatewaySubnet">
- <AddressPrefix>10.0.0.32/29</AddressPrefix>
+ <AddressPrefix>10.1.255.0/27</AddressPrefix>
      </Subnet>
    </Subnets>
  </VirtualNetworkSite>
In the **VirtualNetworkSites** element, add a gateway subnet to your VNet if one hasn't already been created.
```

### 3. Add the local network site
-The local network site you add represents the RM VNet to which you want to connect. Add a **LocalNetworkSites** element to the file if one doesn't already exist. At this point in the configuration, the VPNGatewayAddress can be any valid public IP address because we haven't yet created the gateway for the Resource Manager VNet. Once we create the gateway, we replace this placeholder IP address with the correct public IP address that has been assigned to the RM gateway.
+
+The local network site you add represents the RM VNet to which you want to connect. Add a **LocalNetworkSites** element to the file if one doesn't already exist. At this point in the configuration, the VPNGatewayAddress can be any valid public IP address because we haven't yet created the gateway for the Resource Manager VNet. Once you create the RM gateway, you'll replace this placeholder IP address with the correct public IP address that has been assigned to the RM gateway.
```xml
<LocalNetworkSites>
- <LocalNetworkSite name="RMVNetLocal">
+ <LocalNetworkSite name="RMVNetSite">
    <AddressSpace>
      <AddressPrefix>192.168.0.0/16</AddressPrefix>
    </AddressSpace>
-   <VPNGatewayAddress>13.68.210.16</VPNGatewayAddress>
+   <VPNGatewayAddress>5.4.3.2</VPNGatewayAddress>
  </LocalNetworkSite>
</LocalNetworkSites>
```

### 4. Associate the VNet with the local network site
-In this section, we specify the local network site that you want to connect the VNet to. In this case, it is the Resource Manager VNet that you referenced earlier. Make sure the names match. This step does not create a gateway. It specifies the local network that the gateway will connect to.
+
+In this section, we specify the local network site that you want to connect the VNet to. In this case, it's the Resource Manager VNet that you referenced earlier. Make sure the names match. This step doesn't create a gateway. It specifies the local network that the gateway will connect to.
```xml
<Gateway>
  <ConnectionsToLocalNetwork>
-   <LocalNetworkSiteRef name="RMVNetLocal">
+   <LocalNetworkSiteRef name="RMVNetSite">
      <Connection type="IPsec" />
    </LocalNetworkSiteRef>
  </ConnectionsToLocalNetwork>
</Gateway>
```

### 5. Save the file and upload

+Save the file, then import it to Azure by running the following command. Make sure you change the file path as necessary for your environment.

```azurepowershell
Set-AzureVNetConfig -ConfigurationPath C:\AzureNet\NetworkConfig.xml
```
-You will see a similar result showing that the import succeeded.
+You'll see a similar result showing that the import succeeded.
```output
OperationDescription    OperationId                        OperationStatus
Set-AzureVNetConfig     e0ee6e66-9167-cfa7-a746-7casb9     Succeeded
```
### 6. Create the gateway
-Before running this example, refer to the network configuration file that you downloaded for the exact names that Azure expects to see. The network configuration file contains the values for your classic virtual networks. Sometimes the names for classic VNets are changed in the network configuration file when creating classic VNet settings in the Azure portal due to the differences in the deployment models. For example, if you used the Azure portal to create a classic VNet named 'Classic VNet' and created it in a resource group named 'ClassicRG', the name that is contained in the network configuration file is converted to 'Group ClassicRG Classic VNet'. When specifying the name of a VNet that contains spaces, use quotation marks around the value.
-
+Before running this example, refer to the network configuration file that you downloaded for the exact names that Azure expects to see. The network configuration file contains the values for your classic virtual networks. When a classic VNet is created using the portal, the virtual network name is different in the network configuration file. For example, if you used the Azure portal to create a classic VNet named 'Classic VNet' in a resource group named 'ClassicRG', the name that is contained in the network configuration file is converted to 'Group ClassicRG Classic VNet'. Always use the name contained in the network configuration file when you work with PowerShell. When you specify the name of a VNet that contains spaces, use quotation marks around the value.
Use the following example to create a dynamic routing gateway:
```azurepowershell
New-AzureVNetGateway -VNetName ClassicVNet -GatewayType DynamicRouting
```
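If the VNet name in your network configuration file contains spaces (as portal-created classic VNets typically do), quote it. A hedged variant, using the converted name from the example above:

```azurepowershell
New-AzureVNetGateway -VNetName "Group ClassicRG Classic VNet" -GatewayType DynamicRouting
```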
You can check the status of the gateway by using the **Get-AzureVNetGateway** cmdlet.
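A minimal status check, assuming the classic SM cmdlets used earlier; the output typically also includes the gateway's public IP (the **VIPAddress** field), which you'll need later:

```azurepowershell
# Check the classic gateway's state and note its public IP (VIPAddress)
Get-AzureVNetGateway -VNetName ClassicVNet
```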
-## <a name="creatermgw"></a>Section 2 - Configure the RM VNet gateway
+## <a name="creatermgw"></a>Configure the RM VNet gateway
-The prerequisites assume that you already have created an RM VNet. In this step, you create a VPN gateway for the RM VNet. Don't start these steps until after you have retrieved the public IP address for the classic VNet's gateway.
+The prerequisites assume that you've already created an RM VNet. In this step, you create a VPN gateway for the RM VNet. Don't start these steps until after you've retrieved the public IP address for the classic VNet's gateway.
-1. Sign in to your Azure account in the PowerShell console. The following cmdlet prompts you for the login credentials for your Azure Account. After signing in, your account settings are downloaded so that they are available to Azure PowerShell. You can optionally use the "Try It" feature to launch Azure Cloud Shell in the browser.
+1. Sign in to your Azure account in the PowerShell console. The following cmdlet prompts you for the sign-in credentials for your Azure Account. After signing in, your account settings are downloaded so that they're available to Azure PowerShell. You can optionally use the "Try It" feature to launch Azure Cloud Shell in the browser.
   If you use Azure Cloud Shell, skip the following cmdlet:

   ```azurepowershell
   Connect-AzAccount
   ```

-  To verify that you are using the right subscription, run the following cmdlet:
+  To verify that you're using the right subscription, run the following cmdlet:

   ```azurepowershell-interactive
   Get-AzSubscription
   ```

+  If you have more than one subscription, specify the subscription that you want to use.

   ```azurepowershell-interactive
   Select-AzSubscription -SubscriptionName "Name of subscription"
   ```
-2. Create a local network gateway. In a virtual network, the local network gateway typically refers to your on-premises location. In this case, the local network gateway refers to your Classic VNet. Give it a name by which Azure can refer to it, and also specify the address space prefix. Azure uses the IP address prefix you specify to identify which traffic to send to your on-premises location. If you need to adjust the information here later, before creating your gateway, you can modify the values and run the sample again.
-
+
+1. Create a local network gateway. In a virtual network, the local network gateway typically refers to your on-premises location. In this case, the local network gateway refers to your Classic VNet. Give it a name by which Azure can refer to it, and also specify the address space prefix. Azure uses the IP address prefix you specify to identify which traffic to send to your on-premises location. If you need to adjust the information here later, before creating your gateway, you can modify the values and run the sample again.
+  **-Name** is the name you want to assign to refer to the local network gateway.<br>
   **-AddressPrefix** is the Address Space for your classic VNet.<br>
   **-GatewayIpAddress** is the public IP address of the classic VNet's gateway. Be sure to change the following sample text "n.n.n.n" to reflect the correct IP address.<br>

   ```azurepowershell-interactive
- New-AzLocalNetworkGateway -Name ClassicVNetLocal `
- -Location "West US" -AddressPrefix "10.0.0.0/24" `
- -GatewayIpAddress "n.n.n.n" -ResourceGroupName RG1
+ New-AzLocalNetworkGateway -Name ClassicVNetSite `
+ -Location "West US" -AddressPrefix "10.1.0.0/16" `
+ -GatewayIpAddress "n.n.n.n" -ResourceGroupName RMRG
```
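   If you want to double-check the values before continuing, here's a hedged check using the names from the sample above:

   ```azurepowershell-interactive
   # Review the local network gateway's address prefix and gateway IP address
   Get-AzLocalNetworkGateway -Name ClassicVNetSite -ResourceGroupName RMRG
   ```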
-3. Request a public IP address to be allocated to the virtual network gateway for the Resource Manager VNet. You can't specify the IP address that you want to use. The IP address is dynamically allocated to the virtual network gateway. However, this does not mean the IP address changes. The only time the virtual network gateway IP address changes is when the gateway is deleted and recreated. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of the gateway.
+
+1. Request a public IP address to be allocated to the virtual network gateway for the Resource Manager VNet. You can't specify the IP address that you want to use. The IP address is dynamically allocated to the virtual network gateway. However, this doesn't mean the IP address changes. The only time the virtual network gateway IP address changes is when the gateway is deleted and recreated. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of the gateway.
   In this step, we also set a variable that is used in a later step.

   ```azurepowershell-interactive
- $ipaddress = New-AzPublicIpAddress -Name gwpip `
- -ResourceGroupName RG1 -Location 'EastUS' `
+ $ipaddress = New-AzPublicIpAddress -Name rmgwpip `
+ -ResourceGroupName RMRG -Location 'EastUS' `
     -AllocationMethod Dynamic
   ```
-4. Verify that your virtual network has a gateway subnet. If no gateway subnet exists, add one. Make sure the gateway subnet is named *GatewaySubnet*.
-5. Retrieve the subnet used for the gateway by running the following command. In this step, we also set a variable to be used in the next step.
-
+1. Verify that your virtual network has a gateway subnet. If no gateway subnet exists, add one. Make sure the gateway subnet is named *GatewaySubnet*.
+
+ ```azurepowershell-interactive
+ $vnet = Get-AzVirtualNetwork -ResourceGroupName RMRG -Name RMVNet
+ Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 192.168.255.0/27 -VirtualNetwork $vnet
+ Set-AzVirtualNetwork -VirtualNetwork $vnet
+ ```
+
+1. Retrieve the subnet used for the gateway by running the following command. In this step, we also set a variable to be used in the next step.
   **-Name** is the name of your Resource Manager VNet.<br>
   **-ResourceGroupName** is the resource group that the VNet is associated with. The gateway subnet must already exist for this VNet and must be named *GatewaySubnet* to work properly.<br>

   ```azurepowershell-interactive
   $subnet = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet `
- -VirtualNetwork (Get-AzVirtualNetwork -Name RMVNet -ResourceGroupName RG1)
- ```
+ -VirtualNetwork (Get-AzVirtualNetwork -Name RMVNet -ResourceGroupName RMRG)
+ ```
-6. Create the gateway IP addressing configuration. The gateway configuration defines the subnet and the public IP address to use. Use the following sample to create your gateway configuration.
+1. Create the gateway IP addressing configuration. The gateway configuration defines the subnet and the public IP address to use. Use the following sample to create your gateway configuration.
- In this step, the **-SubnetId** and **-PublicIpAddressId** parameters must be passed the id property from the subnet, and IP address objects, respectively. You can't use a simple string. These variables are set in the step to request a public IP and the step to retrieve the subnet.
+ In this step, the **-SubnetId** and **-PublicIpAddressId** parameters must be passed the ID property from the subnet, and IP address objects, respectively. You can't use a simple string. These variables are set in the step to request a public IP and the step to retrieve the subnet.
   ```azurepowershell-interactive
   $gwipconfig = New-AzVirtualNetworkGatewayIpConfig `
     -Name gwipconfig -SubnetId $subnet.id `
     -PublicIpAddressId $ipaddress.id
   ```
-7. Create the Resource Manager virtual network gateway by running the following command. The `-VpnType` must be *RouteBased*. It can take 45 minutes or more for the gateway to create.
+
+1. Create the Resource Manager virtual network gateway by running the following command. The `-VpnType` must be *RouteBased*. It can take 45 minutes or more for the gateway to create.
```azurepowershell-interactive
- New-AzVirtualNetworkGateway -Name RMGateway -ResourceGroupName RG1 `
+ New-AzVirtualNetworkGateway -Name RMGateway -ResourceGroupName RMRG `
-Location "EastUS" -GatewaySKU Standard -GatewayType Vpn ` -IpConfigurations $gwipconfig ` -EnableBgp $false -VpnType RouteBased ```
-8. Copy the public IP address once the VPN gateway has been created. You use it when you configure the local network settings for your Classic VNet. You can use the following cmdlet to retrieve the public IP address. The public IP address is listed in the return as *IpAddress*.
+
+1. Copy the public IP address once the VPN gateway has been created. You use it when you configure the local network settings for your Classic VNet. You can use the following cmdlet to retrieve the public IP address. The public IP address is listed in the return as *IpAddress*.
```azurepowershell-interactive
- Get-AzPublicIpAddress -Name gwpip -ResourceGroupName RG1
+ Get-AzPublicIpAddress -Name rmgwpip -ResourceGroupName RMRG
```
-## <a name="localsite"></a>Section 3 - Modify the classic VNet local site settings
+## <a name="localsite"></a>Modify the classic VNet local site settings
-In this section, you work with the classic VNet. You replace the placeholder IP address that you used when specifying the local site settings that will be used to connect to the Resource Manager VNet gateway. Because you are working with the classic VNet, use PowerShell installed locally to your computer, not the Azure Cloud Shell TryIt.
+In this section, you work with the classic VNet. You replace the placeholder IP address that you used when specifying the local site settings that will be used to connect to the Resource Manager VNet gateway. Because you're working with the classic VNet, use PowerShell installed locally to your computer, not the Azure Cloud Shell TryIt.
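A quick, optional check that the classic Service Management cmdlets are available locally (assuming the module is installed under its usual name, *Azure*):

```azurepowershell
# Verify the classic (Service Management) Azure PowerShell module is installed
Get-Module -ListAvailable -Name Azure
```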
1. Export the network configuration file.

   ```azurepowershell
   Get-AzureVNetConfig -ExportToFile C:\AzureNet\NetworkConfig.xml
   ```
-2. Using a text editor, modify the value for VPNGatewayAddress. Replace the placeholder IP address with the public IP address of the Resource Manager gateway and then save the changes.
- ```
+1. Using a text editor, modify the value for VPNGatewayAddress. Replace the placeholder IP address with the public IP address of the Resource Manager gateway and then save the changes.
+
+ ```xml
   <VPNGatewayAddress>13.68.210.16</VPNGatewayAddress>
   ```
-3. Import the modified network configuration file to Azure.
+
+1. Import the modified network configuration file to Azure.
   ```azurepowershell
   Set-AzureVNetConfig -ConfigurationPath C:\AzureNet\NetworkConfig.xml
   ```
-## <a name="connect"></a>Section 4 - Create a connection between the gateways
+## <a name="connect"></a>Create a connection between the gateways
+ Creating a connection between the gateways requires PowerShell. You may need to add your Azure Account to use the classic version of the PowerShell cmdlets. To do so, use **Add-AzureAccount**.
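A minimal sketch of that sign-in using the cmdlet named above:

```azurepowershell
# Sign in for the classic (Service Management) cmdlets
Add-AzureAccount
```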
-1. In the PowerShell console, set your shared key. Before running the cmdlets, refer to the network configuration file that you downloaded for the exact names that Azure expects to see. When specifying the name of a VNet that contains spaces, use single quotation marks around the value.<br><br>In following example, **-VNetName** is the name of the classic VNet and **-LocalNetworkSiteName** is the name you specified for the local network site. The **-SharedKey** is a value that you generate and specify. In the example, we used 'abc123', but you can generate and use something more complex. The important thing is that the value you specify here must be the same value that you specify in the next step when you create your connection. The return should show **Status: Successful**.
+1. In the PowerShell console, set your shared key. Before running the cmdlets, refer to the network configuration file that you downloaded for the exact names that Azure expects to see. When specifying the name of a VNet that contains spaces, use single quotation marks around the value.
+
+ In the following example, **-VNetName** is the name of the classic VNet and **-LocalNetworkSiteName** is the name you specified for the local network site. Verify both names in the network configuration file that you downloaded earlier.
+
+ The **-SharedKey** is a value that you generate and specify. In the example, we used 'abc123', but you can generate and use something more complex. The important thing is that the value you specify here must be the same value that you specify in the next step when you create your connection. The return should show **Status: Successful**.
   ```azurepowershell
   Set-AzureVNetGatewayKey -VNetName ClassicVNet `
- -LocalNetworkSiteName RMVNetLocal -SharedKey abc123
+ -LocalNetworkSiteName RMVNetSite -SharedKey abc123
```
-2. Create the VPN connection by running the following commands:
-
+
+1. Create the VPN connection by running the following commands:
   Set the variables.

   ```azurepowershell-interactive
- $vnet01gateway = Get-AzLocalNetworkGateway -Name ClassicVNetLocal -ResourceGroupName RG1
- $vnet02gateway = Get-AzVirtualNetworkGateway -Name RMGateway -ResourceGroupName RG1
+ $vnet01gateway = Get-AzLocalNetworkGateway -Name ClassicVNetSite -ResourceGroupName RMRG
+ $vnet02gateway = Get-AzVirtualNetworkGateway -Name RMGateway -ResourceGroupName RMRG
```
-
+  Create the connection. Notice that the **-ConnectionType** is IPsec, not Vnet2Vnet.

   ```azurepowershell-interactive
- New-AzVirtualNetworkGatewayConnection -Name RM-Classic -ResourceGroupName RG1 `
+ New-AzVirtualNetworkGatewayConnection -Name RM-Classic -ResourceGroupName RMRG `
-Location "East US" -VirtualNetworkGateway1 ` $vnet02gateway -LocalNetworkGateway2 ` $vnet01gateway -ConnectionType IPsec -RoutingWeight 10 -SharedKey 'abc123' ```
-## <a name="verify"></a>Section 5 - Verify your connections
+## <a name="verify"></a>Verify your connections
-### To verify the connection from your classic VNet to your Resource Manager VNet
+### Classic VNet to RM VNet
-#### PowerShell
+You can verify that your connection succeeded by using the 'Get-AzureVNetConnection' cmdlet. This cmdlet must be run locally on your computer.
+1. Use the following cmdlet example, configuring the values to match your own. Use the virtual network name as found in the network configuration file; if the name contains spaces, enclose it in quotes.
-#### Azure portal
+ ```azurepowershell
+ Get-AzureVNetConnection "ClassicVNet"
+ ```
+
+1. After the cmdlet has finished, view the values. In the example below, the Connectivity State shows as 'Connected' and you can see ingress and egress bytes.
+ ```output
+ ConnectivityState : Connected
+ EgressBytesTransferred : 0
+ IngressBytesTransferred : 0
+ LastConnectionEstablished : 4/25/2022 4:24:34 PM
+ LastEventID : 24401
+ LastEventMessage : The connectivity state for the local network site 'RMVNetSite' changed from Not Connected to Connected.
+ LastEventTimeStamp : 4/25/2022 4:24:34 PM
+ LocalNetworkSiteName : RMVNetSite
+ OperationDescription :
+ OperationId :
+ OperationStatus :
+ ```
+### RM VNet to classic VNet
-### To verify the connection from your Resource Manager VNet to your classic VNet
+You can verify that your connection succeeded by using the 'Get-AzVirtualNetworkGatewayConnection' cmdlet, with or without '-Debug'.
-#### PowerShell
+1. Use the following cmdlet example, configuring the values to match your own. If prompted, select 'A' to run 'All'. In the example, '-Name' refers to the name of the connection that you want to test.
+ ```azurepowershell-interactive
+ Get-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName TestRG1
+ ```
-#### Azure portal
+1. After the cmdlet has finished, view the values. In the example below, the connection status shows as 'Connected' and you can see ingress and egress bytes.
+ ```output
+ "connectionStatus": "Connected",
+ "ingressBytesTransferred": 33509044,
+ "egressBytesTransferred": 4142431
+ ```
-## <a name="faq"></a>VNet-to-VNet FAQ
+## Next steps
+For more information about VNet-to-VNet connections, see the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md).