Updates from: 05/07/2022 01:07:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Restful Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/restful-technical-profile.md
Your REST API may need to return an error message, such as 'The user was not fou
| Attribute | Required | Description |
| --------- | -------- | ----------- |
| version | Yes | Your REST API version. For example: 1.0.1 |
-| status | Yes | Must be 409 |
+| status | Yes | A number similar to an HTTP response status code. Must be 409. |
| code | No | An error code from the RESTful endpoint provider, which is displayed when `DebugMode` is enabled. |
| requestId | No | A request identifier from the RESTful endpoint provider, which is displayed when `DebugMode` is enabled. |
| userMessage | Yes | An error message that is shown to the user. |
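For reference, a conflict error response that follows this contract might look like the following sketch; the `code`, `requestId`, and message values are illustrative:

```json
{
  "version": "1.0.1",
  "status": 409,
  "code": "API12345",
  "requestId": "50f0bd91-2ff4-4b8f-828f-526503eb8ae1",
  "userMessage": "The user was not found."
}
```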
active-directory-b2c Technical Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technical-overview.md
Read the [User flows and custom policies overview](user-flow-overview.md) articl
## User interface
-In Azure AD B2C, you can craft your users' identity experiences so that the pages that are shown blend seamlessly with the look and feel of your brand. You get nearly full control of the HTML and CSS content presented to your users when they proceed through your application's identity journeys. With this flexibility, you can maintain brand and visual consistency between your application and Azure AD B2C.
+In Azure AD B2C, you can craft your users' identity experiences so that the pages that are shown blend seamlessly with the look and feel of your brand. You get nearly full control of the HTML and CSS content presented to your users when they proceed through your application's identity journeys. (Customizing the pages rendered by third parties when using social accounts is limited to the options provided by the identity provider, and these are outside the control of Azure AD B2C.) With this flexibility, you can maintain brand and visual consistency between your application and Azure AD B2C.
The following diagram shows how Azure AD B2C can communicate using various proto
## Application integration
-When a user wants to sign in to your application, the application initiates an authorization request to a user flow- or custom policy-provided endpoint. The user flow or custom policy defines and controls the user's experience. When they complete a user flow, for example the *sign-up or sign-in* flow, Azure AD B2C generates a token, then redirects the user back to your application.
+When a user wants to sign in to your application, the application initiates an authorization request to a user-flow or custom policy-provided endpoint. The user flow or custom policy defines and controls the user's experience. When they complete a user flow, for example the *sign up or sign in* flow, Azure AD B2C generates a token, then redirects the user back to your application. This token is specific to Azure AD B2C and is not to be confused with the token issued by third-party identity providers when using social accounts. For information about how to use third-party tokens, see [Pass an identity provider access token to your application in Azure Active Directory B2C](idp-pass-through-user-flow.md).
:::image type="content" source="media/technical-overview/app-integration.png" alt-text="Mobile app with arrows showing flow between Azure AD B2C sign-in page.":::
Azure AD B2C evaluates each sign-in event and ensures that all policy requiremen
## Password complexity
-During sign up or password reset, your users must supply a password that meets complexity rules. By default, Azure AD B2C enforces a strong password policy. Azure AD B2C also provides configuration options for specifying the complexity requirements of the passwords your customers use.
+During sign up or password reset, your users must supply a password that meets complexity rules. By default, Azure AD B2C enforces a strong password policy. Azure AD B2C also provides configuration options for specifying the complexity requirements of the passwords your customers use when they use local accounts.
![Screenshot of password complexity user experience](media/technical-overview/password-complexity.png)
Sessions are modeled as encrypted data, with the decryption key known only to th
### Access to user data
Azure AD B2C tenants share many characteristics with enterprise Azure Active Directory tenants used for employees and partners. Shared aspects include mechanisms for viewing administrative roles, assigning roles, and auditing activities.
You can assign roles to control who can perform certain administrative actions in Azure AD B2C, including:
active-directory-b2c Userjourneys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userjourneys.md
User journeys specify explicit paths through which a policy allows a relying par
These user journeys can be considered as templates available to satisfy the core need of the various relying parties of the community of interest. User journeys facilitate the definition of the relying party part of a policy. A policy can define multiple user journeys. Each user journey is a sequence of orchestration steps.
-To define the user journeys supported by the policy, a **UserJourneys** element is added under the top-level element of the policy file.
+To define the user journeys supported by the policy, a `UserJourneys` element is added under the top-level `TrustFrameworkPolicy` element of the policy file.
+
+```xml
+<TrustFrameworkPolicy ...>
+ ...
+ <UserJourneys>
+ ...
+ </UserJourneys>
+</TrustFrameworkPolicy>
+```
The **UserJourneys** element contains the following element:
The **AuthorizationTechnicalProfiles** element contains the following element:
| Element | Occurrences | Description |
| ------- | ----------- | ----------- |
-| AuthorizationTechnicalProfile | 0:1 | List of authorization technical profiles. |
+| AuthorizationTechnicalProfile | 0:1 | The technical profile reference used to authorize the user. |
The **AuthorizationTechnicalProfile** element contains the following attribute:

| Attribute | Required | Description |
| --------- | -------- | ----------- |
-| TechnicalProfileReferenceId | Yes | The identifier of the technical profile that is to be executed. |
+| ReferenceId | Yes | The identifier of the technical profile that is to be executed. |
The following example shows a user journey element with authorization technical profiles:
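The example itself is elided here; the following is a minimal sketch of the shape such an element takes, assuming a hypothetical `REST-AcquireAccessToken` technical profile defined elsewhere in the policy:

```xml
<UserJourney Id="SignUpOrSignIn">
  <AuthorizationTechnicalProfiles>
    <!-- Hypothetical technical profile that acquires the authorization token -->
    <AuthorizationTechnicalProfile ReferenceId="REST-AcquireAccessToken" />
  </AuthorizationTechnicalProfiles>
  <OrchestrationSteps>
    ...
  </OrchestrationSteps>
</UserJourney>
```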
Orchestration steps can be conditionally executed based on preconditions defined
To specify the ordered list of orchestration steps, an **OrchestrationSteps** element is added as part of the policy. This element is required.
+```xml
+<UserJourney Id="SignUpOrSignIn">
+ <OrchestrationSteps>
+ <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ ...
+```
The **OrchestrationSteps** element contains the following element:

| Element | Occurrences | Description |
| ------- | ----------- | ----------- |
The **OrchestrationStep** element contains the following attributes:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
| `Order` | Yes | The order of the orchestration steps. |
-| `Type` | Yes | The type of the orchestration step. Possible values: <ul><li>**ClaimsProviderSelection** - Indicates that the orchestration step presents various claims providers to the user to select one.</li><li>**CombinedSignInAndSignUp** - Indicates that the orchestration step presents a combined social provider sign-in and local account sign-up page.</li><li>**ClaimsExchange** - Indicates that the orchestration step exchanges claims with a claims provider.</li><li>**GetClaims** - Specifies that the orchestration step should process claim data sent to Azure AD B2C from the relying party via its `InputClaims` configuration.</li><li>**InvokeSubJourney** - Indicates that the orchestration step exchanges claims with a [sub journey](subjourneys.md) (in public preview).</li><li>**SendClaims** - Indicates that the orchestration step sends the claims to the relying party with a token issued by a claims issuer.</li></ul> |
+| `Type` | Yes | The type of the orchestration step. Possible values: <ul><li>**ClaimsProviderSelection** - Indicates that the orchestration step presents various claims providers to the user to select one.</li><li>**CombinedSignInAndSignUp** - Indicates that the orchestration step presents a combined social provider sign-in and local account sign-up page.</li><li>**ClaimsExchange** - Indicates that the orchestration step exchanges claims with a claims provider.</li><li>**GetClaims** - Specifies that the orchestration step should process claim data sent to Azure AD B2C from the relying party via its `InputClaims` configuration.</li><li>**InvokeSubJourney** - Indicates that the orchestration step exchanges claims with a [sub journey](subjourneys.md).</li><li>**SendClaims** - Indicates that the orchestration step sends the claims to the relying party with a token issued by a claims issuer.</li></ul> |
| ContentDefinitionReferenceId | No | The identifier of the [content definition](contentdefinitions.md) associated with this orchestration step. Usually the content definition reference identifier is defined in the self-asserted technical profile. But there are some cases when Azure AD B2C needs to display something without a technical profile. For example, if the type of the orchestration step is `ClaimsProviderSelection` or `CombinedSignInAndSignUp`, Azure AD B2C needs to display the identity provider selection without having a technical profile. |
| CpimIssuerTechnicalProfileReferenceId | No | Used when the type of the orchestration step is `SendClaims`. This property defines the technical profile identifier of the claims provider that issues the token for the relying party. If absent, no relying party token is created. |
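For example, a final `SendClaims` step that issues the relying party token typically references a token issuer technical profile. A minimal sketch, assuming a `JwtIssuer` technical profile is defined elsewhere in the policy:

```xml
<!-- Issues the relying party token; assumes a JwtIssuer technical profile exists -->
<OrchestrationStep Order="7" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
```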
The **Precondition** element contains the following attributes:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
| `Type` | Yes | The type of check or query to perform for this precondition. The value can be **ClaimsExist**, which specifies that the actions should be performed if the specified claims exist in the user's current claim set, or **ClaimEquals**, which specifies that the actions should be performed if the specified claim exists and its value is equal to the specified value. |
-| `ExecuteActionsIf` | Yes | Decides how the precondition is considered satisfied. Possible values: `true` (default), or `false`. If the value is set to `true`, it's considered satisfied when the claim matches the precondition. If the value is set to `false`, it's considered satisfied when the claim doesn't match the precondition. |
+| `ExecuteActionsIf` | Yes | Decides how the precondition is considered satisfied. Possible values: `true` or `false`. If the value is set to `true`, it's considered satisfied when the claim matches the precondition. If the value is set to `false`, it's considered satisfied when the claim doesn't match the precondition. |
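As an illustration, the following sketch skips an orchestration step when the user already has an `objectId` claim (that is, an existing account); because `ExecuteActionsIf` is `true`, the action runs when the claim check matches:

```xml
<Preconditions>
  <!-- Skip this step if the objectId claim already exists in the claim set -->
  <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
    <Value>objectId</Value>
    <Action>SkipThisOrchestrationStep</Action>
  </Precondition>
</Preconditions>
```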
The **Precondition** element contains the following elements:
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
Some organizations haven't moved to hybrid authentication for the following reas
To help with the move to hybrid authentication, you can configure Azure AD to let users sign in with their email as an alternate login ID. For example, if *Contoso* rebranded to *Fabrikam*, rather than continuing to sign in with the legacy `ana@contoso.com` UPN, email as an alternate login ID can be used. To access an application or service, users would sign in to Azure AD using their non-UPN email, such as `ana@fabrikam.com`.
-![Diagram of email as an alternate login ID.](media/howto-authentication-use-email-signin/email-alternate-login-id.png)
+![Diagram of email as an alternate login I D.](media/howto-authentication-use-email-signin/email-alternate-login-id.png)
This article shows you how to enable and use email as an alternate login ID.
This article shows you how to enable and use email as an alternate login ID.
Here's what you need to know about email as an alternate login ID:

* The feature is available in Azure AD Free edition and higher.
-* The feature enables sign-in with *ProxyAddresses*, in addition to UPN, for cloud-authenticated Azure AD users. More on how this applies to Azure AD B2B scenarios in the [B2B](#b2b-guest-user-sign-in-with-an-email-address) section.
+* The feature enables sign-in with *ProxyAddresses*, in addition to UPN, for cloud-authenticated Azure AD users. More on how this applies to Azure AD business-to-business (B2B) collaboration in the [B2B](#b2b-guest-user-sign-in-with-an-email-address) section.
* When a user signs in with a non-UPN email, the `unique_name` and `preferred_username` claims (if present) in the [ID token](../develop/id-tokens.md) will return the non-UPN email.
* The feature supports managed authentication with Password Hash Sync (PHS) or Pass-Through Authentication (PTA).
* There are two options for configuring the feature:
In the current preview state, the following limitations apply to email as an alt
  * [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md)
  * [Azure AD joined devices](../devices/concept-azure-ad-join.md)
  * [Azure AD registered devices](../devices/concept-azure-ad-register.md)
- * [Applications using Resource Owner Password Credentials (ROPC)](../develop/v2-oauth-ropc.md)
- * Applications using legacy authentication such as POP3 and SMTP
+ * [Resource Owner Password Credentials (ROPC)](../develop/v2-oauth-ropc.md)
+ * Legacy authentication such as POP3 and SMTP
* Skype for Business
- * Microsoft Office on macOS
  * Microsoft 365 Admin Portal
* **Unsupported apps** - Some third-party applications may not work as expected if they assume that the `unique_name` or `preferred_username` claims are immutable or will always match a specific user attribute, such as UPN.
-* **Logging** - Changes made to the feature's configuration in HRD policy are not explicitly shown in the audit logs. In addition, the *Sign-in identifier type* field in the sign-in logs may not be always accurate and should not be used to determine whether the feature has been used for sign-in.
+* **Logging** - Changes made to the feature's configuration in HRD policy are not explicitly shown in the audit logs.
* **Staged rollout policy** - The following limitations apply only when the feature is enabled using staged rollout policy: * The feature does not work as expected for users that are included in other staged rollout policies.
In both configuration options, the user submits their username and password to A
One of the user attributes that's automatically synchronized by Azure AD Connect is *ProxyAddresses*. If users have an email address defined in the on-prem AD DS environment as part of the *ProxyAddresses* attribute, it's automatically synchronized to Azure AD. This email address can then be used directly in the Azure AD sign-in process as an alternate login ID.
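As a sketch of how such an address can be defined on-premises, assuming the ActiveDirectory PowerShell module and a hypothetical user *ana*:

```powershell
# Add a secondary SMTP address to the on-premises user account.
# Azure AD Connect then synchronizes the ProxyAddresses attribute to Azure AD.
Set-ADUser -Identity ana -Add @{ proxyAddresses = "smtp:ana@fabrikam.com" }
```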
> [!IMPORTANT]
-> Only emails in verified domains for the tenant are synchronized to Azure AD. Each Azure AD tenant has one or more verified domains, for which you have proven ownership, and are uniquely bound to you tenant.
+> Only emails in verified domains for the tenant are synchronized to Azure AD. Each Azure AD tenant has one or more verified domains, for which you have proven ownership and which are uniquely bound to your tenant.
>
> For more information, see [Add and verify a custom domain name in Azure AD][verify-domain].

## B2B guest user sign-in with an email address
-![Diagram of email as an alternate login ID for B2B guest user sign-in.](media/howto-authentication-use-email-signin/email-alternate-login-id-b2b.png)
+![Diagram of email as an alternate login I D for B 2 B guest user sign-in.](media/howto-authentication-use-email-signin/email-alternate-login-id-b2b.png)
-Email as an alternate login ID applies to [Azure AD business-to-business (B2B) collaboration](../external-identities/what-is-b2b.md) under a "bring your own sign-in identifiers" model. When email as an alternate login ID is enabled in the home tenant, Azure AD users can perform guest sign in with non-UPN email on the resource tenanted endpoint. No action is required from the resource tenant to enable this functionality.
+Email as an alternate login ID applies to [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) under a "bring your own sign-in identifiers" model. When email as an alternate login ID is enabled in the home tenant, Azure AD users can perform guest sign in with non-UPN email on the resource tenanted endpoint. No action is required from the resource tenant to enable this functionality.
## Enable user sign-in with an email address
Email as an alternate login ID applies to [Azure AD business-to-business (B2B) c
Once users with the *ProxyAddresses* attribute applied are synchronized to Azure AD using Azure AD Connect, you need to enable the feature for users to sign in with email as an alternate login ID for your tenant. This feature tells the Azure AD login servers to not only check the sign-in identifier against UPN values, but also against *ProxyAddresses* values for the email address.
-During preview, you can currently only enable the sign-in with email as an alternate login ID feature using PowerShell. You need *global administrator* permissions to complete the following steps:
+During preview, you need *global administrator* permissions to enable sign-in with email as an alternate login ID. You can use either the Azure portal or PowerShell to set up the feature.
-1. Open a PowerShell session as an administrator, then install the *AzureADPreview* module using the [Install-Module][Install-Module] cmdlet:
+### Azure portal
+
+1. Sign in to the [Azure portal][azure-portal] as a *global administrator*.
+1. Search for and select **Azure Active Directory**.
+1. From the navigation menu on the left-hand side of the Azure Active Directory window, select **Azure AD Connect > Email as alternate login ID**.
+
+ ![Screenshot of email as alternate login I D option in the Azure portal.](media/howto-authentication-use-email-signin/azure-ad-connect-screen.png)
+
+1. Click the checkbox next to *Email as an alternate login ID*.
+1. Click **Save**.
+
+ ![Screenshot of email as alternate login I D blade in the Azure portal.](media/howto-authentication-use-email-signin/email-alternate-login-id-screen.png)
+
+With the policy applied, it can take up to 1 hour to propagate and for users to be able to sign in using their alternate login ID.
+
+### PowerShell
+
+> [!NOTE]
+> This configuration option uses HRD policy. For more information, see [homeRealmDiscoveryPolicy resource type](/graph/api/resources/homeRealmDiscoveryPolicy?view=graph-rest-1.0).
+
+Once users with the *ProxyAddresses* attribute applied are synchronized to Azure AD using Azure AD Connect, you need to enable the feature for users to sign in with email as an alternate login ID for your tenant. This feature tells the Azure AD login servers to not only check the sign-in identifier against UPN values, but also against *ProxyAddresses* values for the email address.
+
+During preview, you can currently only enable email as an alternate login ID using PowerShell or the Microsoft Graph API. You need *global administrator* privileges to complete the following steps:
+
+1. Open a PowerShell session as an administrator, then install the *Microsoft.Graph* module using the `Install-Module` cmdlet:
```powershell
- Install-Module AzureADPreview
+ Install-Module Microsoft.Graph
```
- If prompted, select **Y** to install NuGet or to install from an untrusted repository.
+ For more information on installation, see [Install the Microsoft Graph PowerShell SDK](/graph/powershell/installation).
-1. Sign in to your Azure AD tenant as a *global administrator* using the [Connect-AzureAD][Connect-AzureAD] cmdlet:
+1. Sign in to your Azure AD tenant using the `Connect-MgGraph` cmdlet:
```powershell
- Connect-AzureAD
+ Connect-MgGraph -Scopes "Policy.ReadWrite.ApplicationConfiguration" -TenantId organizations
```
- The command returns information about your account, environment, and tenant ID.
+ The command will ask you to authenticate using a web browser.
-1. Check if the *HomeRealmDiscoveryPolicy* already exists in your tenant using the [Get-AzureADPolicy][Get-AzureADPolicy] cmdlet as follows:
+1. Check if a *HomeRealmDiscoveryPolicy* already exists in your tenant using the `Get-MgPolicyHomeRealmDiscoveryPolicy` cmdlet as follows:
```powershell
- Get-AzureADPolicy | Where-Object Type -eq "HomeRealmDiscoveryPolicy" | Format-List *
+ Get-MgPolicyHomeRealmDiscoveryPolicy
 ```

1. If there's no policy currently configured, the command returns nothing. If a policy is returned, skip this step and move on to the next step to update an existing policy.
- To add the *HomeRealmDiscoveryPolicy* policy to the tenant, use the [New-AzureADPolicy][New-AzureADPolicy] cmdlet and set the *AlternateIdLogin* attribute to *"Enabled": true* as shown in the following example:
+ To add the *HomeRealmDiscoveryPolicy* to the tenant, use the `New-MgPolicyHomeRealmDiscoveryPolicy` cmdlet and set the *AlternateIdLogin* attribute to *"Enabled": true* as shown in the following example:
 ```powershell
 $AzureADPolicyDefinition = @(
During preview, you can currently only enable the sign-in with email as an alter
         }
     } | ConvertTo-JSON -Compress
 )
+
 $AzureADPolicyParameters = @{
     Definition = $AzureADPolicyDefinition
     DisplayName = "BasicAutoAccelerationPolicy"
- IsOrganizationDefault = $true
- Type = "HomeRealmDiscoveryPolicy"
+ AdditionalProperties = @{ IsOrganizationDefault = $true }
}
- New-AzureADPolicy @AzureADPolicyParameters
+
+ New-MgPolicyHomeRealmDiscoveryPolicy @AzureADPolicyParameters
 ```

 When the policy has been successfully created, the command returns the policy ID, as shown in the following example output:

 ```powershell
-   Id                                   DisplayName                 Type                     IsOrganizationDefault
-   --                                   -----------                 ----                     ---------------------
-   5de3afbe-4b7a-4b33-86b0-7bbe308db7f7 BasicAutoAccelerationPolicy HomeRealmDiscoveryPolicy True
+   Definition                                                           DeletedDateTime Description DisplayName                 Id            IsOrganizationDefault
+   ----------                                                           --------------- ----------- -----------                 --            ---------------------
+   {{"HomeRealmDiscoveryPolicy":{"AlternateIdLogin":{"Enabled":true}}}}                              BasicAutoAccelerationPolicy HRD_POLICY_ID True
 ```

1. If there's already a configured policy, check if the *AlternateIdLogin* attribute is enabled, as shown in the following example policy output:

 ```powershell
- Id : 5de3afbe-4b7a-4b33-86b0-7bbe308db7f7
- OdataType :
- AlternativeIdentifier :
- Definition : {{"HomeRealmDiscoveryPolicy" :{"AlternateIdLogin":{"Enabled": true}}}}
- DisplayName : BasicAutoAccelerationPolicy
- IsOrganizationDefault : True
- KeyCredentials : {}
- Type : HomeRealmDiscoveryPolicy
+   Definition                                                           DeletedDateTime Description DisplayName                 Id            IsOrganizationDefault
+   ----------                                                           --------------- ----------- -----------                 --            ---------------------
+   {{"HomeRealmDiscoveryPolicy":{"AlternateIdLogin":{"Enabled":true}}}}                              BasicAutoAccelerationPolicy HRD_POLICY_ID True
```
- If the policy exists but the *AlternateIdLogin* attribute that isn't present or enabled, or if other attributes exist on the policy you wish to preserve, update the existing policy using the [Set-AzureADPolicy][Set-AzureADPolicy] cmdlet.
+ If the policy exists but the *AlternateIdLogin* attribute isn't present or enabled, or if other attributes exist on the policy you wish to preserve, update the existing policy using the `Update-MgPolicyHomeRealmDiscoveryPolicy` cmdlet.
 > [!IMPORTANT]
 > When you update the policy, make sure you include any old settings and the new *AlternateIdLogin* attribute.
- The following example adds the *AlternateIdLogin* attribute and preserves the *AllowCloudPasswordValidation* attribute that may have already been set:
+ The following example adds the *AlternateIdLogin* attribute and preserves the *AllowCloudPasswordValidation* attribute that was previously set:
 ```powershell
 $AzureADPolicyDefinition = @(
During preview, you can currently only enable the sign-in with email as an alter
         }
     } | ConvertTo-JSON -Compress
 )
+
 $AzureADPolicyParameters = @{
- ID = "b581c39c-8fe3-4bb5-b53d-ea3de05abb4b"
- Definition = $AzureADPolicyDefinition
- DisplayName = "BasicAutoAccelerationPolicy"
- IsOrganizationDefault = $true
- Type = "HomeRealmDiscoveryPolicy"
+ HomeRealmDiscoveryPolicyId = "HRD_POLICY_ID"
+ Definition = $AzureADPolicyDefinition
+ DisplayName = "BasicAutoAccelerationPolicy"
+ AdditionalProperties = @{ "IsOrganizationDefault" = $true }
}
-
-    Set-AzureADPolicy @AzureADPolicyParameters
+
+ Update-MgPolicyHomeRealmDiscoveryPolicy @AzureADPolicyParameters
 ```

 Confirm that the updated policy shows your changes and that the *AlternateIdLogin* attribute is now enabled:

 ```powershell
- Get-AzureADPolicy | Where-Object Type -eq "HomeRealmDiscoveryPolicy" | Format-List *
+ Get-MgPolicyHomeRealmDiscoveryPolicy
```
-With the policy applied, it can take up to 1 hour to propagate and for users to be able to sign in using their alternate login ID.
+> [!NOTE]
+> With the policy applied, it can take up to an hour to propagate and for users to be able to sign in using email as an alternate login ID.
+
+### Removing policies
+
+To remove an HRD policy, use the `Remove-MgPolicyHomeRealmDiscoveryPolicy` cmdlet:
+
+```powershell
+Remove-MgPolicyHomeRealmDiscoveryPolicy -HomeRealmDiscoveryPolicyId "HRD_POLICY_ID"
+```
## Enable staged rollout to test user sign-in with an email address
If users have trouble signing in with their email address, review the following
 ```

1. Make sure the user account has their email address set in the *ProxyAddresses* attribute in Azure AD.
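One way to verify the attribute, sketched here with the Microsoft Graph PowerShell SDK and a hypothetical user:

```powershell
# List the ProxyAddresses values synchronized to Azure AD for a user
Get-MgUser -UserId "ana@fabrikam.com" -Property UserPrincipalName,ProxyAddresses |
    Select-Object UserPrincipalName,ProxyAddresses
```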
+### Sign-in logs
+
+You can review the [sign-in logs in Azure AD][sign-in-logs] for more information. Sign-ins with email as an alternate login ID emit `proxyAddress` in the *Sign-in identifier type* field and the submitted username in the *Sign-in identifier* field.
### Conflicting values between cloud-only and synced users

Within a tenant, a cloud-only user's UPN may take on the same value as another user's proxy address synced from the on-premises directory. In this scenario, with the feature enabled, the cloud-only user will not be able to sign in with their UPN. Here are the steps for detecting instances of this issue.
For more information on hybrid identity operations, see [how password hash sync]
[phs-overview]: ../hybrid/how-to-connect-password-hash-synchronization.md
[pta-overview]: ../hybrid/how-to-connect-pta-how-it-works.md
[identity-protection]: ../identity-protection/overview-identity-protection.md#risk-detection-and-remediation
+[sign-in-logs]: ../reports-monitoring/concept-sign-ins.md
<!-- EXTERNAL LINKS -->
+[azure-portal]: https://portal.azure.com
[Install-Module]: /powershell/module/powershellget/install-module
[Connect-AzureAD]: /powershell/module/azuread/connect-azuread
[Get-AzureADPolicy]: /powershell/module/azuread/get-azureadpolicy
active-directory Msal Js Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-sso.md
# Single sign-on with MSAL.js
-Single Sign-On (SSO) enables users to enter their credentials once to sign in and establish a session, which can be reused across multiple applications without requiring to authenticate again. The session provides a seamless experience to the user and reduces the repeated prompts for credentials.
+Single sign-on (SSO) provides a more seamless experience by reducing the number of times your users are asked for their credentials. Users enter their credentials once, and the established session can be reused by other applications on the device without further prompting.
-Azure Active Directory (Azure AD) provides SSO capabilities to applications by setting a session cookie when the user authenticates the first time. The MSAL.js library allows applications to apply a session cookie in a few ways.
+Azure Active Directory (Azure AD) enables SSO by setting a session cookie when a user first authenticates. MSAL.js allows use of the session cookie for SSO between the browser tabs opened for one or several applications.
## SSO between browser tabs
-When your application is open in multiple tabs and you first sign in the user on one tab, the user is also signed in on the other tabs without being prompted. MSAL.js caches the ID token for the user in the browser `localStorage` and will sign the user in to the application on the other open tabs.
+When a user has your application open in several tabs and signs in on one of them, they're signed into the same app open on the other tabs without being prompted. MSAL.js caches the ID token for the user in the browser `localStorage` and will sign the user in to the application on the other open tabs.
By default, MSAL.js uses `sessionStorage`, which doesn't allow the session to be shared between tabs. To get SSO between tabs, make sure to set the `cacheLocation` in MSAL.js to `localStorage` as shown below.

```javascript
const config = {
  auth: {
    clientId: "abcd-ef12-gh34-ikkl-ashdjhlhsdg",
When applications are hosted on the same domain, the user can sign into an app o
When applications are hosted on different domains, the tokens cached on domain A cannot be accessed by MSAL.js in domain B.
-When a user is signed in on domain A navigate to an application on domain B, the user will be redirected or prompted with the sign-in page. Since Azure AD still has the user session cookie, it will sign in the user and no prompt for credentials.
+When a user signed in on domain A navigates to an application on domain B, they're typically redirected or prompted to sign in. Because Azure AD still has the user's session cookie, it signs in the user without prompting for credentials.
+
+If the user has multiple user accounts in a session with Azure AD, the user is prompted to pick an account to sign in with.
-If the user has multiple user accounts in session with Azure AD, the user will be prompted to pick the relevant account to sign in with.
+### Automatic account selection
-### Automatically select account on Azure AD
+When a user is signed in concurrently to multiple Azure AD accounts on the same device, you might need to bypass the account selection prompt.
-In certain cases, the application has access to the user's authentication context and there's a need to bypass the Azure AD account selection prompt when multiple accounts are signed in. Bypassing the Azure AD account selection prompt can be done in a few different ways:
+**Using a session ID**
-**Using Session ID**
+Use the session ID (SID) in silent authentication requests you make with `acquireTokenSilent` in MSAL.js.
-Session ID (SID) is an [optional claim](active-directory-optional-claims.md) that can be configured in the ID tokens. A claim allows the application to identify the user's Azure AD session independent of the user's account name or username. You can pass the SID in the request parameters to the `acquireTokenSilent` call. The `acquireTokenSilent` in the request parameters allow Azure AD to bypass the account selection. SID is bound to the session cookie and won't cross browser contexts.
+To use a SID, add `sid` as an [optional claim](active-directory-optional-claims.md) to your app's ID tokens. The `sid` claim allows an application to identify a user's Azure AD session independent of their account name or username. To learn how to add optional claims like `sid`, see [Provide optional claims to your app](active-directory-optional-claims.md).
+
+The SID is bound to the session cookie and won't cross browser contexts. You can use the SID only with `acquireTokenSilent`.
```javascript
var request = {
var request = {
});
```
-SID can be used only with silent authentication requests made by `acquireTokenSilent` call in MSAL.js. To find the steps to configure optional claims in your application manifest, see [Provide optional claims to your app](active-directory-optional-claims.md).
-
-**Using Login Hint**
+**Using a login hint**
-If you don't have SID claim configured or need to bypass the account selection prompt in interactive authentication calls, you can do so by providing a `login_hint` in the request parameters and optionally a `domain_hint` as `extraQueryParameters` in the MSAL.js interactive methods (`loginPopup`, `loginRedirect`, `acquireTokenPopup`, and `acquireTokenRedirect`). For example:
+To bypass the account selection prompt typically shown during interactive authentication requests (or for silent requests when you haven't configured the `sid` optional claim), provide a `loginHint`. In multi-tenant applications, also include a `domain_hint`.
```javascript
var request = {
var request = {
msalInstance.loginRedirect(request);
```
-To get the values for login_hint and domain_hint by reading the claims returned in the ID token for the user.
+Get the values for `loginHint` and `domain_hint` from the user's **ID token**:
-- **loginHint** should be set to the `preferred_username` claim in the ID token.
+- `loginHint`: Use the ID token's `preferred_username` claim value.
-- **domain_hint** is only required to be passed when using the /common authority. The domain hint is determined by tenant ID(tid). If the `tid` claim in the ID token is `9188040d-6c67-4c5b-b112-36a304b66dad` it's consumers. Otherwise, it's organizations.
+- `domain_hint`: Use the ID token's `tid` claim value. Required in requests made by multi-tenant applications that use the */common* authority. Optional for other applications.
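Putting the two hints together, a minimal sketch (the account and tenant values are hypothetical):

```javascript
const request = {
  scopes: ["user.read"],
  loginHint: "ana@fabrikam.com", // from the ID token's preferred_username claim
  extraQueryParameters: { domain_hint: "organizations" }, // needed only with the /common authority
};

msalInstance.loginRedirect(request);
```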
-For more information about **login_hint** and **domain_hint**, see [auth code grant](v2-oauth2-auth-code-flow.md).
+For more information about login hint and domain hint, see [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md).
## SSO without MSAL.js login
active-directory Scenario Web App Call Api Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md
Previously updated : 09/25/2020 Last updated : 05/06/2022 #Customer intent: As an application developer, I want to know how to write a web app that calls web APIs by using the Microsoft identity platform.
You've built your client application object. Now, you'll use it to acquire a token to call a web API. In ASP.NET or ASP.NET Core, calling a web API is done in the controller:

-- Get a token for the web API by using the token cache. To get this token, you call the MSAL `AcquireTokenSilent` method (or the equivalent in Microsoft.Identity.Web).
+- Get a token for the web API by using the token cache. To get this token, you call the Microsoft Authentication Library (MSAL) `AcquireTokenSilent` method (or the equivalent in Microsoft.Identity.Web).
- Call the protected API, passing the access token to it as a parameter.

# [ASP.NET Core](#tab/aspnetcore)
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
Previously updated : 02/15/2022 Last updated : 05/06/2022
active-directory Groups Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-create-rule.md
Previously updated : 09/02/2021 Last updated : 05/05/2022
# Create or update a dynamic group in Azure Active Directory
-In Azure Active Directory (Azure AD), you can use rules to determine group membership based on user or device properties. This article tells how to set up a rule for a dynamic group in the Azure portal.
-Dynamic membership is supported for security groups or Microsoft 365 Groups. When a group membership rule is applied, user and device attributes are evaluated for matches with the membership rule. When an attribute changes for a user or device, all dynamic group rules in the organization are processed for membership changes. Users and devices are added or removed if they meet the conditions for a group. Security groups can be used for either devices or users, but Microsoft 365 Groups can be only user groups. Using Dynamic groups requires Azure AD premium P1 license or Intune for Education license. See [Dynamic membership rules for groups](./groups-dynamic-membership.md) for more details.
+In Azure Active Directory (Azure AD), you can use rules to determine group membership based on user or device properties. This article tells you how to set up a rule for a dynamic group in the Azure portal. Dynamic membership is supported for security groups and Microsoft 365 Groups. When a group membership rule is applied, user and device attributes are evaluated for matches with the membership rule. When an attribute changes for a user or device, all dynamic group rules in the organization are processed for membership changes. Users and devices are added or removed if they meet the conditions for a group. Security groups can be used for either devices or users, but Microsoft 365 Groups can be only user groups. Using dynamic groups requires an Azure AD Premium P1 license or an Intune for Education license. See [Dynamic membership rules for groups](./groups-dynamic-membership.md) for more details.
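For example, a membership rule combines property checks using the documented rule syntax; the following sketch (with illustrative values) would include all enabled users in a given department:

```
(user.department -eq "Marketing") -and (user.accountEnabled -eq true)
```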
## Rule builder in the Azure portal
For examples of syntax, supported properties, operators, and values for a member
1. Search for and select **Groups**. 1. Select **All groups**, and select **New group**.
- ![Select the command to add new group](./media/groups-create-rule/create-new-group-azure-active-directory.png)
+ ![Screenshot showing how to select the "add new group" action](./media/groups-create-rule/create-new-group-azure-active-directory.png)
1. On the **Group** page, enter a name and description for the new group. Select a **Membership type** for either users or devices, and then select **Add dynamic query**. The rule builder supports up to five expressions. To add more than five expressions, you must use the text box.
If the rule you entered isn't valid, an explanation of why the rule couldn't be
1. Select a group to open its profile. 1. On the profile page for the group, select **Dynamic membership rules**. The rule builder supports up to five expressions. To add more than five expressions, you must use the text box.
- ![Add membership rule for a dynamic group](./media/groups-create-rule/update-dynamic-group-rule.png)
+ ![Screenshot showing how to add a membership rule for a dynamic group](./media/groups-create-rule/update-dynamic-group-rule.png)
1. To see the custom extension properties available for your membership rule: 1. Select **Get custom extension properties**
When a new Microsoft 365 group is created, a welcome email notification is sent
## Check processing status for a rule
-You can see the membership processing status and the last updated date on the **Overview** page for the group.
+You can see the dynamic rule processing status and the last membership change date on the **Overview** page for the group.
- ![display of dynamic group status](./media/groups-create-rule/group-status.png)
+ ![Diagram of dynamic group status](./media/groups-create-rule/group-status.png)
-The following status messages can be shown for **Membership processing** status:
+The following status messages can be shown for **Dynamic rule processing** status:
- **Evaluating**: The group change has been received and the updates are being evaluated.
- **Processing**: Updates are being processed.
The following status messages can be shown for **Membership processing** status:
- **Processing error**: Processing couldn't be completed because of an error evaluating the membership rule.
- **Update paused**: Dynamic membership rule updates have been paused by the administrator. MembershipRuleProcessingState is set to "Paused".
-The following status messages can be shown for **Membership last updated** status:
+The following status messages can be shown for **Last membership change** status:
- &lt;**Date and time**&gt;: The last time the membership was updated.
- **In Progress**: Updates are currently in progress.
The following status messages can be shown for **Membership last updated** statu
If an error occurs while processing the membership rule for a specific group, an alert is shown on the top of the **Overview page** for the group. If no pending dynamic membership updates can be processed for all the groups within the organization for more than 24 hours, an alert is shown on the top of **All groups**.
-![processing error message alerts](./media/groups-create-rule/processing-error.png)
+![Screenshot of processing error message alerts](./media/groups-create-rule/processing-error.png)
-These articles provide additional information on groups in Azure Active Directory.
+## Next steps
+
+The following articles provide additional information on how to use groups in Azure Active Directory.
- [See existing groups](../fundamentals/active-directory-groups-view-azure-portal.md)
- [Create a new group and add members](../fundamentals/active-directory-groups-create-azure-portal.md)
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md
For B2B collaboration with other Azure AD organizations, you should also review
## Configure settings in the portal
-1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account and open the **Azure Active Directory** service.
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account and open the **Azure Active Directory** service.
1. Select **External Identities** > **External collaboration settings**.
1. Under **Guest user access**, choose the level of access you want guest users to have:
active-directory My Apps Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/my-apps-deployment-plan.md
- Title: Plan My Apps configuration
-description: Planning guide to effectively use My Apps in your organization.
-------- Previously updated : 09/02/2021----
-# Plan Azure Active Directory My Apps configuration
-
-> [!NOTE]
-> This article is designed for IT professionals who need to plan the configuration of their organization's My Apps portal.
->
-> **For end user documentation, see [Sign in and start apps from the My Apps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510)**.
-
-Azure Active Directory (Azure AD) My Apps is a web-based portal for launching and managing apps. The My Apps page gives users a single place to start their work and find all the applications to which they have access. Users access My Apps at [https://myapps.microsoft.com](https://myapps.microsoft.com/).
-
-> [!VIDEO https://www.youtube.com/embed/atj6Ivn5m0k]
-
-## Why configure My Apps
-
-The My Apps portal is available to users by default and cannot be turned off. It's important to configure it so that users have the best possible experience, and the portal stays useful.
-
-Any application in the Azure Active Directory enterprise applications list appears when both of the following conditions are met:
-
-* The visibility property for the app is set to true.
-* The app is assigned to any user or group. It appears for assigned users.
-
-Configuring the portal ensures that the right people can easily find the right apps.
-
-### How is the My Apps portal used?
-
-Users access the My Apps portal to:
-
-* Discover and access all their organization's Azure AD-connected applications to which they have access.
 * It's best to ensure apps are configured for single sign-on (SSO) to provide users the best experience.
-* Request access to new apps that are configured for self-service.
-* Create personal collections of apps.
-* Manage access to apps for others when assigned the role of group owner or delegated control for the group used to grant access to the application(s).
-
-Administrators can configure:
-
-* [Consent experiences](../manage-apps/configure-user-consent.md) including terms of service.
-* [Self-service application discovery and access requests](../manage-apps/access-panel-manage-self-service-access.md).
-* [Collections of applications](../manage-apps/access-panel-collections.md).
-* Assignment of icons to applications
-* User-friendly names for applications
-* Banner logo in the My Apps header. For more information about assigning a banner logo, see [Add branding to your organization's Azure Active Directory sign-in page](../fundamentals/customize-branding.md)
-
-## Plan consent configuration
-
-### User consent for applications
-
-Before a user can sign in to an application and the application can access your organization's data, a user or an admin must grant the application permissions. You can configure whether user consent is allowed, and under which conditions. **Microsoft recommends you only allow user consent for applications from verified publishers.**
-
-For more information, see [Configure how end-users consent to applications](../manage-apps/configure-user-consent.md)
-
-### Group owner consent for apps accessing data
-
-Group and team owners can authorize applications, such as applications published by third-party vendors, to access your organization's data associated with a group. See [Resource-specific consent in Microsoft Teams](/microsoftteams/resource-specific-consent) to learn more.
-
-You can configure whether you'd like to allow or disable this feature.
-
-For more information, see [Configure group consent permissions](../manage-apps/configure-user-consent-groups.md).
-
-### Plan communications
-
-Communication is critical to the success of any new service. Proactively inform your users how and when their experience will change and how to gain support if needed.
-
-Although My Apps doesn't typically create user issues, it's important to prepare. Create guides and a list of all resources for your support personnel before your launch.
-
-#### Communications templates
-
-Microsoft provides [customizable templates for emails and other communications](https://aka.ms/APTemplates) for My Apps. You can adapt these assets for use in other communications channels as appropriate for your corporate culture.
-
-## Plan your SSO configuration
-
-It's best if SSO is enabled for all apps in the My Apps portal so that users have a seamless experience without the need to enter their credentials.
-
-Azure AD supports multiple SSO options.
-
-* To learn more, see [Single sign-on options in Azure AD](sso-options.md).
-* To learn more about using Azure AD as an identity provider for an app, see the [Quickstart Series on Application Management](../manage-apps/view-applications-portal.md).
-
-### Use federated SSO if possible
-
-For the best user experience with the My Apps page, start with the integration of cloud applications that are available for federated single sign-on (SSO), such as OpenID Connect or SAML. Federated SSO allows users to have a consistent one-click experience when signing in to applications and tends to be more robust in configuration control.
-
-For more information about configuring single sign-on for your application, see [Plan single sign-on deployment](plan-sso-deployment.md).
-
-### Considerations for special SSO circumstances
-
-> [!TIP]
-> For a better user experience, use Federated SSO with Azure AD (OpenID Connect/SAML) when an application supports it, instead of password-based SSO and ADFS.
-
-To sign in to password-based SSO applications, or to applications that are accessed by Azure AD Application Proxy, users need to install and use the My Apps secure sign-in extension. Users are prompted to install the extension when they first launch the password-based SSO or Application Proxy application.
-
-![Screenshot of the prompt to install the My Apps secure sign-in extension](./media/my-apps-deployment-plan/ap-dp-install-myapps.png)
-
-For detailed information on the extension, see [Installing My Apps browser extension](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-If you must integrate these applications, you should define a mechanism to deploy the extension at scale with [supported browsers](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510). Options include:
-
-* [User-driven download and configuration for Chrome, Firefox, Microsoft Edge, or IE](../user-help/my-apps-portal-end-user-access.md)
-* [Configuration Manager for Internet Explorer](/mem/configmgr/core/clients/deploy/deploy-clients-to-windows-computers)
-
-The extension allows users to launch any app from its search bar, find recently used applications, and follow a link to the My Apps page.
-
-![Screenshot of my apps extension search](./media/my-apps-deployment-plan/my-app-extsension.png)
-
-#### Plan for mobile access
-
-For applications that use password-based SSO or are accessed by using [Microsoft Azure AD Application Proxy](../app-proxy/application-proxy.md), you must use Microsoft Edge mobile. For other applications, any mobile browser can be used. Be sure to enable password-based SSO in your mobile settings, which can be off by default. For example, **Settings -> Privacy and Security -> Azure AD Password SSO**.
-
-### Linked SSO
-
-Applications can be added by using the Linked SSO option. You can configure an application tile that links to the URL of your existing web application. Linked SSO allows you to start directing users to the My Apps portal without migrating all the applications to Azure AD SSO. You can gradually move to Azure AD SSO-configured applications without disrupting the users' experience.
-
-## Plan the user experience
-
-By default, all applications to which the user has access and all applications configured for self-service discovery appear in the user's My Apps panel. For many organizations, this can be a very large list, which can become burdensome if not organized.
-
-### Plan My Apps collections
-
-Every Azure AD application to which a user has access will appear on My Apps in the **Apps** collection. Use collections to group related applications and present them on a separate tab, making them easier to find. For example, you can use collections to create logical groupings of applications for specific job roles, tasks, projects, and so on.
-
-End users can also customize their experience by
-
-* Creating their own app collections.
-* [Hiding and reordering app collections](access-panel-collections.md).
-
-![Screenshot of self-service configuration](./media/my-apps-deployment-plan/collections.png)
-
-There's an option to hide apps from the My Apps portal, while still allowing access from other locations, such as the Microsoft 365 portal. Learn more: [Hide an application from user's experience in Azure Active Directory](hide-application-from-user-portal.md).
-
-> [!IMPORTANT]
-> Only 950 apps to which a user has access can be accessed through My Apps. This includes apps hidden by either the user or the administrator.
-
-### Plan self-service group management membership
-
-You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure AD. The owner of the group can approve or deny membership requests and delegate control of group membership. Self-service group management features aren't available for mail-enabled security groups or distribution lists.
-
-To plan for self-service group membership, determine if you'll allow all users in your organization to create and manage groups or only a subset of users. If you're allowing a subset of users, you'll need to set up a group to which those people are added.
-
-See [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md) for details on enabling these scenarios.
-
-### Plan self-service application access
-
-You can enable users to discover and request access to applications via the My Apps panel. To do so, you must first
-
-* enable self-service group management
-* enable app for SSO
-* create a group for application access
-
-![Screen shot of My Apps self service configuration](./media/my-apps-deployment-plan/my-apps-self-service.png)
-
-When users request access, they're requesting access to the underlying group, and group owners can be delegated permission to manage the group membership and thus application access. Approval workflows are available for explicit approval to access applications. Users who are approvers will receive notifications within the My Apps portal when there are pending requests for access to the application.
-
-## Plan reporting and auditing
-
-Azure AD provides [reports that offer technical and business insights](../reports-monitoring/overview-reports.md). Work with your business and technical application owners to assume ownership of these reports and to consume them regularly. The following table provides some examples of typical reporting scenarios.
-
-| Example| Manage risk| Increase productivity| Governance and compliance |
-| - | - | - | -|
-| Report types| Application permissions and usage| Account provisioning activity| Review who is accessing the applications |
-| Potential actions| Audit access; revoke permissions| Remediate any provisioning errors| Revoke access |
-
-Azure AD keeps most auditing data for 30 days. The data is available via Azure Admin Portal or API for you to download into your analysis systems.
-
-#### Auditing
-
-Audit logs for application access are available for 30 days. If your organization requires longer retention, export the logs to a Security Information and Event Management (SIEM) tool, such as Splunk or ArcSight.
-
-For auditing, reporting, and disaster recovery backups, document the required frequency of download, what the target system is, and who's responsible for managing each backup. You might not need separate auditing and reporting backups. Your disaster recovery backup should be a separate entity.
-
-## Validate your deployment
-
-Ensure your My Apps deployment is thoroughly tested and a rollback plan is in place.
-
-Conduct the following tests with both corporate-owned devices and personal devices. These test cases should also reflect your business use cases. Following are a few cases based on typical technical scenarios. Add others specific to your needs.
-
-#### Application SSO access test case examples:
-
-| Business case| Expected result |
-| - | - |
| User signs in to the My Apps portal| User can sign in and see their applications |
-| User launches a federated SSO application| User is automatically signed in to the application |
-| User launches a password SSO application for the first time| User needs to install the My Apps extension |
-| User launches a password SSO application a subsequent time| User is automatically signed in to the application |
-| User launches an app from Microsoft 365 Portal| User is automatically signed in to the application |
-| User launches an app from the Managed Browser| User is automatically signed in to the application |
-
-#### Application self-service capabilities test case examples
-
-| Business case| Expected result |
-| - | - |
-| User can manage membership to the application| User can add/remove members who have access to the app |
| User can edit the application| User can edit the application's description and credentials for password SSO applications |
-
-### Rollback steps
-
-It's important to plan what to do if your deployment doesn't go as planned. If SSO configuration fails during deployment, you must understand how to [troubleshoot SSO issues](../hybrid/tshoot-connect-sso.md) and reduce impact to your users. In extreme circumstances, you might need to [roll back SSO](plan-sso-deployment.md).
-
-## Manage your implementation
-
-Use the least privileged role to accomplish a required task within Azure Active Directory. [Review the different roles that are available](../roles/permissions-reference.md) and choose the right one to solve your needs for each persona for this application. Some roles might need to be applied temporarily and removed after the deployment is completed.
-
-| Personas| Roles| Azure AD role |
-| - | - | - |
-| Helpdesk admin| Tier 1 support| None |
-| Identity admin| Configure and debug when issues impact Azure AD| Global admin |
-| Application admin| User attestation in application, configuration on users with permissions| None |
-| Infrastructure admins| Cert rollover owner| Global admin |
-| Business owner/stakeholder| User attestation in application, configuration on users with permissions| None |
-
-You can use [Privileged Identity Management](../privileged-identity-management/pim-configure.md) to manage your roles to provide additional auditing, control, and access review for users with directory permissions.
-
-## Next steps
-
-[Plan a deployment of Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
-
-[Plan an Application Proxy deployment](../app-proxy/application-proxy-deployment-plan.md)
active-directory Myapps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/myapps-overview.md
+
+ Title: My Apps portal overview
+description: Learn about how to manage applications in the My Apps portal.
++++++++ Last updated : 05/05/2022+
+#Customer intent: As an Azure AD administrator, I want to make applications available to users in the My Apps portal.
+++
+# My Apps portal overview
+
+[My Apps](https://myapps.microsoft.com) is a web-based portal that is used for managing and launching applications in Azure Active Directory (Azure AD). To work with applications in My Apps, use an organizational account in Azure AD and obtain access granted by the Azure AD administrator. My Apps is separate from the Azure portal and doesn't require users to have an Azure subscription or Microsoft 365 subscription.
+
+Users access the My Apps portal to:
+
+- Discover applications to which they have access
+- Request new applications that the organization supports for self-service
+- Create personal collections of applications
+- Manage access to applications
+
+The following conditions determine whether an application in the enterprise applications list in the Azure portal appears to a user or group in the My Apps portal:
+
+- The application is set to be visible in its properties
+- The application is assigned to the user or group
+
+> [!NOTE]
+> The **Users can only see Office 365 apps in the Office 365 portal** property in the Azure portal determines whether users see Office 365 applications only in the Office 365 portal. If this setting is set to **No**, users can see Office 365 applications in both the My Apps portal and the Office 365 portal. This setting can be found under **Manage** in **Enterprise applications > User settings**.
+
+Administrators can configure:
+
+- Consent experiences including terms of service
+- Self-service application discovery and access requests
+- Collections of applications
+- Company and application branding
+
+## Understand application properties
+
+Properties that are defined for an application can affect how the user interacts with it in the My Apps portal.
+
+- **Enabled for users to sign in?** - If this property is set to **Yes**, then assigned users are able to sign in to the application from the My Apps portal.
+- **Name** - The name of the application that users see on the My Apps portal. Administrators see the name when they manage access to the application.
+- **Homepage URL** - The URL that is launched when the application is selected in the My Apps portal.
+- **Logo** - The application logo that users see on the My Apps portal.
+- **Visible to users** - Makes the application visible in the My Apps portal. When this value is set to **Yes**, applications may still not appear in the My Apps portal if they don't yet have users or groups assigned to them. Only assigned users are able to see the application in the My Apps portal.
+
+For more information, see [Properties of an enterprise application](application-properties.md).
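Several of these properties map to the Microsoft Graph `servicePrincipal` resource and can be toggled outside the portal. The following is a minimal sketch, not the documented procedure, and the object ID shown is a hypothetical placeholder:

```azurecli
# Hypothetical object ID of the enterprise application's service principal.
spObjectId=00000000-0000-0000-0000-000000000000

# Disable sign-in for the application; assigned users can no longer sign in from My Apps.
az rest --method PATCH \
    --uri "https://graph.microsoft.com/v1.0/servicePrincipals/$spObjectId" \
    --body '{"accountEnabled": false}'
```

Setting `accountEnabled` back to `true` re-enables sign-in without changing any assignments.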
+
+### Discover applications
+
+When signed in to the My Apps portal, users see the applications that have been made visible to them. For an application to be visible in the My Apps portal, set the appropriate properties in the Azure portal. Also in the Azure portal, assign the application to the appropriate users or groups.
+
+In the My Apps portal, enter an application name in the search box at the top of the page to find an application. The listed applications can be shown in a **List view** or a **Grid view**.
++
+> [!IMPORTANT]
+> It can take several minutes for an application to appear in the My Apps portal after it has been added to the tenant in the Azure portal. There may also be a delay in how soon users can access the application after it has been added.
+
+Applications can be hidden. For more information, see [Hide an Enterprise application](hide-application-from-user-portal.md).
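Hiding can also be scripted. As a hedged sketch (the object ID is a placeholder), adding the `HideApp` tag to the service principal hides the tile while keeping access intact; note that a PATCH to `tags` replaces the whole collection:

```azurecli
# Hide the application tile in My Apps by tagging its service principal.
# PATCH replaces the entire tags array, so include any existing tags you want to keep.
az rest --method PATCH \
    --uri "https://graph.microsoft.com/v1.0/servicePrincipals/00000000-0000-0000-0000-000000000000" \
    --body '{"tags": ["HideApp"]}'
```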
+
+## Assign company branding
+
+In the Azure portal, define the logo and name for the application to represent company branding in the My Apps portal. The banner logo appears at the top of the page, such as the Contoso demo logo shown below.
++
+For more information, see [Add branding to your organization's sign-in page](../fundamentals/customize-branding.md).
+
+## Access applications
+
+Multiple factors affect how and whether an application can be accessed by users. Permissions that are assigned to the application can affect what can be done with it. Applications can be configured to allow self-service access, or access may be granted only by an administrator of the tenant.
+
+### My Apps Secure Sign-in Extension
+
+Install the My Apps secure sign-in extension to sign in to some applications. The extension is required for sign-in to password-based SSO applications, or to applications that are accessed by Azure AD Application Proxy. Users are prompted to install the extension when they first launch a password-based single sign-on application or an Application Proxy application.
+
+To integrate these applications, define a mechanism to deploy the extension at scale with supported browsers. Options include:
+
+- User-driven download and configuration for Chrome, Microsoft Edge, or IE
+- Configuration Manager for Internet Explorer
+
+The extension allows users to launch any application from its search bar, find recently used applications, and open the My Apps portal from a link. For applications that use password-based SSO or are accessed by using Microsoft Azure AD Application Proxy, use Microsoft Edge mobile. For other applications, any mobile browser can be used. Be sure to enable password-based SSO in the mobile settings, which can be off by default. For example, **Settings -> Privacy and Security -> Azure AD Password SSO**.
+
+To download and install the extension:
+
+- **Microsoft Edge** - From the Microsoft Store, go to the [My Apps Secure Sign-in Extension](https://microsoftedge.microsoft.com/addons/detail/my-apps-secure-signin-ex/gaaceiggkkiffbfdpmfapegoiohkiipl) feature, and then select **Get** to get the extension for the Microsoft Edge legacy browser.
+- **Google Chrome** - From the Chrome Web Store, go to the [My Apps Secure Sign-in Extension](https://chrome.google.com/webstore/detail/my-apps-secure-sign-in-ex/ggjhpefgjjfobnfoldnjipclpcfbgbhl) feature, and then select **Add to Chrome**.
+
+An icon is added to the right of the address bar, which enables sign in and customization of the extension.
+
+### Permissions
+
+Permissions that have been granted to an application can be reviewed by looking at the permissions tab for it. To access the permissions tab, select the upper right corner of the tile that represents the application and then select **Manage your application**.
+
+The permissions that are shown have been consented to by an administrator or have been consented to by the user. Permissions consented to by the user can be revoked by the user.
+
+The following image shows the `email` permission for Microsoft Graph consented to the application by the administrator of the tenant.
++
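For auditing at scale, the same grants can be read through Microsoft Graph. A small sketch, assuming a placeholder service principal object ID:

```azurecli
# List the OAuth2 permission grants (delegated consents) recorded for the application.
az rest --method GET \
    --uri "https://graph.microsoft.com/v1.0/servicePrincipals/00000000-0000-0000-0000-000000000000/oauth2PermissionGrants"
```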
+### Self-service access
+
+Access can be granted at the tenant level, assigned to specific users, or obtained through self-service access. Before users can self-discover applications from the My Apps portal, enable self-service application access in the Azure portal. This feature is available for applications when added using these methods:
+
+- The Azure AD application gallery
+- Azure AD Application Proxy
+- User or admin consent
+
+Enable users to discover and request access to applications by using the My Apps portal. To do so, complete the following tasks in the Azure portal:
+
+- Enable self-service group management
+- Enable the application for single sign-on
+- Create a group for application access
+
+When users request access, they request access to the underlying group, and group owners can be delegated permission to manage the group membership and application access. Approval workflows are available for explicit approval to access applications. Users who are approvers receive notifications within the My Apps portal when there are pending requests for access to the application.
+
+For more information, see [Enable self-service application assignment](manage-self-service-access.md).
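The group-creation task in the list above can be scripted. A minimal sketch with hypothetical names; the group is then assigned to the application and nominated as the self-service group in the portal:

```azurecli
# Create a security group that will gate access to the application.
az ad group create \
    --display-name "Sales App Users" \
    --mail-nickname "salesappusers"
```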
+
+### Single sign-on
+
+Enable single sign-on (SSO) in the Azure portal for all applications that are made available in the My Apps portal whenever possible. If SSO is set up, users have a seamless experience without the need to enter their credentials. To learn more, see [Single sign-on options in Azure AD](what-is-single-sign-on.md#single-sign-on-options).
+
+Applications can be added by using the Linked SSO option. Configure an application tile that links to the URL of the existing web application. Linked SSO lets you direct users to the My Apps portal without migrating all the applications to Azure AD SSO. Gradually move to Azure AD SSO-configured applications to prevent disrupting the users' experience.
+
+For more information, see [Add linked single sign-on to an application](configure-linked-sign-on.md).
+
+## Create collections
+
+By default, all applications are listed together on a single page. Collections can be used to group together related applications and present them on a separate tab, making them easier to find. For example, use collections to create logical groupings of applications for specific job roles, tasks, projects, and so on. Every application to which a user has access appears in the default Apps collection, but a user can remove applications from the collection.
+
+Users can also customize their experience by:
+
+- Creating their own application collections
+- Hiding and reordering application collections
+
+Applications can be hidden from the My Apps portal by a user or administrator. A hidden application can still be accessed from other locations, such as the Microsoft 365 portal. A user can access at most 950 applications through the My Apps portal.
+
+For more information, see [Create collections on the My Apps portal](access-panel-collections.md).
+
+## Next steps
+
+Learn more about application management in [What is enterprise application management?](what-is-application-management.md)
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Previously updated : 04/26/2022 Last updated : 05/06/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
Last updated 04/26/2022
Azure Active Directory (Azure AD) Verifiable Credentials safeguards your organization with an identity solution that's seamless and decentralized. The service allows you to issue and verify credentials. For issuers, Azure AD provides a service that they can customize and use to issue their own verifiable credentials. For verifiers, the service provides a free REST API that makes it easy to request and accept verifiable credentials in your apps and services.
-In this tutorial, you learn how to configure your Azure AD tenant so it can use this credentials service.
+In this tutorial, you learn how to configure your Azure AD tenant so it can use the verifiable credentials service.
Specifically, you learn how to: > [!div class="checklist"]
->
-> - Set up a service principal.
> - Create an Azure Key Vault instance.
-> - Register an application in Azure AD.
> - Set up the Verifiable Credentials service.
+> - Register an application in Azure AD.
+ The following diagram illustrates the Azure AD Verifiable Credentials architecture and the component you configure.
A Key Vault [access policy](../../key-vault/general/assign-access-policy.md) def
1. To save the changes, select **Save**.
+## Set up Verifiable Credentials
+
+To set up Azure AD Verifiable Credentials, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com/), search for *verifiable credentials*. Then, select **Verifiable Credentials (Preview)**.
+
+1. From the left menu, select **Getting started**.
+
+1. Set up your organization by providing the following information:
+
+ 1. **Organization name**: Enter a name to reference your business within Verifiable Credentials. Your customers don't see this name.
+
+ 1. **Domain**: Enter a domain that's added to a service endpoint in your decentralized identity (DID) document. The domain is what binds your DID to something tangible that the user might know about your business. Microsoft Authenticator and other digital wallets use this information to validate that your DID is linked to your domain. If the wallet can verify the DID, it displays a verified symbol. If the wallet can't verify the DID, it informs the user that the credential was issued by an organization it couldn't validate.
+
+ >[!IMPORTANT]
+ > The domain can't be a redirect. Otherwise, the DID and domain can't be linked. Make sure to use HTTPS for the domain. For example: `https://contoso.com`.
+
+ 1. **Key vault**: Enter the name of the key vault that you created earlier.
+
+1. Select **Save and create credential**.
+
+ ![Screenshot that shows how to set up Verifiable Credentials.](media/verifiable-credentials-configure-tenant/verifiable-credentials-getting-started.png)
## Register an application in Azure AD Azure AD Verifiable Credentials Request Service needs to be able to get access tokens to issue and verify. To get access tokens, register a web application and grant API permission for the Verifiable Credentials Request Service API that you set up in the previous step.
To add the required permissions, follow these steps:
1. Select **Grant admin consent for \<your tenant name\>**.
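The registration and permission grant can also be scripted. The following is a hedged sketch, not the exact tutorial flow: the API's application ID and the permission (app role) ID vary by tenant and are left as placeholders.

```azurecli
# Register a web application (the display name is hypothetical).
appId=$(az ad app create --display-name "vc-request-api-client" --query appId -o tsv)

# Grant an application permission to the Verifiable Credentials Request Service API.
# Replace both placeholders with the values from your tenant.
az ad app permission add --id $appId \
    --api <request-service-api-app-id> \
    --api-permissions <permission-id>=Role

# Grant admin consent for the new permission.
az ad app permission admin-consent --id $appId
```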
-## Set up Verifiable Credentials
-
-To set up Azure AD Verifiable Credentials, follow these steps:
-
-1. In the [Azure portal](https://portal.azure.com/), search for *verifiable credentials*. Then, select **Verifiable Credentials (Preview)**.
-
-1. From the left menu, select **Getting started**.
-
-1. Set up your organization by providing the following information:
-
- 1. **Organization name**: Enter a name to reference your business within Verifiable Credentials. Your customers don't see this name.
-
- 1. **Domain**: Enter a domain that's added to a service endpoint in your decentralized identity (DID) document. The domain is what binds your DID to something tangible that the user might know about your business. Microsoft Authenticator and other digital wallets use this information to validate that your DID is linked to your domain. If the wallet can verify the DID, it displays a verified symbol. If the wallet can't verify the DID, it informs the user that the credential was issued by an organization it couldn't validate.
-
- >[!IMPORTANT]
- > The domain can't be a redirect. Otherwise, the DID and domain can't be linked. Make sure to use HTTPS for the domain. For example: `https://contoso.com`.
-
- 1. **Key vault**: Enter the name of the key vault that you created earlier.
-
-1. Select **Save and create credential**.
-
- ![Screenshots that shows how to set up Verifiable Credentials.](media/verifiable-credentials-configure-tenant/verifiable-credentials-getting-started.png)
## Next steps
aks Enable Host Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-host-encryption.md
ms.devlang: azurecli
# Host-based encryption on Azure Kubernetes Service (AKS)
-With host-based encryption, the data stored on the VM host of your AKS agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service. This means the temp disks are encrypted at rest with platform-managed keys. The cache of OS and data disks is encrypted at rest with either platform-managed keys or customer-managed keys depending on the encryption type set on those disks. By default, when using AKS, OS and data disks are encrypted at rest with platform-managed keys, meaning that the caches for these disks are also by default encrypted at rest with platform-managed keys. You can specify your own managed keys following [Bring your own keys (BYOK) with Azure disks in Azure Kubernetes Service](azure-disk-customer-managed-keys.md). The cache for these disks will then also be encrypted using the key that you specify in this step.
+With host-based encryption, the data stored on the VM host of your AKS agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service. This means the temp disks are encrypted at rest with platform-managed keys. The cache of OS and data disks is encrypted at rest with either platform-managed keys or customer-managed keys depending on the encryption type set on those disks.
+
+By default, when using AKS, OS and data disks use server-side encryption with platform-managed keys. The caches for these disks are also encrypted at rest with platform-managed keys. You can specify your own managed keys following [Bring your own keys (BYOK) with Azure disks in Azure Kubernetes Service](azure-disk-customer-managed-keys.md). The cache for these disks will then also be encrypted using the key that you specify in this step.
Host-based encryption is different than server-side encryption (SSE), which is used by Azure Storage. Azure-managed disks use Azure Storage to automatically encrypt data at rest when saving data. Host-based encryption uses the host of the VM to handle encryption before the data flows through Azure Storage.
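Enabling the feature is done per node pool at creation time. A sketch with hypothetical resource names, assuming the subscription has the encryption-at-host capability enabled:

```azurecli
# Create a cluster whose initial node pool encrypts temp disks and disk caches on the host.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-vm-size Standard_DS2_v2 \
    --enable-encryption-at-host
```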
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
A successful cluster creation using your own kubelet managed identity contains t
}, ```
+### Update an existing cluster using kubelet identity (Preview)
+
+Update kubelet identity on an existing cluster with your existing identities.
+
+#### Install the `aks-preview` Azure CLI extension
+
+You also need the *aks-preview* Azure CLI extension version 0.5.64 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+#### Updating your cluster with kubelet identity (Preview)
+
+Now you can use the following command to update your cluster with your existing identities. Provide the control plane identity resource ID via `--assign-identity` and the kubelet managed identity resource ID via `--assign-kubelet-identity`:
+
+```azurecli-interactive
+az aks update \
+ --resource-group myResourceGroup \
+ --name myManagedCluster \
+ --enable-managed-identity \
+ --assign-identity <identity-id> \
+ --assign-kubelet-identity <kubelet-identity-id>
+```
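If the identities don't exist yet, their resource IDs can be created and captured first. A sketch using the hypothetical identity names from the sample output below:

```azurecli
# Create the user-assigned identities and capture their resource IDs.
identityId=$(az identity create --resource-group myResourceGroup --name myIdentity --query id -o tsv)
kubeletIdentityId=$(az identity create --resource-group myResourceGroup --name myKubeletIdentity --query id -o tsv)
```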
+
+A successful cluster update using your own kubelet managed identity contains the following output:
+
+```output
+ "identity": {
+ "principalId": null,
+ "tenantId": null,
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity": {
+ "clientId": "<client-id>",
+ "principalId": "<principal-id>"
+ }
+ }
+ },
+ "identityProfile": {
+ "kubeletidentity": {
+ "clientId": "<client-id>",
+ "objectId": "<object-id>",
+ "resourceId": "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity"
+ }
+ },
+```
+ ## Next steps * Use [Azure Resource Manager templates][aks-arm-template] to create Managed Identity enabled clusters.
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
All of the tasks that you do on resources using the Azure Resource Manager must
Before calling the APIs that generate the backup and restore, you need to get a token. The following example uses the [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package to retrieve the token.
+> [!IMPORTANT]
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure Active Directory Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
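If you only need a token for ad-hoc scripting against the backup and restore APIs, Azure CLI can issue one without any client library. A minimal sketch:

```azurecli
# Obtain a bearer token for Azure Resource Manager without ADAL or MSAL.
az account get-access-token --resource https://management.azure.com/ --query accessToken -o tsv
```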
+ ```csharp using Microsoft.IdentityModel.Clients.ActiveDirectory; using System;
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-key-vault-references.md
If your vault is configured with [network restrictions](../key-vault/general/ove
1. Make sure the application has outbound networking capabilities configured, as described in [App Service networking features](./networking-features.md) and [Azure Functions networking options](../azure-functions/functions-networking-options.md).
- Linux applications attempting to use private endpoints additionally require that the app be explicitly configured to have all traffic route through the virtual network. This requirement will be removed in a forthcoming update. To set this, use the following CLI command:
+ Linux applications attempting to use private endpoints additionally require that the app be explicitly configured to have all traffic route through the virtual network. This requirement will be removed in a forthcoming update. To set this, use the following Azure CLI or Azure PowerShell command:
+
+ # [Azure CLI](#tab/azure-cli)
```azurecli
- az webapp config set --subscription <sub> -g <rg> -n <appname> --generic-configurations '{"vnetRouteAllEnabled": true}'
+ az webapp config set --subscription <sub> -g MyResourceGroupName -n MyAppName --generic-configurations '{"vnetRouteAllEnabled": true}'
+ ```
+
+ # [Azure PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ Update-AzFunctionAppSetting -Name MyAppName -ResourceGroupName MyResourceGroupName -AppSetting @{vnetRouteAllEnabled = $true}
```
+
+
2. Make sure that the vault's configuration accounts for the network or subnet through which your app will access it.
Once you have granted permissions to the user-assigned identity, follow these st
1. Configure the app to use this identity for Key Vault reference operations by setting the `keyVaultReferenceIdentity` property to the resource ID of the user-assigned identity.
+ # [Azure CLI](#tab/azure-cli)
+
```azurecli-interactive userAssignedIdentityResourceId=$(az identity show -g MyResourceGroupName -n MyUserAssignedIdentityName --query id -o tsv) appResourceId=$(az webapp show -g MyResourceGroupName -n MyAppName --query id -o tsv) az rest --method PATCH --uri "${appResourceId}?api-version=2021-01-01" --body "{'properties':{'keyVaultReferenceIdentity':'${userAssignedIdentityResourceId}'}}" ```
+ # [Azure PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell-interactive
+ $userAssignedIdentityResourceId = Get-AzUserAssignedIdentity -ResourceGroupName MyResourceGroupName -Name MyUserAssignedIdentityName | Select-Object -ExpandProperty Id
+ $appResourceId = Get-AzFunctionApp -ResourceGroupName MyResourceGroupName -Name MyAppName | Select-Object -ExpandProperty Id
+
+ $Path = "{0}?api-version=2021-01-01" -f $appResourceId
+ Invoke-AzRestMethod -Method PATCH -Path $Path -Payload "{'properties':{'keyVaultReferenceIdentity':'$userAssignedIdentityResourceId'}}"
+ ```
+
+
This configuration will apply to all references for the app.
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
The following features are supported for Linux containers:
1. From the left navigation, click **Configuration** > **Path Mappings** > **New Azure Storage Mount**. 1. Configure the storage mount according to the following table. When finished, click **OK**.
+ ::: zone pivot="code-windows"
+ | Setting | Description |
+ |-|-|
+ | **Name** | Name of the mount configuration. Spaces are not allowed. |
+ | **Configuration options** | Select **Basic** if the storage account is not using [private endpoints](../storage/common/storage-private-endpoints.md). Otherwise, select **Advanced**. |
+ | **Storage accounts** | Azure Storage account. It must contain an Azure Files share. |
+ | **Share name** | Files share to mount. |
+ | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. |
+ | **Mount path** | Directory inside your file/blob storage that you want to mount. Only `/mounts/pathname` is supported.|
+ ::: zone-end
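The same mount can be created without the portal. A hedged sketch with hypothetical resource names:

```azurecli
# Mount an Azure Files share into the app at the given path.
az webapp config storage-account add \
    --resource-group MyResourceGroupName \
    --name MyAppName \
    --custom-id MyMountName \
    --storage-type AzureFiles \
    --account-name mystorageaccount \
    --share-name myshare \
    --access-key "<access-key>" \
    --mount-path /mounts/myshare
```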
::: zone pivot="container-windows" | Setting | Description | |-|-|
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/deploy-updates.md
This article describes how to schedule an update deployment and review the proce
Under each scenario, the deployment you create targets that selected machine or server, or in the case of creating a deployment from your Automation account, you can target one or more machines. When you schedule an update deployment from an Azure VM or Azure Arc-enabled server, the steps are the same as deploying from your Automation account, with the following exceptions:
-* The operating system is automatically pre-selected based on the OS of the machine
-* The target machine to update is set to target itself automatically
-* When configuring the schedule, you can specify **Update now**, occurs once, or uses a recurring schedule.
+* The operating system is automatically pre-selected based on the OS of the machine.
+* The target machine to update is set to target itself automatically.
> [!IMPORTANT] > By creating an update deployment, you accept the terms of the Software License Terms (EULA) provided by the company offering updates for their operating system.
To schedule a new update deployment, perform the following steps. Depending on t
9. Select **Schedule settings**. The default start time is 30 minutes after the current time. You can set the start time to any time from 10 minutes in the future.
- > [!NOTE]
- > This option is different if you selected an Azure Arc-enabled server. You can select **Update now** or a start time 20 minutes into the future.
- 10. Use the **Recurrence** to specify if the deployment occurs once or uses a recurring schedule, then select **OK**. 11. In the **Pre-scripts + Post-scripts** region, select the scripts to run before and after your deployment. To learn more, see [Manage pre-scripts and post-scripts](pre-post-scripts.md).
automation Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/query-logs.md
A record with a type of `UpdateRunProgress` is created that provides update depl
| CorrelationId | Unique identifier of the runbook job run for the update. | | EndTime | The time when the synchronization process ended. *This property is currently not used. See TimeGenerated.* | | ErrorResult | Windows Update error code generated if an update fails to install. |
-| InstallationStatus | The possible installation states of an update on the client computer,<br> `NotStarted` - job not triggered yet.<br> `Failed` - job started but failed with an exception.<br> `InProgress` - job in progress.<br> `MaintenanceWindowExceeded` - if execution was remaining but maintenance window interval reached.<br> `Succeeded` - job succeeded.<br> `InstallFailed` - update failed to install successfully.<br> `NotIncluded`<br> `Excluded` |
+| InstallationStatus | The possible installation states of an update on the client computer:<br> `NotStarted` - job not triggered yet.<br> `Failed` - job started but failed with an exception.<br> `InProgress` - job in progress.<br> `MaintenanceWindowExceeded` - if execution was remaining but maintenance window interval reached.<br> `Succeeded` - job succeeded.<br> `InstallFailed` - update failed to install successfully.<br> `NotIncluded` - the corresponding update's classification doesn't match the customer's entries in the input classification list.<br> `Excluded` - the user entered a KB ID in the excluded list. While patching, if a KB ID in the excluded list matches a system-detected update KB ID, the update is marked as excluded. |
| KBID | Knowledge base article ID for the Windows update. | | ManagementGroupName | Name of the Operations Manager management group or Log Analytics workspace. | | OSType | Type of operating system. Values are Windows or Linux. |
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Add support for specifying labels and annotations on the secondary service endpo
- If three replicas, then `REQUIRED_SECONDARIES_TO_COMMIT = 1`. - If one or two replicas, then `REQUIRED_SECONDARIES_TO_COMMIT = 0`.
+In this release, the default value of the readable secondary service type is `Cluster IP`. The secondary service type can be set in the Kubernetes yaml/json at `spec.services.readableSecondaries.type`. In the next release, the default value will be the same as the primary service type.
+ ### User experience improvements Notifications added in Azure Portal if billing data has not been uploaded to Azure recently.
For complete release version information, see [Version log](version-log.md).
- Support for readable secondary replicas: - To set readable secondary replicas use `--readable-secondaries` when you create or update an Arc-enabled SQL Managed Instance deployment.
- - Set `--readable secondaries` to any value between 0 and the number of replicas minus 1.
+ - Set `--readable-secondaries` to any value between 0 and the number of replicas minus 1.
- `--readable-secondaries` only applies to Business Critical tier. - Automatic backups are taken on the primary instance in a Business Critical service tier when there are multiple replicas. When a failover happens, backups move to the new primary. - RWX capable storage class is required for backups, for both General Purpose and Business Critical service tiers.
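As a sketch of the `--readable-secondaries` parameter described above (the instance and namespace names are hypothetical, and the surrounding flags assume the `arcdata` CLI extension):

```azurecli
# Expose one readable secondary on a Business Critical instance with three replicas.
az sql mi-arc update \
    --name sqlmi1 \
    --k8s-namespace arc \
    --use-k8s \
    --readable-secondaries 1
```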
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
eastus AzureArcTest1 microsoft.kubernetes/connectedclusters
-## Connect a cluster with custom certificate
-
-If you need the outbound communication from Arc agents to authenticate via a certificate, pass the certificate during onboarding. In case you need to pass multiple certificates, combine them into a single certificate chain and pass it through.
-
-### [Azure CLI](#tab/azure-cli)
-
-Run the connect command with parameters specified:
-
-```azurecli
-az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-cert <path-to-cert-file>
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-This scenario is not supported via the powershell cmdlet.
--- ## Connect using an outbound proxy server If your cluster is behind an outbound proxy server, requests must be routed via the outbound proxy server.
If your cluster is behind an outbound proxy server, requests must be routed via
+For outbound proxy servers where only a trusted certificate needs to be provided, without the proxy server endpoint inputs, `az connectedk8s connect` can be run with just the `--proxy-cert` input specified. If multiple trusted certificates are expected, provide the combined certificate chain in a single file using the `--proxy-cert` parameter.
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the connect command with the `--proxy-cert` parameter specified:
+
+```azurecli
+az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-cert <path-to-cert-file>
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+The ability to pass in the proxy certificate only without the proxy server endpoint details is not yet supported via PowerShell.
+++ ## Verify cluster connection Run the following command:
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
When you run a Premium plan, you can connect non-HTTP trigger functions to servi
:::image type="content" source="media/functions-networking-options/virtual-network-trigger-toggle.png" alt-text="VNETToggle":::
+### [Azure CLI](#tab/azure-cli)
+ You can also enable virtual network triggers by using the following Azure CLI command: ```azurecli-interactive az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.functionsRuntimeScaleMonitoringEnabled=1 --resource-type Microsoft.Web/sites ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+You can also enable virtual network triggers by using the following Azure PowerShell command:
+
+```azurepowershell-interactive
+$Resource = Get-AzResource -ResourceGroupName <resource_group> -ResourceName <function_app_name>/config/web -ResourceType Microsoft.Web/sites
+$Resource.Properties.functionsRuntimeScaleMonitoringEnabled = $true
+$Resource | Set-AzResource -Force
+```
+++ > [!TIP] > Enabling virtual network triggers may have an impact on the performance of your application since your App Service plan instances will need to monitor your triggers to determine when to scale. This impact is likely to be very small.
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
In the Premium plan, you can have your app always ready on a specified number of
You can configure the number of always ready instances in the Azure portal by selecting your **Function App**, going to the **Platform Features** tab, and selecting the **Scale Out** options. In the function app edit window, always ready instances are specific to that app.
-![Elastic Scale Settings](./media/functions-premium-plan/scale-out.png)
+![Elastic scale settings in the portal](./media/functions-premium-plan/scale-out.png)
# [Azure CLI](#tab/azurecli)
You can also configure always ready instances for an app with the Azure CLI.
```azurecli-interactive az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.minimumElasticInstanceCount=<desired_always_ready_count> --resource-type Microsoft.Web/sites ```+
+# [Azure PowerShell](#tab/azure-powershell)
+
+You can also configure always ready instances for an app with Azure PowerShell.
+
+```azurepowershell-interactive
+$Resource = Get-AzResource -ResourceGroupName <resource_group> -ResourceName <function_app_name>/config/web -ResourceType Microsoft.Web/sites
+$Resource.Properties.minimumElasticInstanceCount = <desired_always_ready_count>
+$Resource | Set-AzResource -Force
+```
+ ### Pre-warmed instances
Consider this example of how always-ready instances and pre-warmed instances wor
As soon as the first trigger comes in, the five always-ready instances become active, and a pre-warmed instance is allocated. The app is now running with six provisioned instances: the five now-active always ready instances, and the sixth pre-warmed and inactive buffer. If the rate of executions continues to increase, the five active instances are eventually used. When the platform decides to scale beyond five instances, it scales into the pre-warmed instance. When that happens, there are now six active instances, and a seventh instance is instantly provisioned and fills the pre-warmed buffer. This sequence of scaling and pre-warming continues until the maximum instance count for the app is reached. No instances are pre-warmed or activated beyond the maximum.
+# [Portal](#tab/portal)
+
+You can configure the number of pre-warmed instances in the Azure portal by selecting the **Scale Out** options under **Settings** of a function app deployed to that plan and then adjusting the **Always Ready Instances** count.
+
+![Pre-warmed instance Settings in the portal](./media/functions-premium-plan/scale-out.png)
+
+# [Azure CLI](#tab/azurecli)
+ You can modify the number of pre-warmed instances for an app using the Azure CLI. ```azurecli-interactive az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.preWarmedInstanceCount=<desired_prewarmed_count> --resource-type Microsoft.Web/sites ```
+# [Azure PowerShell](#tab/azure-powershell)
+
+You can modify the number of pre-warmed instances for an app using Azure PowerShell.
+
+```azurepowershell-interactive
+$Resource = Get-AzResource -ResourceGroupName <resource_group> -ResourceName <function_app_name>/config/web -ResourceType Microsoft.Web/sites
+$Resource.Properties.preWarmedInstanceCount = <desired_prewarmed_count>
+$Resource | Set-AzResource -Force
+```
+++ ### Maximum function app instances In addition to the [plan maximum instance count](#plan-and-sku-settings), you can configure a per-app maximum. The app maximum can be configured using the [app scale limit](./event-driven-scaling.md#limit-scale-out).
When you create the plan, there are two plan size settings: the minimum number o
If your app requires instances beyond the always-ready instances, it can continue to scale out until the number of instances hits the maximum burst limit. You're billed for instances beyond your plan size only while they are running and allocated to you, on a per-second basis. The platform makes its best effort at scaling your app out to the defined maximum limit.
-You can configure the plan size and maximums in the Azure portal by selecting the **Scale Out** options in the plan or a function app deployed to that plan (under **Platform Features**).
+# [Portal](#tab/portal)
+
+You can configure the plan size and maximums in the Azure portal by selecting the **Scale Out** options under **Settings** of a function app deployed to that plan.
+
+![Elastic plan size settings in the portal](./media/functions-premium-plan/scale-out.png)
+
+# [Azure CLI](#tab/azurecli)
You can also increase the maximum burst limit from the Azure CLI:
You can also increase the maximum burst limit from the Azure CLI:
az functionapp plan update -g <resource_group> -n <premium_plan_name> --max-burst <desired_max_burst> ```
+# [Azure PowerShell](#tab/azure-powershell)
+
+You can also increase the maximum burst limit from the Azure PowerShell:
+
+```azurepowershell-interactive
+Update-AzFunctionAppPlan -ResourceGroupName <resource_group> -Name <premium_plan_name> -MaximumWorkerCount <desired_max_burst> -Force
+```
+++ The minimum for every plan will be at least one instance. The actual minimum number of instances will be autoconfigured for you based on the always ready instances requested by apps in the plan. For example, if app A requests five always ready instances, and app B requests two always ready instances in the same plan, the minimum plan size will be calculated as five. App A will be running on all five, and app B will only be running on two.
The minimum for every plan will be at least one instance. The actual minimum num
In most circumstances, this autocalculated minimum is sufficient. However, scaling beyond the minimum occurs at a best effort. It's possible, though unlikely, that at a specific time scale-out could be delayed if additional instances are unavailable. By setting a minimum higher than the autocalculated minimum, you reserve instances in advance of scale-out.
+# [Portal](#tab/portal)
+
+You can configure the minimum instances in the Azure portal by selecting the **Scale Out** options under **Settings** of a function app deployed to that plan.
+
+![Minimum instance settings in the portal](./media/functions-premium-plan/scale-out.png)
+
+# [Azure CLI](#tab/azurecli)
+ Increasing the calculated minimum for a plan can be done using the Azure CLI. ```azurecli-interactive az functionapp plan update -g <resource_group> -n <premium_plan_name> --min-instances <desired_min_instances> ```
+# [Azure PowerShell](#tab/azure-powershell)
+
+Increasing the calculated minimum for a plan can be done using Azure PowerShell.
+
+```azurepowershell-interactive
+Update-AzFunctionAppPlan -ResourceGroupName <resource_group> -Name <premium_plan_name> -MinimumWorkerCount <desired_min_instances> -Force
+```
+++ ### Available instance SKUs When creating or scaling your plan, you can choose between three instance sizes. You will be billed for the total number of cores and memory provisioned, per second that each instance is allocated to you. Your app can automatically scale out to multiple instances as needed.
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
Last updated 03/31/2022
# Install Log Analytics agent on Linux computers
-The Log Analytics agents are on a **deprecation path** and will no longer be supported after **August 31, 2024**. If you use the Log Analytics agents to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
-- This article provides details on installing the Log Analytics agent on Linux computers using the following methods: * [Install the agent for Linux using a wrapper-script](#install-the-agent-using-wrapper-script) hosted on GitHub. This is the recommended method to install and upgrade the agent when the computer has connectivity with the Internet, directly or through a proxy server. * [Manually download and install](#install-the-agent-manually) the agent. This is required when the Linux computer does not have access to the Internet and will be communicating with Azure Monitor or Azure Automation through the [Log Analytics gateway](./gateway.md).
+The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
+ >[!IMPORTANT]
-> The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
+> The Log Analytics agents are on a **deprecation path** and will no longer be supported after **August 31, 2024**. If you use the Log Analytics agents to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
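For reference, the wrapper-script onboarding mentioned above typically reduces to a single command. A sketch with placeholder workspace values; verify the script URL against the OMS-Agent-for-Linux GitHub repository before running:

```bash
# Download the onboarding script and connect the agent to a Log Analytics workspace.
wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh \
  && sh onboard_agent.sh -w <workspace-id> -s <workspace-key>
```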
azure-monitor Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md
Last updated 03/31/2022
# Install Log Analytics agent on Windows computers
-The Log Analytics agents are on a **deprecation path** and will no longer be supported after **August 31, 2024**. If you use the Log Analytics agents to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
- This article provides details on installing the Log Analytics agent on Windows computers using the following methods: * Manual installation using the [setup wizard](#install-agent-using-setup-wizard) or [command line](#install-agent-using-command-line). * [Azure Automation Desired State Configuration (DSC)](#install-agent-using-dsc-in-azure-automation).
+The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
+ >[!IMPORTANT]
-> The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
+>The Log Analytics agents are on a **deprecation path** and will no longer be supported after **August 31, 2024**. If you use the Log Analytics agents to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
> [!NOTE] > Installing the Log Analytics agent will typically not require you to restart the machine.
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
Several other features don't have a direct cost, but you instead pay for the ing
| Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. | | Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). | | Alerts | Charged based on the type and number of [signals](alerts/alerts-overview.md#what-you-can-alert-on) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log alerts](alerts/alerts-unified-log.md) configured for [at scale monitoring](alerts/alerts-unified-log.md#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
-| Web tests | There is a cost for [multi-step web tests](app/availability-multistep.md) in Application Insights, but this feature has been deprecated.
+| Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated.
## Data transfer charges Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate, although data sent to a different region via [Diagnostic Settings](essentials/diagnostic-settings.md) does not incur data transfer charges. Inbound data transfer is free. Data transfer charges are typically very small compared to the costs for data ingestion and retention. Controlling costs for Log Analytics should focus on your ingested data volume.
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 04/28/2022 Last updated : 05/05/2022
Azure Backup has added the Cross Region Restore feature to strengthen data avail
| Backup Management type | Supported | Supported Regions | | - | | -- |
-| Azure VM | Supported for Azure VMs (including encrypted Azure VMs) with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions, except for UG IOWA. |
-| SQL /SAP HANA | Available | Available in all Azure public regions and sovereign regions, except for France Central and UG IOWA. |
+| Azure VM | Supported for Azure VMs (including encrypted Azure VMs) with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions, except for UG IOWA and UG Virginia. |
+| SQL /SAP HANA | Available | Available in all Azure public regions and sovereign regions, except for France Central, UG IOWA, and UG Virginia. |
| MARS Agent/On premises | No | N/A | | AFS (Azure file shares) | No | N/A |
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
To produce a good voice model, create the recordings in a quiet room with a high
### Audio files
-Each audio file should contain a single utterance (a single sentence or a single turn of a dialog system), less than 15 seconds long. All files must be in the same spoken language. Multi-language custom Text-to-Speech voices aren't supported, with the exception of the Chinese-English bi-lingual. Each audio file must have a unique numeric filename with the filename extension .wav.
+Each audio file should contain a single utterance (a single sentence or a single turn of a dialog system), less than 15 seconds long. All files must be in the same spoken language. Multi-language custom Text-to-Speech voices aren't supported, with the exception of the Chinese-English bi-lingual. Each audio file must have a unique filename with the filename extension .wav.
Follow these guidelines when preparing audio. | Property | Value | | -- | -- | | File format | RIFF (.wav), grouped into a .zip file |
-| File name | Numeric, with .wav extension. No duplicate file names allowed. |
+| File name | File name characters supported by Windows OS, with .wav extension.<br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. | | Sample format | PCM, at least 16-bit | | Audio length | Shorter than 15 seconds |
Follow these guidelines when preparing audio for segmentation.
| Property | Value | | -- | -- | | File format | RIFF (.wav) or .mp3, grouped into a .zip file |
-| File name | ASCII and Unicode characters supported. No duplicate names allowed. |
+| File name | File name characters supported by Windows OS, with .wav or .mp3 extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. | | Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate| | Audio length | Longer than 20 seconds |
Follow these guidelines when preparing audio.
| Property | Value | | -- | -- | | File format | RIFF (.wav) or .mp3, grouped into a .zip file |
-| File name | ASCII and Unicode characters supported. No duplicate name allowed. |
+| File name | File name characters supported by Windows OS, with .wav or .mp3 extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. | | Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate| | Audio length | No limit |
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following neural voices are in public preview.
| English (United Kingdom) | `en-GB` | Female | `en-GB-BellaNeural` <sup>New</sup> | General | | English (United Kingdom) | `en-GB` | Female | `en-GB-HollieNeural` <sup>New</sup> | General | | English (United Kingdom) | `en-GB` | Female | `en-GB-OliviaNeural` <sup>New</sup> | General |
-| English (United Kingdom) | `en-GB` | Girl | `en-GB-MaisieNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-MaisieNeural` <sup>New</sup> | General, child voice |
| English (United Kingdom) | `en-GB` | Male | `en-GB-AlfieNeural` <sup>New</sup> | General | | English (United Kingdom) | `en-GB` | Male | `en-GB-ElliotNeural` <sup>New</sup> | General | | English (United Kingdom) | `en-GB` | Male | `en-GB-EthanNeural` <sup>New</sup> | General | | English (United Kingdom) | `en-GB` | Male | `en-GB-NoahNeural` <sup>New</sup> | General | | English (United Kingdom) | `en-GB` | Male | `en-GB-OliverNeural` <sup>New</sup> | General | | English (United Kingdom) | `en-GB` | Male | `en-GB-ThomasNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Male | `en-US-DavisNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Female | `en-US-JaneNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-JasonNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Female | `en-US-NancyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-TonyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| French (France) | `fr-FR` | Female | `fr-FR-BrigitteNeural` <sup>New</sup> | General | | French (France) | `fr-FR` | Female | `fr-FR-CelesteNeural` <sup>New</sup> | General | | French (France) | `fr-FR` | Female | `fr-FR-CoralieNeural` <sup>New</sup> | General | | French (France) | `fr-FR` | Female | `fr-FR-JacquelineNeural` <sup>New</sup> | General | | French (France) | `fr-FR` | Female | `fr-FR-JosephineNeural` <sup>New</sup> | General | | French (France) | `fr-FR` | Female | `fr-FR-YvetteNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Girl | `fr-FR-EloiseNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Female | `fr-FR-EloiseNeural` <sup>New</sup> | General, child voice |
| French (France) | `fr-FR` | Male | `fr-FR-AlainNeural` <sup>New</sup> | General | | French (France) | `fr-FR` | Male | `fr-FR-ClaudeNeural` <sup>New</sup> | General | | French (France) | `fr-FR` | Male | `fr-FR-JeromeNeural` <sup>New</sup> | General |
The following neural voices are in public preview.
| German (Germany) | `de-DE` | Female | `de-DE-LouisaNeural` <sup>New</sup> | General | | German (Germany) | `de-DE` | Female | `de-DE-MajaNeural` <sup>New</sup> | General | | German (Germany) | `de-DE` | Female | `de-DE-TanjaNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Girl | `de-DE-GiselaNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Female | `de-DE-GiselaNeural` <sup>New</sup> | General, child voice |
| German (Germany) | `de-DE` | Male | `de-DE-BerndNeural` <sup>New</sup> | General | | German (Germany) | `de-DE` | Male | `de-DE-ChristophNeural` <sup>New</sup> | General | | German (Germany) | `de-DE` | Male | `de-DE-KasperNeural` <sup>New</sup> | General |
The following neural voices are in public preview.
In some cases, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant. With roles, the same voice can act as a different age and gender. > [!NOTE]
-> Voices and styles in public preview are only available in three service [regions](regions.md#prebuilt-neural-voices): East US, West Europe, and Southeast Asia.
-
+> The angry, cheerful, excited, friendly, hopeful, sad, shouting, terrified, unfriendly, and whispering styles for DavisNeural, JaneNeural, JasonNeural, NancyNeural, and TonyNeural are only available in three service regions: East US, West Europe, and Southeast Asia.
To learn how you can configure and adjust neural voice styles and roles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).

Use the following table to determine supported styles and roles for each neural voice.

|Voice|Styles|Style degree|Roles|
|--|--|--|--|
-|en-US-AriaNeural|`chat`, `cheerful`, `customerservice`, `empathetic`, `narration-professional`, `newscast-casual`, `newscast-formal`|||
-|en-US-GuyNeural|`newscast`|||
-|en-US-JennyNeural|`assistant`, `chat`,`customerservice`, `newscast`|||
-|en-US-SaraNeural|`angry`, `cheerful`, `sad`|||
+|en-US-AriaNeural|`angry`, `chat`, `cheerful`, `customerservice`, `empathetic`, `excited`, `friendly`, `hopeful`, `narration-professional`, `newscast-casual`, `newscast-formal`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-DavisNeural|`angry`, `chat`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-GuyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-JaneNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-JasonNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-JennyNeural|`angry`, `assistant`, `chat`, `cheerful`, `customerservice`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-NancyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-SaraNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-TonyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
|fr-FR-DeniseNeural|`cheerful` <sup>Public preview</sup>, `sad` <sup>Public preview</sup>|||
|ja-JP-NanamiNeural|`chat`, `cheerful`, `customerservice`|||
|pt-BR-FranciscaNeural|`calm`|||
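As a quick illustration of applying one of these styles, here's a minimal SSML sketch; the voice and style come from the table above, the `<speak>` attributes follow the standard SSML namespace declarations, and the sample text is illustrative:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
    <voice name="en-US-DavisNeural">
        <!-- "chat" is one of the styles listed for this voice in the table above. -->
        <mstts:express-as style="chat">
            What can I help you with today?
        </mstts:express-as>
    </voice>
</speak>
```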
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
This SSML snippet illustrates how the `<mstts:express-as>` element is used to change the speaking style.
```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
- <voice name="en-US-AriaNeural">
+ <voice name="en-US-JennyNeural">
        <mstts:express-as style="cheerful">
            That'd be just amazing!
        </mstts:express-as>
The following table has descriptions of each supported style.
|`style="disgruntled"`|Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt.| |`style="embarrassed"`|Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable.| |`style="empathetic"`|Expresses a sense of caring and understanding.|
-|`style="envious"`|Express a tone of admiration when you desire something that someone else has.|
+|`style="envious"`|Expresses a tone of admiration when you desire something that someone else has.|
+|`style="excited"`|Expresses an upbeat and hopeful tone. It sounds like something great is happening and the speaker is really happy about that.|
|`style="fearful"`|Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tension and unease.|
+|`style="friendly"`|Expresses a pleasant, inviting, and warm tone. It sounds sincere and caring.|
|`style="gentle"`|Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy.|
+|`style="hopeful"`|Expresses a warm and yearning tone. It sounds like something good will happen to the speaker.|
|`style="lyrical"`|Expresses emotions in a melodic and sentimental way.| |`style="narration-professional"`|Expresses a professional, objective tone for content reading.| |`style="narration-relaxed"`|Express a soothing and melodious tone for content reading.|
The following table has descriptions of each supported style.
|`style="newscast-formal"`|Expresses a formal, confident, and authoritative tone for news delivery.| |`style="sad"`|Expresses a sorrowful tone.| |`style="serious"`|Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence.|
+|`style="shouting"`|Speaks like from a far distant or outside and to make self be clearly heard|
+|`style="whispering"`|Speaks very softly and make a quiet and gentle sound|
+|`style="terrified"`|Expresses a very scared tone, with faster pace and a shakier voice. It sounds like the speaker is in an unsteady and frantic status.|
+|`style="unfriendly"`|Expresses a cold and indifferent tone.|
### Style degree
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/concepts.md
An Offer is extended by Job Router to a worker to handle a particular job when it finds a match.
A real-world example is the ringing of an agent in a call center.
-### Offer flow
+### Offer acceptance flow
1. When Job Router finds a matching Worker for a Job, it creates an Offer and sends an [OfferIssued Event][offer_issued_event] via [Event Grid][subscribe_events].
-2. The Offer is accepted via the Job Router API.
-3. Job Router sends an [OfferAccepted Event][offer_accepted_event].
+1. The Offer is accepted via the Job Router API.
+1. The job is removed from the queue and assigned to the worker.
+1. Job Router sends an [OfferAccepted Event][offer_accepted_event].
+1. Any existing offers to other workers for this same job will be revoked and an [OfferRevoked Event][offer_revoked_event] will be sent.
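+
+The offer events share a common Event Grid envelope. As a hedged sketch, an *OfferIssued* event might look like the following; the envelope fields follow the standard Event Grid schema, while the `data` payload field names are assumptions to verify against the [subscribe to events][subscribe_events] how-to:
+
+```json
+{
+    "id": "<event-id>",
+    "topic": "<communication-services-resource-id>",
+    "subject": "worker/<worker-id>/job/<job-id>",
+    "eventType": "Microsoft.Communication.RouterWorkerOfferIssued",
+    "eventTime": "2022-05-06T12:00:00Z",
+    "dataVersion": "1.0",
+    "data": { // field names below are assumptions; see the events reference
+        "workerId": "<worker-id>",
+        "jobId": "<job-id>",
+        "offerId": "<offer-id>"
+    }
+}
+```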
- :::image type="content" source="../media/router/acs-router-accept-offer.png" alt-text="Diagram showing Communication Services' Job Router accept offer.":::
+### Offer decline flow
+
+1. When Job Router finds a matching Worker for a Job, it creates an Offer and sends an [OfferIssued Event][offer_issued_event] via [Event Grid][subscribe_events].
+1. The Offer is declined via the Job Router API.
+1. The Offer is removed from the worker, opening up capacity for another Offer for a different job.
+1. Job Router sends an [OfferDeclined Event][offer_declined_event].
+1. Job Router won't reoffer the declined Offer to the worker unless they deregister and re-register.
+
+### Offer expiry flow
+
+1. When Job Router finds a matching Worker for a Job, it creates an Offer and sends an [OfferIssued Event][offer_issued_event] via [Event Grid][subscribe_events].
+1. The Offer is not accepted or declined within the TTL period defined by the Distribution Policy.
+1. Job Router will expire the Offer and an [OfferExpired Event][offer_expired_event] will be sent.
+1. The worker is considered unavailable and will be automatically deregistered.
+1. A [WorkerDeregistered Event][worker_deregistered_event] will be sent.
## Distribution Policy
An exception policy controls the behavior of a Job based on a trigger and executes an action.
[subscribe_events]: ../../how-tos/router-sdk/subscribe-events.md [worker_registered_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerregistered
+[worker_deregistered_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerderegistered
[job_classified_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobclassified [offer_issued_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferissued [offer_accepted_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferaccepted
+[offer_declined_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferdeclined
+[offer_expired_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferexpired
+[offer_revoked_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferrevoked
[worker-scoring]: ../../how-tos/router-sdk/customize-worker-scoring.md
container-apps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication.md
Container Apps uses [federated identity](https://en.wikipedia.org/wiki/Federated
| - | - | - |
| [Microsoft Identity Platform](../active-directory/fundamentals/active-directory-whatis.md) | `/.auth/login/aad` | [Microsoft Identity Platform](authentication-azure-active-directory.md) |
| [Facebook](https://developers.facebook.com/docs/facebook-login) | `/.auth/login/facebook` | [Facebook](authentication-facebook.md) |
-| [GitHub](https://docs.github.com/en/developers/apps/building-oauth-apps/authorizing-oauth-apps) | `/.auth/login/github` | [Google](authentication-github.md) |
+| [GitHub](https://docs.github.com/en/developers/apps/building-oauth-apps/authorizing-oauth-apps) | `/.auth/login/github` | [GitHub](authentication-github.md) |
| [Google](https://developers.google.com/identity/choose-auth) | `/.auth/login/google` | [Google](authentication-google.md) |
| [Twitter](https://developer.twitter.com/en/docs/basics/authentication) | `/.auth/login/twitter` | [Twitter](authentication-twitter.md) |
| Any [OpenID Connect](https://openid.net/connect/) provider | `/.auth/login/<providerName>` | [OpenID Connect](authentication-openid.md) |
In a client-directed sign-in, the application signs in the user to the identity provider using a provider-specific SDK.
To validate the provider token, the container app must first be configured with the desired provider. At runtime, after you retrieve the authentication token from your provider, post the token to `/.auth/login/<provider>` for validation. For example:

```console
-POST https://<appname>.azurewebsites.net/.auth/login/aad HTTP/1.1
+POST https://<hostname>.azurecontainerapps.io/.auth/login/aad HTTP/1.1
Content-Type: application/json

{"id_token":"<token>","access_token":"<token>"}
```
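If the token is valid, the service responds with a session token you can present on subsequent requests. As a hedged sketch of the response shape (the field names below mirror the App Service authentication feature this capability is built on; verify against the Container Apps reference):

```json
{
    "authenticationToken": "<session-token>",
    "user": { // field names mirror App Service auth; treat as an assumption
        "userId": "<user-id>"
    }
}
```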
container-apps Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/billing.md
The following resources are free during each calendar month, per subscription:
This article describes how to calculate the cost of running your container app. For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/).

> [!NOTE]
-> If you use Container Apps with [your own virtual network](vnet-custom.md#managed-resources) or your apps utilize other Azure resources, additional charges may apply.
+> If you use Container Apps with [your own virtual network](networking.md#managed-resources) or your apps utilize other Azure resources, additional charges may apply.
## Resource consumption charges
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
+
+ Title: Securing a custom VNET in Azure Container Apps Preview
+description: Firewall settings to secure a custom VNET in Azure Container Apps Preview
+ Last updated: 4/15/2022
+# Securing a custom VNET in Azure Container Apps
+
+The firewall settings and Network Security Groups (NSGs) needed to configure virtual networks closely resemble the settings required by Kubernetes.
+
+Some outbound dependencies of Azure Kubernetes Service (AKS) clusters rely exclusively on fully qualified domain names (FQDN), therefore securing an AKS cluster purely with NSGs isn't possible. Refer to [Control egress traffic for cluster nodes in Azure Kubernetes Service](/azure/aks/limit-egress-traffic) for details.
+
+* You can lock down a network via NSGs with more restrictive rules than the default NSG rules.
+* To fully secure a cluster, use a combination of NSGs and a firewall.
+
+## NSG allow rules
+
+The following tables describe how to configure a collection of NSG allow rules.
+
+### Inbound
+
+| Protocol | Port | Source | Description |
+|--|--|--|--|
+| Any | \* | Control plane subnet address space | Allow communication between IPs in the control plane subnet. This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`. |
+| Any | \* | App subnet address space | Allow communication between nodes in the app subnet. This address is passed as a parameter when you create an environment. For example, `10.0.8.0/21`. |
+
+### Outbound with ServiceTags
+
+| Protocol | Port | ServiceTag | Description |
+|--|--|--|--|
+| UDP | `1194` | `AzureCloud.<REGION>` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. |
+| TCP | `9000` | `AzureCloud.<REGION>` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. |
+| TCP | `443` | `AzureMonitor` | Allows outbound calls to Azure Monitor. |
+
+### Outbound with wild card IP rules
+
+As the following rules require allowing all IPs, use a firewall solution to lock outbound traffic down to specific FQDNs.
+
+| Protocol | Port | IP | Description |
+|--|--|--|--|
+| TCP | `443` | \* | Allowing all outbound traffic on port `443` provides a way to permit all FQDN-based outbound dependencies that don't have a static IP. |
+| UDP | `123` | \* | NTP server. If you're using a firewall, allowlist `ntp.ubuntu.com:123`. |
+| Any | \* | Control plane subnet address space | Allow communication between IPs in the control plane subnet. This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`. |
+| Any | \* | App subnet address space | Allow communication between nodes in the App subnet. This address is passed as a parameter when you create an environment. For example, `10.0.8.0/21`. |
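+
+As a concrete illustration, here's a hedged Azure CLI sketch that creates the Azure Monitor outbound rule from the table above; the resource group, NSG name, and priority are placeholders to adapt to your environment:
+
+```azurecli
+az network nsg rule create \
+  --resource-group my-resource-group \
+  --nsg-name my-containerapps-nsg \
+  --name allow-azure-monitor-outbound \
+  --priority 200 \
+  --direction Outbound \
+  --access Allow \
+  --protocol Tcp \
+  --destination-port-ranges 443 \
+  --destination-address-prefixes AzureMonitor
+```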
+
+## Firewall configuration
+
+### Outbound FQDN dependencies
+
+| FQDN | Protocol | Port | Description |
+|--|--|--|--|
+| `*.hcp.<REGION>.azmk8s.io` | HTTPS | `443` | Required for internal AKS secure connection between nodes and control plane. |
+| `mcr.microsoft.com` | HTTPS | `443` | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images and charts (for example, coreDNS). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
+| `*.data.mcr.microsoft.com` | HTTPS | `443` | Required for MCR storage backed by the Azure content delivery network (CDN). |
+| `management.azure.com` | HTTPS | `443` | Required for Kubernetes operations against the Azure API. |
+| `login.microsoftonline.com` | HTTPS | `443` | Required for Azure Active Directory authentication. |
+| `packages.microsoft.com` | HTTPS | `443` | This address is the Microsoft packages repository used for cached apt-get operations. Example packages include Moby, PowerShell, and Azure CLI. |
+| `acs-mirror.azureedge.net` | HTTPS | `443` | This address is for the repository required to download and install required binaries like `kubenet` and Azure Container Networking Interface. |
+| `dc.services.visualstudio.com` | HTTPS | `443` | This endpoint is used for metrics and monitoring using Azure Monitor. |
+| `*.ods.opinsights.azure.com` | HTTPS | `443` | This endpoint is used by Azure Monitor for ingesting log analytics data. |
+| `*.oms.opinsights.azure.com` | HTTPS | `443` | This endpoint is used by `omsagent` to authenticate with the Log Analytics service. |
+| `*.monitoring.azure.com` | HTTPS | `443` | This endpoint is used to send metrics data to Azure Monitor. |
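+
+To translate a row of this table into firewall configuration, here's a hedged sketch using Azure Firewall; the `az network firewall` commands require the `azure-firewall` CLI extension, and the firewall name, collection name, priority, and source addresses are placeholders:
+
+```azurecli
+az network firewall application-rule create \
+  --resource-group my-resource-group \
+  --firewall-name my-firewall \
+  --collection-name containerapps-fqdns \
+  --name allow-mcr \
+  --priority 200 \
+  --action Allow \
+  --protocols Https=443 \
+  --source-addresses 10.0.0.0/16 \
+  --target-fqdns mcr.microsoft.com "*.data.mcr.microsoft.com"
+```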
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
Container Apps support the following probes:
- [Startup](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes): Delay reporting on a liveness or readiness state for slower apps with a startup probe. - [Readiness](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes): Signals that a replica is ready to accept traffic.
-For a full listing of the specification supported in Azure Container Apps, refer to [Azure Rest API specs](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/app/resource-manager/Microsoft.App/stable/2022-03-01/CommonDefinitions.json#L119-L236).
+For a full listing of the specification supported in Azure Container Apps, refer to [Azure REST API specs](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/app/resource-manager/Microsoft.App/stable/2022-03-01/CommonDefinitions.json#L119-L236).
## HTTP probes
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
+
+ Title: Networking architecture in Azure Container Apps
+description: Learn how to configure virtual networks in Azure Container Apps
+ Last updated: 05/06/2022
+# Networking architecture in Azure Container Apps
+
+Azure Container Apps run in the context of an [environment](environment.md), which is supported by a virtual network (VNET). When you create an environment, you can provide a custom VNET; otherwise, a VNET is automatically generated for you. Generated VNETs are inaccessible to you because they're created in Microsoft's tenant. To take full control over your VNET, provide an existing VNET to Container Apps as you create your environment.
+
+The following articles feature step-by-step instructions for creating Container Apps environments with different accessibility levels.
+
+| Accessibility level | Description |
+|--|--|
+| [External](vnet-custom.md) | Container Apps environments deployed as external resources are available for public requests. External environments are deployed with a virtual IP on an external, public facing IP address. |
+| [Internal](vnet-custom-internal.md) | When set to internal, the environment has no public endpoint. Internal environments are deployed with a virtual IP (VIP) mapped to an internal IP address. The internal endpoint is an Azure internal load balancer (ILB) and IP addresses are issued from the custom VNET's list of private IP addresses. |
+
+## Custom VNET configuration
+
+As you create a custom VNET, keep in mind the following situations:
+
+- If you want your container app to restrict all outside access, create an [internal Container Apps environment](vnet-custom-internal.md).
+
+- When you provide your own VNET, the network needs a single subnet.
+
+- Network addresses are assigned from a subnet range you define as the environment is created.
+
+ - You can define the subnet range used by the Container Apps environment.
+ - Once the environment is created, the subnet range is immutable.
+ - A single load balancer and single Kubernetes service are associated with each container apps environment.
+ - Each [revision](revisions.md) is assigned an IP address in the subnet.
+ - You can restrict inbound requests to the environment exclusively to the VNET by deploying the environment as [internal](vnet-custom-internal.md).
+
+As you begin to design the network around your container app, refer to [Plan virtual networks](/azure/virtual-network/virtual-network-vnet-plan-design-arm) for important concerns surrounding running virtual networks on Azure.
++
+<!--
+https://docs.microsoft.com/azure/azure-functions/functions-networking-options
+
+https://techcommunity.microsoft.com/t5/apps-on-azure-blog/azure-container-apps-virtual-network-integration/ba-p/3096932
+-->
+
+### HTTP edge proxy behavior
+
+Azure Container Apps uses [Envoy proxy](https://www.envoyproxy.io/) as an edge HTTP proxy. TLS is terminated at the edge, and requests are routed to the correct application based on their traffic split rules.
+
+HTTP applications scale based on the number of HTTP requests and connections. Envoy routes internal traffic inside clusters. Downstream connections support HTTP/1.1 and HTTP/2, and Envoy automatically detects and upgrades the connection if the client requests an upgrade. The upstream connection is defined by setting the `transport` property on the [ingress](azure-resource-manager-api-spec.md#propertiesconfiguration) object.
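+
+For example, to force HTTP/2 for the upstream connection, set the `transport` property in your ingress configuration. Here's a minimal sketch, assuming your app listens on port 8080 (the `external` and `targetPort` values are illustrative):
+
+```json
+"ingress": {
+    "external": true,
+    "targetPort": 8080,
+    "transport": "http2"
+}
+```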
+
+### Ingress configuration
+
+Under the [ingress](azure-resource-manager-api-spec.md#propertiesconfiguration) section, you can configure the following settings:
+
+- **Accessibility level**: You can set your container app as externally or internally accessible in the environment. An environment variable `CONTAINER_APP_ENV_DNS_SUFFIX` is used to automatically resolve the FQDN suffix for your environment.
+
+- **Traffic split rules**: You can define traffic split rules between different revisions of your application.
+
+### Scenarios
+
+The following scenarios describe configuration settings for common use cases.
+
+#### Rapid iteration
+
+In situations where you're frequently iterating development of your container app, you can set traffic rules to always shift all traffic to the latest deployed revision.
+
+The following example routes all traffic to the latest deployed revision:
+
+```json
+"ingress": {
+ "traffic": [
+ {
+ "latestRevision": true,
+ "weight": 100
+ }
+ ]
+}
+```
+
+Once you're satisfied with the latest revision, you can lock traffic to that revision by updating the `ingress` settings to:
+
+```json
+"ingress": {
+ "traffic": [
+ {
+ "latestRevision": false, // optional
+ "revisionName": "myapp--knowngoodrevision",
+ "weight": 100
+ }
+ ]
+}
+```
+
+#### Update existing revision
+
+Consider a situation where you have a known good revision that's serving 100% of your traffic, but you want to issue an update to your app. You can deploy and test new revisions using their direct endpoints without affecting the main revision serving the app.
+
+Once you're satisfied with the updated revision, you can shift a portion of traffic to the new revision for testing and verification.
+
+The following configuration demonstrates how to move 20% of traffic over to the updated revision:
+
+```json
+"ingress": {
+ "traffic": [
+ {
+ "revisionName": "myapp--knowngoodrevision",
+ "weight": 80
+ },
+ {
+ "revisionName": "myapp--newerrevision",
+ "weight": 20
+ }
+ ]
+}
+```
+
+#### Staging microservices
+
+When building microservices, you may want to maintain production and staging endpoints for the same app. Use labels to ensure that traffic doesn't switch between different revisions.
+
+The following example demonstrates how to apply labels to different revisions.
+
+```json
+"ingress": {
+ "traffic": [
+ {
+ "revisionName": "myapp--knowngoodrevision",
+ "weight": 100
+ },
+ {
+ "revisionName": "myapp--98fdgt",
+ "weight": 0,
+ "label": "staging"
+ }
+ ]
+}
+```
+
+## Portal dependencies
+
+For every app in Azure Container Apps, there are two URLs.
+
+The first URL is generated by Container Apps and is used to access your app. See the *Application Url* in the *Overview* window of your container app in the Azure portal for the fully qualified domain name (FQDN) of your container app.
+
+The second URL grants access to the log streaming service and the console. You may need to add `https://azurecontainerapps.dev/` to the allowlist of your firewall or proxy.
+
+## Ports and IP addresses
+
+The VNET associated with a Container Apps environment uses a single subnet with 255 addresses.
+
+The following ports are exposed for inbound connections.
+
+| Use | Port(s) |
+|--|--|
+| HTTP/HTTPS | 80, 443 |
+
+Container Apps reserves 60 IPs in your VNET, and the amount may grow as your container environment scales.
+
+IP addresses are broken down into the following types:
+
+| Type | Description |
+|--|--|
+| Public inbound IP address | Used for app traffic in an external deployment, and management traffic in both internal and external deployments. |
+| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. |
+| Internal load balancer IP address | This address only exists in an internal deployment. |
+| App-assigned IP-based TLS/SSL addresses | These addresses are only possible with an external deployment, and when IP-based TLS/SSL binding is configured. |
+
+## Restrictions
+
+Subnet address ranges can't overlap with the following reserved ranges:
+
+- 169.254.0.0/16
+- 172.30.0.0/16
+- 172.31.0.0/16
+- 192.0.2.0/24
+
+## Subnet
+
+As a Container Apps environment is created, you provide the resource ID for a single subnet.
+
+If you're using the CLI, the parameter to define the subnet resource ID is `infrastructure-subnet-resource-id`. The subnet hosts infrastructure components and user app containers.
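+
+As a minimal sketch, the parameter is passed like this; the variable names are placeholders, and other required parameters, such as the Log Analytics settings, are omitted for brevity:
+
+```azurecli
+az containerapp env create \
+  --name $CONTAINERAPPS_ENVIRONMENT \
+  --resource-group $RESOURCE_GROUP \
+  --location "$LOCATION" \
+  --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET
+```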
+
+If you're using the Azure CLI and the [platformReservedCidr](vnet-custom-internal.md#networking-parameters) range is defined, the subnet must not overlap with the IP range defined in `platformReservedCidr`.
+
+## Routes
+
+There's no forced tunneling in Container Apps routes.
+
+## Managed resources
+
+When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. As the load balancer is created in your subscription, there are extra costs associated with deploying the service to a custom virtual network.
+
+## Next steps
+
+- [Deploy with an external environment](vnet-custom.md)
+- [Deploy with an internal environment](vnet-custom-internal.md)
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
+
+ Title: Provide an internal virtual network to an Azure Container Apps Preview environment
+description: Learn how to provide an internal VNET to an Azure Container Apps environment.
+ Last updated: 2/18/2022
+zone_pivot_groups: azure-cli-or-portal
++
+# Provide a virtual network to an internal Azure Container Apps (Preview) environment
+
+The following example shows you how to create a Container Apps environment in an existing virtual network.
+
+> [!IMPORTANT]
+> In order to ensure the environment deployment within your custom VNET is successful, configure your VNET with an "allow-all" configuration by default. The full list of traffic dependencies required to configure the VNET as "deny-all" is not yet available. Refer to [Known issues for public preview](https://github.com/microsoft/azure-container-apps/wiki/Known-Issues-for-public-preview) for additional details.
++
+<!-- Create -->
+
+7. Select the **Networking** tab to create a VNET.
+8. Select **Yes** next to *Use your own virtual network*.
+9. Next to the *Virtual network* box, select the **Create new** link.
+10. Enter **my-custom-vnet** in the name box.
+11. Select the **OK** button.
+12. Next to the *Control plane subnet* box, select the **Create new** link and enter the following values:
+
+ | Setting | Value |
+ |||
+ | Subnet name | Enter **my-control-plane-vnet**. |
+ | Virtual Network Address Block | Keep the default values. |
+ | Subnet Address Block | Keep the default values. |
+
+13. Select the **OK** button.
+14. Next to the *Apps subnet* box, select the **Create new** link and enter the following values:
+
+ | Setting | Value |
+ |||
+ | Subnet name | Enter **my-apps-vnet**. |
+ | Virtual Network Address Block | Keep the default values. |
+ | Subnet Address Block | Keep the default values. |
+
+15. Under *Virtual IP*, select **Internal**.
+16. Select **Create**.
+
+<!-- Deploy -->
+++
+## Prerequisites
+
+- Azure account with an active subscription.
+ - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
+- Install the [Azure CLI](/cli/azure/install-azure-cli) version 2.28.0 or higher.
++
+Next, declare a variable to hold the VNET name.
+
+# [Bash](#tab/bash)
+
+```bash
+VNET_NAME="my-custom-vnet"
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$VNET_NAME="my-custom-vnet"
+```
+++
+Now create an instance of the virtual network to associate with the Container Apps environment. The virtual network must have two subnets available for the container apps instance.
+
+> [!NOTE]
+> You can use an existing virtual network, but two empty subnets are required to use with Container Apps.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az network vnet create \
+ --resource-group $RESOURCE_GROUP \
+ --name $VNET_NAME \
+ --location $LOCATION \
+ --address-prefix 10.0.0.0/16
+```
+
+```azurecli
+az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+ --name control-plane \
+ --address-prefixes 10.0.0.0/21
+```
+
+```azurecli
+az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+ --name applications \
+ --address-prefixes 10.0.8.0/21
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az network vnet create `
+ --resource-group $RESOURCE_GROUP `
+ --name $VNET_NAME `
+ --location $LOCATION `
+ --address-prefix 10.0.0.0/16
+```
+
+```powershell
+az network vnet subnet create `
+ --resource-group $RESOURCE_GROUP `
+ --vnet-name $VNET_NAME `
+ --name control-plane `
+ --address-prefixes 10.0.0.0/21
+```
+
+```powershell
+az network vnet subnet create `
+ --resource-group $RESOURCE_GROUP `
+ --vnet-name $VNET_NAME `
+ --name applications `
+ --address-prefixes 10.0.8.0/21
+```
+++
+With the VNET established, you can now query for the VNET, control plane, and app subnet IDs.
+
+# [Bash](#tab/bash)
+
+```bash
+VNET_RESOURCE_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query "id" -o tsv | tr -d '[:space:]'`
+```
+
+```bash
+CONTROL_PLANE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name control-plane --query "id" -o tsv | tr -d '[:space:]'`
+```
+
+```bash
+APP_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name ${VNET_NAME} --name applications --query "id" -o tsv | tr -d '[:space:]'`
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$VNET_RESOURCE_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
+```
+
+```powershell
+$CONTROL_PLANE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name control-plane --query "id" -o tsv)
+```
+
+```powershell
+$APP_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name applications --query "id" -o tsv)
+```
+++
+Finally, create the Container Apps environment with the internal VNET and subnets.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp env create \
+ --name $CONTAINERAPPS_ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
+ --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
+ --location "$LOCATION" \
+ --app-subnet-resource-id $APP_SUBNET \
+ --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET \
+ --internal-only
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az containerapp env create `
+ --name $CONTAINERAPPS_ENVIRONMENT `
+ --resource-group $RESOURCE_GROUP `
+ --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
+ --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET `
+ --location "$LOCATION" `
+ --app-subnet-resource-id $APP_SUBNET `
+ --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET `
+ --internal-only
+```
+++
+> [!NOTE]
+> As you call `az containerapp create` to create the container app inside your environment, make sure the value for the `--image` parameter is in lowercase.
+
+The following table describes the parameters used with `containerapp env create`.
+
+| Parameter | Description |
+|||
+| `name` | Name of the container apps environment. |
+| `resource-group` | Name of the resource group. |
+| `logs-workspace-id` | The ID of the Log Analytics workspace. |
+| `logs-workspace-key` | The Log Analytics client secret. |
+| `location` | The Azure location where the environment is to deploy. |
+| `app-subnet-resource-id` | The resource ID of a subnet where containers are injected into the container app. This subnet must be in the same VNET as the subnet defined in `--control-plane-subnet-resource-id`. |
+| `controlplane-subnet-resource-id` | The resource ID of a subnet for control plane infrastructure components. This subnet must be in the same VNET as the subnet defined in `--app-subnet-resource-id`. |
+| `internal-only` | Optional parameter that scopes the environment to IP addresses only available within the custom VNET. |
+
+With your environment created in your custom virtual network, you can deploy container apps into it using the `az containerapp create` command.
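+
+For example, here's a minimal sketch of deploying an app into the environment; the app name is a placeholder, and the sample image is Microsoft's public hello-world container:
+
+```azurecli
+az containerapp create \
+  --name my-container-app \
+  --resource-group $RESOURCE_GROUP \
+  --environment $CONTAINERAPPS_ENVIRONMENT \
+  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest
+```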
+
+### Optional configuration
+
+You have the option of deploying a private DNS and defining custom networking IP ranges for your Container Apps environment.
+
+#### Deploy with a private DNS
+
+If you want to deploy your container app with a private DNS, run the following commands.
+
+First, extract identifiable information from the environment.
+
+# [Bash](#tab/bash)
+
+```bash
+ENVIRONMENT_DEFAULT_DOMAIN=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query defaultDomain --out json | tr -d '"'`
+```
+
+```bash
+ENVIRONMENT_STATIC_IP=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query staticIp --out json | tr -d '"'`
+```
+
+```bash
+VNET_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query id --out json | tr -d '"'`
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$ENVIRONMENT_DEFAULT_DOMAIN=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query defaultDomain -o tsv)
+```
+
+```powershell
+$ENVIRONMENT_STATIC_IP=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query staticIp -o tsv)
+```
+
+```powershell
+$VNET_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query id -o tsv)
+```
+++
+Next, set up the private DNS.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az network private-dns zone create \
+ --resource-group $RESOURCE_GROUP \
+ --name $ENVIRONMENT_DEFAULT_DOMAIN
+```
+
+```azurecli
+az network private-dns link vnet create \
+ --resource-group $RESOURCE_GROUP \
+ --name $VNET_NAME \
+ --virtual-network $VNET_ID \
+ --zone-name $ENVIRONMENT_DEFAULT_DOMAIN -e true
+```
+
+```azurecli
+az network private-dns record-set a add-record \
+ --resource-group $RESOURCE_GROUP \
+ --record-set-name "*" \
+ --ipv4-address $ENVIRONMENT_STATIC_IP \
+ --zone-name $ENVIRONMENT_DEFAULT_DOMAIN
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az network private-dns zone create `
+ --resource-group $RESOURCE_GROUP `
+ --name $ENVIRONMENT_DEFAULT_DOMAIN
+```
+
+```powershell
+az network private-dns link vnet create `
+ --resource-group $RESOURCE_GROUP `
+ --name $VNET_NAME `
+ --virtual-network $VNET_ID `
+ --zone-name $ENVIRONMENT_DEFAULT_DOMAIN -e true
+```
+
+```powershell
+az network private-dns record-set a add-record `
+ --resource-group $RESOURCE_GROUP `
+ --record-set-name "*" `
+ --ipv4-address $ENVIRONMENT_STATIC_IP `
+ --zone-name $ENVIRONMENT_DEFAULT_DOMAIN
+```
+++
+#### Networking parameters
+
+There are three optional networking parameters you can choose to define when calling `containerapp env create`. Use these options when you have a peered VNET with separate address ranges. Explicitly configuring these ranges ensures that the addresses used by the Container Apps environment don't conflict with other ranges in the network infrastructure.
+
+You must either provide values for all three of these properties, or none of them. If they aren't provided, the CLI generates the values for you.
+
+| Parameter | Description |
+|||
+| `platform-reserved-cidr` | The address range used internally for environment infrastructure services. Must have a size between `/21` and `/12`. |
+| `platform-reserved-dns-ip` | An IP address from the `platform-reserved-cidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `platform-reserved-cidr` is set to `10.2.0.0/16`, then `platform-reserved-dns-ip` can't be `10.2.0.0` (this is the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
+| `docker-bridge-cidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
+
+- The `platform-reserved-cidr` and `docker-bridge-cidr` address ranges can't conflict with each other, or with the ranges of either provided subnet. Further, make sure these ranges don't conflict with any other address range in the VNET.
+
+- If these properties aren't provided, the CLI autogenerates the range values based on the address range of the VNET to avoid range conflicts.
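+
+Putting the three parameters together, here's a hedged sketch that extends the earlier `az containerapp env create` call; the CIDR values are illustrative, chosen not to overlap the `10.0.0.0/16` VNET used above, and you should confirm the flag names against your CLI extension version:
+
+```azurecli
+az containerapp env create \
+  --name $CONTAINERAPPS_ENVIRONMENT \
+  --resource-group $RESOURCE_GROUP \
+  --location "$LOCATION" \
+  --app-subnet-resource-id $APP_SUBNET \
+  --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET \
+  --platform-reserved-cidr 10.2.0.0/16 \
+  --platform-reserved-dns-ip 10.2.0.10 \
+  --docker-bridge-cidr 172.17.0.1/16
+```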
++
+## Clean up resources
+
+If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group.
++
+# [Bash](#tab/bash)
+
+```azurecli
+az group delete \
+ --name $RESOURCE_GROUP
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az group delete `
+ --name $RESOURCE_GROUP
+```
++++
+## Additional resources
+
+- Refer to [What is Azure Private Endpoint](../private-link/private-endpoint-overview.md) for more details on configuring your private endpoint.
+
+- To set up DNS name resolution for internal services, you must [set up your own DNS server](../dns/index.yml).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Managing autoscaling behavior](scale-app.md)
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
Title: Provide a virtual network to an Azure Container Apps Preview environment
-description: Learn how to provide a VNET to an Azure Container Apps environment.
+ Title: Provide an external virtual network to an Azure Container Apps Preview environment
+description: Learn how to provide an external VNET to an Azure Container Apps environment.
Previously updated: 2/3/2022
Last updated: 2/18/2022
zone_pivot_groups: azure-cli-or-portal
-# Provide a virtual network to an Azure Container Apps (Preview) environment
+# Provide a virtual network to an external Azure Container Apps (Preview) environment
-As you create an Azure Container Apps [environment](environment.md), a virtual network (VNET) is created for you, or you can provide your own. Network addresses are assigned from a subnet range you define as the environment is created.
-
-- You control the subnet range used by the Container Apps environment.
-- Once the environment is created, the subnet range is immutable.
-- A single load balancer and single Kubernetes service are associated with each container apps environment.
-- Each [revision pod](revisions.md) is assigned an IP address in the subnet.
-- You can restrict inbound requests to the environment exclusively to the VNET by deploying the environment as internal.
+The following example shows you how to create a Container Apps environment in an existing virtual network.
> [!IMPORTANT]
> In order to ensure the environment deployment within your custom VNET is successful, configure your VNET with an "allow-all" configuration by default. The full list of traffic dependencies required to configure the VNET as "deny-all" is not yet available. Refer to [Known issues for public preview](https://github.com/microsoft/azure-container-apps/wiki/Known-Issues-for-public-preview) for additional details.
-
-## Restrictions
-
-Subnet address ranges can't overlap with the following reserved ranges:
-
-- 169.254.0.0/16
-- 172.30.0.0/16
-- 172.31.0.0/16
-- 192.0.2.0/24
-
-Additionally, subnets must have a size between /21 and /12.
-
-## Subnet types
-
-As a Container Apps environment is created, you provide resource IDs for two different subnets. Both subnets must be defined in the same container apps.
--- **App subnet**: Subnet for user app containers. Subnet that contains IP ranges mapped to applications deployed as containers.-- **Control plane subnet**: Subnet for [control plane infrastructure](../azure-resource-manager/management/control-plane-and-data-plane.md) components and user app containers.--
-If the [platformReservedCidr](#networking-parameters) range is defined, both subnets must not overlap with the IP range defined in `platformReservedCidr`.
--
-## Accessibility level
-
-You can deploy your Container Apps environment with an internet-accessible endpoint or with an IP address in your VNET. The accessibility level determines the type of load balancer used with your Container Apps instance.
-
-### External
-
-Container Apps environments deployed as external resources are available for public requests. External environments are deployed with a virtual IP on an external, public facing IP address.
-
-### Internal
-
-When set to internal, the environment has no public endpoint. Internal environments are deployed with a virtual IP (VIP) mapped to an internal IP address. The internal endpoint is an Azure internal load balancer (ILB) and IP addresses are issued from the custom VNET's list of private IP addresses.
--
-To create an internal only environment, provide the `--internal-only` parameter to the `az containerapp env create` command.
--
-## Managed resources
-
-When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment as well as a load balancer. As the load balancer is created in your subscription, there are additional costs associated with deploying the service to a custom virtual network.
-
-## Example
-
-The following example shows you how to create a Container Apps environment in an existing virtual network.
- ::: zone pivot="azure-portal" <!-- Create -->
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md
For many scenarios, disable registry access from public networks. This configuration prevents clients outside the virtual network from reaching the registry endpoints.
### Disable public access - CLI +
+> [!NOTE]
+> If public access is disabled, the `az acr build` commands will no longer work.
To disable public access using the Azure CLI, run [az acr update][az-acr-update] and set `--public-network-enabled` to `false`.

```azurecli
az acr update --name $REGISTRY_NAME --public-network-enabled false
```
+## Run `az acr build` with a private endpoint and private registry
+
+Consider the following options to run `az acr build` successfully.
+> [!NOTE]
+> Once you [disable public network access](/azure/container-registry/container-registry-private-link#disable-public-access), `az acr build` commands will no longer work.
+
+1. Assign a [dedicated agent pool](/azure/container-registry/tasks-agent-pools#virtual-network-support), as shown in the sketch after this list.
+2. If an agent pool isn't available in the region, add the regional [Azure Container Registry service tag IPv4 address ranges](/azure/virtual-network/service-tags-overview#use-the-service-tag-discovery-api) to the [firewall access rules](/azure/container-registry/container-registry-firewall-access-rules#allow-access-by-ip-address-range).
+3. Create an ACR task with a managed identity, and enable trusted services to [access a network-restricted ACR](/azure/container-registry/allow-access-trusted-services#example-acr-tasks).
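+
+For option 1, here's a hedged sketch of creating a dedicated agent pool inside your VNET and building with it; `az acr agentpool` is a preview command, and the pool name, tier, and subnet ID are placeholders:
+
+```azurecli
+az acr agentpool create \
+  --registry myregistry \
+  --name myagentpool \
+  --tier S1 \
+  --subnet-id <subnet-resource-id>
+```
+
+You can then target the pool when building, for example: `az acr build --agent-pool myagentpool --registry myregistry --image sample/hello-world:v1 .`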
+
## Validate private link connection

You should validate that the resources within the subnet of the private endpoint connect to your registry over a private IP address, and have the correct private DNS zone integration.
container-registry Container Registry Repository Scoped Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repository-scoped-permissions.md
To configure repository-scoped permissions, you create a *token* with an associated *scope map*.
* Configure multiple tokens with identical permissions to a set of repositories
* Update token permissions when you add or remove repository actions in the scope map, or apply a different scope map
- Azure Container Registry also provides several system-defined scope maps you can apply when creating tokens. The permissions of system-defined scope maps apply to all repositories in your registry.
+ Azure Container Registry also provides several system-defined scope maps you can apply when creating tokens. The permissions of system-defined scope maps apply to all repositories in your registry. The number of individual *actions* corresponds to the limit of [repositories per scope map](container-registry-skus.md).
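For example, here's a minimal sketch of creating a token whose generated scope map grants read and write on a single repository; the token, registry, and repository names are placeholders:

```azurecli
az acr token create \
  --name sample-token \
  --registry myregistry \
  --repository samples/hello-world content/read content/write
```

Used this way, the CLI generates a scope map for you; to reuse permissions across tokens, create the scope map explicitly with `az acr scope-map create` and pass it to `az acr token create --scope-map`.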
The following image shows the relationship between tokens and scope maps.
container-registry Container Registry Troubleshoot Login https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-login.md
May include one or more of the following:
* Unable to login to registry and you receive error `unauthorized: authentication required` or `unauthorized: Application not registered with AAD`
* Unable to login to registry and you receive Azure CLI error `Could not connect to the registry login server`
* Unable to push or pull images and you receive Docker error `unauthorized: authentication required`
-* Unable to access a registry using `az acr login` and you receive error `CONNECTIVITY_REFRESH_TOKEN_ERROR. Access to registry was denied. Response code: 403.Unable to get admin user credentials with message: Admin user is disabled.Unable to authenticate using AAD or admin login credentials.`
+* Unable to access a registry using `az acr login` and you receive error `CONNECTIVITY_REFRESH_TOKEN_ERROR. Access to registry was denied. Response code: 403. Unable to get admin user credentials with message: Admin user is disabled. Unable to authenticate using AAD or admin login credentials.`
* Unable to access registry from Azure Kubernetes Service, Azure DevOps, or another Azure service
* Unable to access registry and you receive error `Error response from daemon: login attempt failed with status: 403 Forbidden` - See [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md)
* Unable to access or view registry settings in Azure portal or manage registry using the Azure CLI
May include one or more of the following:
* Docker isn't configured properly in your environment - [solution](#check-docker-configuration)
* The registry doesn't exist or the name is incorrect - [solution](#specify-correct-registry-name)
* The registry credentials aren't valid - [solution](#confirm-credentials-to-access-registry)
-* The registry public access is disabled.Public network access rules on the registry prevent access - [solution](container-registry-troubleshoot-access.md#configure-public-access-to-registry)
+* The registry public access is disabled. Public network access rules on the registry prevent access - [solution](container-registry-troubleshoot-access.md#configure-public-access-to-registry)
* The credentials aren't authorized for push, pull, or Azure Resource Manager operations - [solution](#confirm-credentials-are-authorized-to-access-registry)
* The credentials are expired - [solution](#check-that-credentials-arent-expired)
container-registry Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/zone-redundancy.md
Zone redundancy is a feature of the Premium container registry service tier. Fo
|Americas |Europe |Africa |Asia Pacific |
|||||
- |Brazil South<br/>Canada Central<br/>Central US<br/>East US<br/>East US 2<br/>South Central US<br/>US Government Virginia<br/>West US 2<br/>West US 3 |France Central<br/>Germany West Central<br/>North Europe<br/>Norway East<br/>West Europe<br/>UK South |South Africa North<br/> |Australia East<br/>Central India<br/>Japan East<br/>Korea Central<br/> |
+ |Brazil South<br/>Canada Central<br/>Central US<br/>East US<br/>East US 2<br/>South Central US<br/>US Government Virginia<br/>West US 2<br/>West US 3 |France Central<br/>Germany West Central<br/>North Europe<br/>Norway East<br/>West Europe<br/>UK South |South Africa North<br/> |Australia East<br/>Central India<br/>Japan East<br/>Korea Central<br/>Southeast Asia<br/>East Asia<br/> |
* Region conversions to availability zones aren't currently supported. To enable availability zone support in a region, the registry must either be created in the desired region, with availability zone support enabled, or a replicated region must be added with availability zone support enabled.
* A registry with an AZ-enabled stamp creates a home region replication with an AZ-enabled stamp by default. The AZ stamp can't be disabled once it's enabled.
cost-management-billing Cost Analysis Common Uses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-common-uses.md
Dimensions allow you to organize your costs based on various metadata values shown in your usage data.
1. Select the **Group by** filter.

   [![Select a Group by item](./media/cost-analysis-common-uses/group-by.png)](./media/cost-analysis-common-uses/group-by.png#lightbox)

1. Optionally, you save the view for later use.
-1. Click a pie chart below the graph to view more detailed data.
+1. Select a pie chart below the graph to view more detailed data.
[![View cost breakdown by selected dimensions](./media/cost-analysis-common-uses/drill-down.png)](./media/cost-analysis-common-uses/drill-down.png#lightbox)

## View costs per day or by month
Reserved instances provide a way for you to save money with Azure. With reservations, you receive discounted rates in exchange for a one-year or three-year commitment.
1. In the Azure portal, navigate to cost analysis for your scope. For example, **Cost Management + Billing** > **Cost Management** > **Cost analysis**. 1. Add a filter for **Pricing Model: Reservation**.
-1. Under **Scope** and next to the cost shown, click the down arrow symbol, select either **Actual cost** or **Amortized cost** metric.
+1. Under **Scope** and next to the cost shown, select the down arrow symbol, select either **Actual cost** or **Amortized cost** metric.
![Select a cost metric](./media/cost-analysis-common-uses/metric-cost.png)
Support for tags applies to usage reported *after* the tag was applied to the resource.
Your usage details report file, in CSV format, provides a breakdown of all the charges that accrued towards an invoice. You can use the report to compare it to, and better understand, your invoice. Each billed charge on your invoice corresponds to broken-down charges in the usage report. 1. In the Azure portal, navigate to the **Usage and Charges** tab for a billing account or subscription. For example: **Cost Management + Billing** > **Billing** > **Usage + charges**.
-1. Select the line item to download from and then click the download symbol.
+1. Select the line item to download from and then select the download symbol.
[![Download usage and charges](./media/cost-analysis-common-uses/download1.png)](./media/cost-analysis-common-uses/download1.png#lightbox)

1. Select the usage file to download.

   ![Choose a usage file to download](./media/cost-analysis-common-uses/download2.png)
Costs are only shown for your active enrollment. If you transferred an enrollmen
1. In the Azure portal, navigate to **Cost Management + Billing** > **Overview**.
-1. Click **Breakdown** for the current month and view your Azure Prepayment (previously called monetary commitment) burn down.
+1. Select **Breakdown** for the current month and view your Azure Prepayment (previously called monetary commitment) burn down.
[![EA costs overview - breakdown summary](./media/cost-analysis-common-uses/breakdown1.png)](./media/cost-analysis-common-uses/breakdown1.png#lightbox)
-1. Click the **Usage and Charges** tab and view the prior month's breakdown in the chosen timespan.
+1. Select the **Usage and Charges** tab and view the prior month's breakdown in the chosen timespan.
[![Usage and charges tab](./media/cost-analysis-common-uses/breakdown2.png)](./media/cost-analysis-common-uses/breakdown2.png#lightbox)

## View enrollment monthly cost by term
cost-management-billing Tutorial Acm Opt Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-opt-recommendations.md
The 5% or less CPU utilization setting is the default, but you can adjust the setting.
Although some scenarios can result in low utilization by design, you can often save money by changing the size of your virtual machines to less expensive sizes. Your actual savings might vary if you choose a resize action. Let's walk through an example of resizing a virtual machine.
-In the list of recommendations, click the **Right-size or shutdown underutilized virtual machines** recommendation. In the list of virtual machine candidates, choose a virtual machine to resize and then click the virtual machine. The virtual machine's details are shown so that you can verify the utilization metrics. The **potential yearly savings** value is what you can save if you shut down or remove the VM. Resizing a VM will probably save you money, but you won't save the full amount of the potential yearly savings.
+In the list of recommendations, select the **Right-size or shutdown underutilized virtual machines** recommendation. In the list of virtual machine candidates, choose a virtual machine to resize and then select the virtual machine. The virtual machine's details are shown so that you can verify the utilization metrics. The **potential yearly savings** value is what you can save if you shut down or remove the VM. Resizing a VM will probably save you money, but you won't save the full amount of the potential yearly savings.
![Example of Recommendation details](./media/tutorial-acm-opt-recommendations/recommendation-details.png)
Next, you're presented with a list of available resize options. Choose the one that best fits your needs.
![Example list of available VM sizes where you can choose a size](./media/tutorial-acm-opt-recommendations/choose-size.png)
-After you choose a suitable size, click **Resize** to start the resize action.
+After you choose a suitable size, select **Resize** to start the resize action.
Resizing requires an actively running virtual machine to restart. If the virtual machine is in a production environment, we recommend that you run the resize operation after business hours. Scheduling the restart can reduce disruptions caused by momentary unavailability.
cost-management-billing Account Admin Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/account-admin-tasks.md
To change the active payment method to a credit card that is already saved:
![Screenshot that shows box checked next to credit card](./media/account-admin-tasks/subscription-checked-payment-method-x.png)
-1. Click **Set active** in the command bar.
+1. Select **Set active** in the command bar.
![Screenshot that shows set active button](./media/account-admin-tasks/subscription-checked-payment-method-set-active.png)

### Edit credit card details
-To edit credit card details such as the expiration date or address, click on the credit card that you'd like to edit. A credit card form will appear on the right.
+To edit credit card details such as the expiration date or address, select the credit card that you'd like to edit. A credit card form will appear on the right.
![Screenshot that shows credit card selected](./media/account-admin-tasks/subscription-edit-payment-method-x.png)
-Update the credit card details and click **Save**.
+Update the credit card details and select **Save**.
### Remove a credit card from the account
Update the credit card details and click **Save**.
![Screenshot that shows box checked next to credit card](./media/account-admin-tasks/subscription-checked-payment-method-x.png)
-1. Click **Delete** in the command bar.
+1. Select **Delete** in the command bar.
![Screenshot that shows delete button](./media/account-admin-tasks/subscription-checked-payment-method-delete.png)
If you are eligible to pay by invoice (check/wire transfer), you can switch your
![Screenshot shows Payment methods page with Pay by invoice selected.](./media/account-admin-tasks/subscription-payment-methods-pay-by-invoice.png) 1. Enter the address for the invoice payment method.
-1. Click **Next**.
+1. Select **Next**.
If you want to be approved to pay by invoice, see [learn how to pay by invoice](pay-by-invoice.md). ### Edit invoice payment address
-To edit the address of your invoice payment method, click on **Invoice** in the list of payment methods for your subscription. The address form will open on the right.
+To edit the address of your invoice payment method, select **Invoice** in the list of payment methods for your subscription. The address form will open on the right.
## Remove spending limit
The spending limit isn't available for subscriptions with commitment plans or
> [!NOTE] > If you don't see some of your Visual Studio subscriptions here, it might be because you changed a subscription directory at some point. For these subscriptions, you need to switch the directory to the original directory (the directory in which you initially signed up). Then, repeat step 2.
-1. In the Subscription overview, click the orange banner to remove the spending limit.
+1. In the Subscription overview, select the orange banner to remove the spending limit.
![Screenshot that shows remove spending limit banner](./media/account-admin-tasks/msdn-remove-spending-limit-banner-x.png)
The spending limit isn't available for subscriptions with commitment plans or
![Screenshot that shows remove spending limit blade](./media/account-admin-tasks/remove-spending-limit-blade-x.png)
-1. Click **Select payment method** to choose a payment method for your subscription. This will become the active payment method for your subscription.
+1. Select **Select payment method** to choose a payment method for your subscription. This will become the active payment method for your subscription.
-1. Click **Finish**.
+1. Select **Finish**.
## Add credits to Azure in Open subscription
If you have an Azure in Open Licensing subscription, you can add credits to your
1. If you chose product key: - Enter the product key
- - Click **Validate**
+ - Select **Validate**
1. If you chose credit card:
- - Click **Select payment method** to add a credit card or select an existing one.
+ - Select **Select payment method** to add a credit card or select an existing one.
- Specify the amount of credits you want to add.
-1. Click **Apply**
+1. Select **Apply**.
## Usage details files comparison
cost-management-billing Azure Account For Microsoft 365 Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/azure-account-for-microsoft-365-subscription.md
If you already have both a Microsoft 365 account and an Azure subscription, you
## Get a Microsoft 365 subscription by using your Azure account 1. Go to the [Microsoft 365 product page](https://www.microsoft.com/microsoft-365/business/all-business), and select a plan.
-2. Click **Sign in** on the upper-right corner of the page.
+2. Select **Sign in** on the upper-right corner of the page.
![screenshot of Microsoft 365 trial page](./media/azure-account-for-microsoft-365-subscription/12-office-365-trial-page.png) 3. Sign in with your Azure account credentials. If you're creating a subscription for your organization, use an Azure account that's a member of the Global Admin or Billing Admin directory role in your Azure Active Directory tenant. ![Screenshot of Microsoft sign-in](./media/azure-account-for-microsoft-365-subscription/13-office-365-sign-in.png)
-4. Click **Try now**.
+4. Select **Try now**.
![Screenshot that confirms your order for Microsoft 365.](./media/azure-account-for-microsoft-365-subscription/14-office-365-confirm-your-order.png)
-5. On the order receipt page, click **Continue**.
+5. On the order receipt page, select **Continue**.
![Screenshot of the Microsoft 365 order receipt](./media/azure-account-for-microsoft-365-subscription/15-office-365-order-receipt.png) Now you're all set. If you created the Microsoft 365 subscription for your organization, use the following steps to check that your Azure AD users are now in Microsoft 365. 1. Open the Microsoft 365 admin center.
-2. Expand **USERS**, and then click **Active Users**.
+2. Expand **USERS**, and then select **Active Users**.
![Screenshot of the Microsoft 365 admin center users](./media/azure-account-for-microsoft-365-subscription/16-microsoft-365-admin-center-users.png)
After you sign up, the Microsoft 365 subscription is added to the same Azure Act
## <a id="RoleInAzureAD"></a>Check my account permissions in Azure AD 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Click **All services**, and then search for **Active Directory**.
+2. Select **All services**, and then search for **Active Directory**.
![Screenshot of Active Directory in the Azure portal](./media/azure-account-for-microsoft-365-subscription/billing-more-services-active-directory.png)
-3. Click **Users and groups** > **All users**.
+3. Select **Users and groups** > **All users**.
4. Select the user name. ![Screenshot that shows the Azure Active Directory users](./media/azure-account-for-microsoft-365-subscription/billing-users-groups.png)
-5. Click **Directory role**.
+5. Select **Directory role**.
![Screenshot that shows the Azure portal directory role](./media/azure-account-for-microsoft-365-subscription/billing-user-directory-role.png) 6. The role **Global administrator** or **Limited administrator** > **Billing administrator** is required to create a Microsoft 365 subscription for users in your existing Azure Active Directory.
cost-management-billing Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-subscription.md
You need the following permissions to create subscriptions:
1. If you have access to multiple billing accounts, select the billing account for which you want to create the subscription.
-1. Fill the form and click **Create**. The tables below list the fields on the form for each type of billing account.
+1. Fill the form and select **Create**. The tables below list the fields on the form for each type of billing account.
**Enterprise Agreement**
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
For most subscriptions you can download your invoice from the Azure portal. If y
![Screenshot that shows the Billing & usage option](./media/download-azure-invoice-daily-usage-date/billingandusage.png)
-3. Click the download button to download a copy of your PDF invoice and then select **Download invoice**. If it says **Not available**, see [Why don't I see an invoice for the last billing period?](#noinvoice)
+3. Select the download symbol to download a copy of your PDF invoice and then select **Download invoice**. If it says **Not available**, see [Why don't I see an invoice for the last billing period?](#noinvoice)
![Screenshot that shows billing periods, the download option, and total charges for each billing period](./media/download-azure-invoice-daily-usage-date/downloadinvoice.png)
-4. You can also download your a daily breakdown of consumed quantities and estimated charges by clicking **Download csv**.
+4. You can also download a daily breakdown of consumed quantities and estimated charges by selecting **Download csv**.
![Screenshot that shows Download invoice and usage page](./media/download-azure-invoice-daily-usage-date/usageandinvoice.png)
Invoices are generated for each [billing profile](../understand/mca-overview.md#
2. Select a billing profile. 3. Select **Invoices**. 4. In the invoice grid, find the row of the invoice you want to download.
-5. Click on the download button at the end of the row.
+5. Select the download symbol at the end of the row.
6. In the download context menu, select **Invoice**. If you don't see an invoice for the last billing period, see the following section.
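If you script billing tasks, the Az.Billing PowerShell module can also list invoices; a minimal sketch, assuming your account type exposes invoice data through this cmdlet:

```azurepowershell
# Retrieve the most recent invoice visible to the signed-in account
Get-AzBillingInvoice -Latest
```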
You can opt in and configure additional recipients to receive your Azure invoice
### Get your subscription's invoices in email
-1. Select your subscription from the [Subscriptions page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Opt in for each subscription you own. Click **Invoices** then **Email my invoice**.
+1. Select your subscription from the [Subscriptions page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Opt in for each subscription you own. Select **Invoices**, and then select **Email my invoice**.
![Screenshot that shows the opt-in flow](./media/download-azure-invoice-daily-usage-date/invoicesdeeplink01.png)
-2. Click **Opt in** and accept the terms.
+2. Select **Opt in** and accept the terms.
![Screenshot that shows the opt-in flow step 2](./media/download-azure-invoice-daily-usage-date/invoicearticlestep02.png)
If you don't get an email after following the steps, make sure your email addres
### Opt out of getting your subscription's invoices in email
-You can opt out of getting your invoice by email by following the steps above and clicking **Opt out of emailed invoices**. This option removes any email addresses set to receive invoices in email. You can reconfigure recipients if you opt back in.
+You can opt out of getting your invoice by email by following the steps above and selecting **Opt out of emailed invoices**. This option removes any email addresses set to receive invoices in email. You can reconfigure recipients if you opt back in.
![Screenshot that shows the opt-out flow](./media/download-azure-invoice-daily-usage-date/invoicearticlestep04.png)
If you have a Microsoft Customer Agreement, you can opt in to get your invoice i
1. Under **Settings**, select **Properties**. 1. Under **Email Invoice**, select **Update email invoice preference**. 1. Select **Opt in**.
-1. Click **Update**.
+1. Select **Update**.
### Opt out of getting your billing profile invoices in email
-You can opt out of getting your invoice by email by following the steps above and clicking **Opt out**. All Owners, Contributors, Readers, and Invoice managers will be opted out of getting the invoice by email, too. If you are a Reader, you cannot change the email invoice preference.
+You can opt out of getting your invoice by email by following the steps above and selecting **Opt out**. All Owners, Contributors, Readers, and Invoice managers will be opted out of getting the invoice by email, too. If you are a Reader, you cannot change the email invoice preference.
## Azure Government support for invoices
cost-management-billing Ea Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-azure-marketplace.md
To download the price list:
1. In the Azure Enterprise portal, go to **Reports** > **Price Sheet**. 1. In the top-right corner, find the link to Azure Marketplace price list under your username.
-1. Right-click the link and select **Save Target As**.
+1. Select and hold (or right-click) the link and select **Save Target As**.
1. On the **Save** window, change the title of the document to `AzureMarketplacePricelist.zip`, which will change the file from an .xlsx to a .zip file. 1. After the download is complete, you'll have a zip file with country-specific price lists. 1. LSPs should reference the individual country file for country-specific pricing. LSPs can use the **Notifications** tab to be aware of SKUs that are net new or retired.
cost-management-billing Ea Portal Agreements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-agreements.md
Please make sure to review the commercial information - monetary balance informa
**Step One: Add price markup**
-1. From the Enterprise Portal, click **Reports** on the left navigation.
-1. Under _Usage Summary_, click the blue **Markup** wording.
-1. Enter the markup percentage (between 0 to 100) and click **Preview**.
+1. From the Enterprise Portal, select **Reports** on the left navigation.
+1. Under _Usage Summary_, select the blue **Markup** wording.
+1. Enter the markup percentage (between 0 and 100) and select **Preview**.
**Step Two: Review and validate**
Both the service prices and the Prepayment balances will be marked up by the sam
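As an illustrative example (numbers are hypothetical, not from the source): publishing a 10% markup would display a service priced at $100.00 as $110.00, and a Prepayment balance of $50,000 as $55,000.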
**Step Three: Publish**
-After pricing is reviewed and validated, click **Publish**.
+After pricing is reviewed and validated, select **Publish**.
Pricing with markup will be available to enterprise administrators immediately after selecting publish. Edits can't be made to markup. You must disable markup and begin from Step One. ### Which enrollments have a markup enabled?
-To check if an enrollment has a markup published, click **Manage** on the left navigation, and click on the **Enrollment** tab. Select the enrollment box to check, and view the markup status under _Enrollment Detail_. It will display the current status of the markup feature for that EA as Disabled, Preview, or Published.
+To check if an enrollment has a markup published, select **Manage** on the left navigation, and select the **Enrollment** tab. Select the enrollment box to check, and view the markup status under _Enrollment Detail_. It will display the current status of the markup feature for that EA as Disabled, Preview, or Published.
### How can the customer download usage estimates?
Enterprise Administrators can assign Account Owners to provision previously purc
### View the price sheet to check included quantity 1. Sign in as an Enterprise Administrator.
-1. Click **Reports** on the left navigation.
-1. Click the **Price Sheet** tab.
-1. Click the 'Download' icon in the top-right corner.
+1. Select **Reports** on the left navigation.
+1. Select the **Price Sheet** tab.
+1. Select the 'Download' icon in the top-right corner.
1. Find the corresponding Plan SKU part numbers by filtering on the "Included Quantity" column and selecting values greater than "0". Direct customers can view the price sheet in the Azure portal. See [view price sheet in Azure portal](ea-pricing.md#download-pricing-for-an-enterprise-agreement).
Direct customer can view price sheet in Azure portal. See [view price sheet in A
**Step One: Sign in to account** 1. From the Azure EA Portal, select the **Manage** tab and navigate to **Subscription** on the top menu. 1. Verify that you're logged in as the account owner of this account.
-1. Click **+Add Subscription**.
-1. Click **Purchase**.
+1. Select **+Add Subscription**.
+1. Select **Purchase**.
The first time you add a subscription to an account, you'll need to provide your contact information. When adding later subscriptions, your contact information will be populated for you.
The first time you add a subscription to your account, you'll be asked to accept
All new subscriptions will be added with the default "Microsoft Azure Enterprise" subscription name. It's important to update the subscription name to differentiate it from the other subscriptions within your Enterprise Enrollment and ensure that it's recognizable on reports at the enterprise level.
-Click **Subscriptions**, click on the subscription you created, and then click **Edit Subscription Details.**
+Select **Subscriptions**, select the subscription you created, and then select **Edit Subscription Details.**
-Update the subscription name and service administrator and click on the checkmark. The subscription name will appear on reports and it will also be the name of the project associated with the subscription on the development portal.
+Update the subscription name and service administrator and select the checkmark. The subscription name will appear on reports and it will also be the name of the project associated with the subscription on the development portal.
New subscriptions may take up to 24 hours to propagate in the subscriptions list. Only account owners can view and manage subscriptions.
When new Account Owners (AO) are added to the enrollment for the first time, the
This scenario occurs when the customer has deployed services under the wrong enrollment number or selected the wrong services.
-To validate if you're deploying under the right enrollment, you can check your included units information via the price sheet. Please sign in as an Enterprise Administrator and click on **Reports** on the left navigation and select **Price Sheet** tab. Click the Download icon in the top-right corner and find the corresponding Plan SKU part numbers with filter on column "Included Quantity" and select values greater than "0".
+To validate that you're deploying under the right enrollment, you can check your included units information via the price sheet. Sign in as an Enterprise Administrator, select **Reports** on the left navigation, and then select the **Price Sheet** tab. Select the Download symbol in the top-right corner and find the corresponding Plan SKU part numbers by filtering on the "Included Quantity" column and selecting values greater than "0".
Ensure that your OMS plan is showing on the price sheet under included units. If there are no included units for OMS plan on your enrollment, your OMS plan may be under another enrollment. Please contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport).
cost-management-billing Ea Portal Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-rest-apis.md
Role owners can perform the following steps in the Azure EA portal. Navigate to
### Generate or retrieve the API Key 1. Sign in as an enterprise administrator.
-2. Click **Reports** on the left navigation window and then click the **Download Usage** tab.
-3. Click **API Access Key**.
+2. Select **Reports** on the left navigation window and then select the **Download Usage** tab.
+3. Select **API Access Key**.
4. Under **Enrollment Access Keys**, select the generate key symbol to generate either a primary or secondary key. 5. Select **Expand Key** to view the entire generated API access key. 6. Select **Copy** to get the API access key for immediate use.
Role owners can perform the following steps in the Azure EA portal. Navigate to
If you want to give the API access keys to people who aren't enterprise administrators in your enrollment, perform the following steps:
-1. In the left navigation window, click **Manage**.
-2. Click the pencil symbol next to **DA view charges** (Department Administrator view charges).
-3. Select **Enable** and then click **Save**.
-4. Click the pencil symbol next to **AO view charges** (Account Owner view charges).
-5. Select **Enable** and then click **Save**.
+1. In the left navigation window, select **Manage**.
+2. Select the pencil symbol next to **DA view charges** (Department Administrator view charges).
+3. Select **Enable** and then select **Save**.
+4. Select the pencil symbol next to **AO view charges** (Account Owner view charges).
+5. Select **Enable** and then select **Save**.
![Example showing DA and AO view charges enabled](./media/ea-portal-rest-apis/create-ea-generate-or-retrieve-api-key-enable-ao-do-view.png) The preceding steps give API access key holders access to cost and pricing information in usage reports.
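As a hedged illustration of how a key holder might call the EA reporting endpoints with the generated key (the enrollment number below is hypothetical):

```azurepowershell
# Query EA usage details with the API access key from the EA portal
$enrollment = "123456"             # hypothetical enrollment number
$apiKey     = "<API access key>"   # paste the key generated above
$headers    = @{ Authorization = "Bearer $apiKey" }
Invoke-RestMethod -Uri "https://consumption.azure.com/v2/enrollments/$enrollment/usagedetails" -Headers $headers
```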
cost-management-billing Ea Portal Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-troubleshoot.md
The first work or school account added to the enrollment determines the _default
To update the Authentication Level: 1. Sign in to the Azure [EA portal](https://ea.azure.com/) as an Enterprise Administrator.
-2. Click **Manage** on the left navigation panel.
-3. Click the **Enrollment** tab.
+2. Select **Manage** on the left navigation panel.
+3. Select the **Enrollment** tab.
4. Under **Enrollment Details**, select **Auth Level**.
-5. Click the pencil symbol.
-6. Click **Save**.
+5. Select the pencil symbol.
+6. Select **Save**.
![Example showing authentication levels](./media/ea-portal-troubleshoot/create-ea-authentication-level-types.png)
If you get an error message when you try to sign in to the Azure EA portal, use
- Use an in-private or incognito browser session to sign in so that no cookies or cached information from previous or existing sessions are kept. Clear your browser's cache and use an in-private or incognito window to open https://ea.azure.com. - If you get an _Invalid User_ error when using a Microsoft account, it might be because you have multiple Microsoft accounts. The one that you're trying to sign in with isn't the primary email address. Or, if you get an _Invalid User_ error, it might be because the wrong account type was used when the user was added to the enrollment. For example, a work or school account instead of a Microsoft account. In this example, you have another EA admin add the correct account or you need to contact [support](https://support.microsoft.com/supportforbusiness/productselection?sapId=cf791efa-485b-95a3-6fad-3daf9cd4027c).
- - If you need to check the primary alias, go to [https://account.live.com](https://account.live.com). Then, click **Your Info** and then click **Manage how to sign in to Microsoft**. Follow the prompts to verify an alternate email address and obtain a code to access sensitive information. Enter the security code. Select **Set it up later** if you don't want to set up two-factor authentication.
+ - If you need to check the primary alias, go to [https://account.live.com](https://account.live.com). Then, select **Your Info** and then select **Manage how to sign in to Microsoft**. Follow the prompts to verify an alternate email address and obtain a code to access sensitive information. Enter the security code. Select **Set it up later** if you don't want to set up two-factor authentication.
- You'll see the **Manage how to sign in to Microsoft** page where you can view your account aliases. Check that the primary alias is the one that you're using to sign in to the Azure EA portal. If it isn't, you can make it your primary alias. Or, you can use the primary alias for Azure EA portal instead. ## Next steps
cost-management-billing Ea Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-pricing.md
If you have an MCA, you must be the billing profile owner, contributor, reader,
1. Select a billing profile. Depending on your access, you might need to select a billing account first. 1. Select **Invoices**. 1. In the invoice grid, find the row of the invoice corresponding to the price sheet you want to download.
-1. Click the ellipsis (`...`) at the end of the row.
+1. Select the ellipsis (`...`) at the end of the row.
![Screenshot that shows the ellipsis selected](./media/ea-pricing/billingprofile-invoicegrid-new.png) 1. If you want to see prices for the services in the selected invoice, select **Invoice price sheet**. 1. If you want to see prices for all Azure services for the given billing period, select **Azure price sheet**.
cost-management-billing Grant Access To Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/grant-access-to-create-subscription.md
To [create subscriptions under an enrollment account](programmatically-create-su
# [PowerShell](#tab/azure-powershell)
- Use the [Get-AzEnrollmentAccount](/powershell/module/az.billing/get-azenrollmentaccount) cmdlet to list all enrollment accounts you have access to. Select **Try it** to open [Azure Cloud Shell](https://shell.azure.com/). To paste the code, right-click the shell windows, and the select **Paste**.
+ Use the [Get-AzEnrollmentAccount](/powershell/module/az.billing/get-azenrollmentaccount) cmdlet to list all enrollment accounts you have access to. Select **Try it** to open [Azure Cloud Shell](https://shell.azure.com/). To paste the code, select and hold (or right-click) the shell window, and then select **Paste**.
```azurepowershell-interactive Get-AzEnrollmentAccount
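Once you've identified the enrollment account, a role assignment at that scope is what grants subscription-creation rights; a minimal sketch (the sign-in name and enrollment account object ID are placeholders):

```azurepowershell
# Grant Owner at the enrollment account scope
New-AzRoleAssignment -RoleDefinitionName "Owner" `
  -SignInName "user@contoso.com" `
  -Scope "/providers/Microsoft.Billing/enrollmentAccounts/<enrollmentAccountObjectId>"
```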
To [create subscriptions under an enrollment account](programmatically-create-su
# [Azure CLI](#tab/azure-cli)
- Use the [az billing enrollment-account list](/cli/azure/billing) command to list all enrollment accounts you have access to. Select **Try it** to open [Azure Cloud Shell](https://shell.azure.com/). To paste the code, right-click the shell windows, and the select **Paste**.
+ Use the [az billing enrollment-account list](/cli/azure/billing) command to list all enrollment accounts you have access to. Select **Try it** to open [Azure Cloud Shell](https://shell.azure.com/). To paste the code, select and hold (or right-click) the shell window, and then select **Paste**.
```azurecli-interactive az billing enrollment-account list
cost-management-billing Mca Section Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-section-invoice.md
To create a billing profile, you need to be a **billing account owner** or a **b
[![Screenshot that shows billing profile list with Add selected.](./media/mca-section-invoice/mca-list-profiles.png)](./media/mca-section-invoice/mca-list-profiles-zoomed-in.png#lightbox)
-4. Fill the form and click **Create**.
+4. Fill the form and select **Create**.
[![Screenshot that shows billing profile creation page](./media/mca-section-invoice/mca-add-profile.png)](./media/mca-section-invoice/mca-add-profile-zoomed-in.png#lightbox)
Once you have customized your billing account based on your needs, you can link
7. Select an Azure plan and enter a friendly name for your subscription.
-9. Click **Create**.
+9. Select **Create**.
### Link existing subscriptions and products
If you have existing Azure subscriptions or other products such as Azure Marketp
[![Screenshot that shows the option to change invoice section](./media/mca-section-invoice/mca-select-change-invoice-section.png)](./media/mca-section-invoice/mca-select-change-invoice-section-zoomed-in.png#lightbox)
-4. In the page, click on ellipsis (three dots) for the subscription or product that you want to link to a new invoice section. Select **Change invoice section**.
+4. On the page, select the ellipsis (three dots) for the subscription or product that you want to link to a new invoice section. Select **Change invoice section**.
5. Select the new billing profile and the invoice section from the dropdown.
cost-management-billing No Subscriptions Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/no-subscriptions-found.md
This problem occurs if you selected the wrong directory, or if your account d
To fix this issue:
-* Make sure that the correct Azure directory is selected by clicking your account at the top right.
+* Make sure that the correct Azure directory is selected. You can check it by selecting your account at the top right.
![Select the directory at the top right of the Azure portal](./media/no-subscriptions-found/directory-switch.png) * If the right Azure directory is selected but you still receive the error message, [assign the Owner role to your account](../../role-based-access-control/role-assignments-portal.md).
cost-management-billing Open Banking Strong Customer Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/open-banking-strong-customer-authentication.md
If your bank rejects the charges, your Azure account status will change to **Pas
1. Sign in to the [Azure portal](https://portal.azure.com/) as the Account Administrator. 2. Search on **Cost Management + Billing.** 3. On the **Cost Management + Billing** **Overview** page, review the status column in the **My subscriptions** grid.
-4. If your subscription is labeled **Past due**, click **Settle balance**. You're prompted to complete multi-factor authentication during the process.
+4. If your subscription is labeled **Past due**, select **Settle balance**. You're prompted to complete multi-factor authentication during the process.
### Settle outstanding charges for Marketplace and reservation purchases
cost-management-billing Charge Back Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/charge-back-usage.md
Users with an individual subscription can get the amortized cost data from their
## See reservation usage data for show back and charge back 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Navigate to **Cost Management + Billing**
-3. Select **Cost analysis** from left navigation
+2. Navigate to **Cost Management + Billing**.
+3. Select **Cost analysis** from the left navigation.
4. Under **Actual Cost**, select the **Amortized Cost** metric. 5. To see which resources were used by a reservation, apply a filter for **Reservation** and then select reservations. 6. Set the **Granularity** to **Monthly** or **Daily**.
Here's a video showing how to view reservation usage costs at subscription, reso
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4sQOw] ## Get the data for show back and charge back 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Navigate to **Cost Management + Billing**
-3. Select **Export** from left navigation
-4. Click on **Add** button
-5. Select Amortized cost as the metric button and setup your export
+2. Navigate to **Cost Management + Billing**.
+3. Select **Export** from the left navigation.
+4. Select **Add**.
+5. Select **Amortized cost** as the metric and set up your export.
-the EffectivePrice for the usage that gets reservation discount is the prorated cost of the reservation (instead of being zero). This helps you know the monetary value of reservation consumption by a subscription, resource group or a resource, and can help you charge back for the reservation utilization internally. The dataset also has unused reservation hours.
+The EffectivePrice for usage that gets the reservation discount is the prorated cost of the reservation (instead of being zero). This helps you know the monetary value of reservation consumption by a subscription, resource group, or resource, and can help you charge back for the reservation utilization internally. The dataset also has unused reservation hours.
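To make the proration concrete (illustrative numbers, not from the source): a one-year reservation purchased for $8,760 covers 8,760 hours, so amortized data shows an EffectivePrice of $1.00 per hour on each usage record the reservation covers. Summing those records per subscription or resource group gives each team's share for charge back.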
## Get Azure consumption and reservation usage data using API
If you're an EA admin, you can download the CSV file that contains new usage dat
In the Azure portal, navigate to [Cost management + billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts). 1. Select the billing account.
-2. Click **Usage + charges**.
-3. Click **Download**.
+2. Select **Usage + charges**.
+3. Select **Download**.
![Example showing where to Download the CSV usage data file in the Azure portal](./media/understand-reserved-instance-usage-ea/portal-download-csv.png) 4. In **Usage Details**, select **Amortized usage data**.
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
While applying reservation discounts on your usage, Azure processes the reservat
3. Reservations scoped to a management group 4. Reservations with a shared scope (multiple subscriptions), described previously
-You can always update the scope after you buy a reservation. To do so, go to the reservation, click **Configuration**, and rescope the reservation. Rescoping a reservation isn't a commercial transaction. Your reservation term isn't changed. For more information about updating the scope, see [Update the scope after you purchase a reservation](manage-reserved-vm-instance.md#change-the-reservation-scope).
+You can always update the scope after you buy a reservation. To do so, go to the reservation, select **Configuration**, and rescope the reservation. Rescoping a reservation isn't a commercial transaction. Your reservation term isn't changed. For more information about updating the scope, see [Update the scope after you purchase a reservation](manage-reserved-vm-instance.md#change-the-reservation-scope).
:::image type="content" source="./media/prepare-buy-reservation/rescope-reservation-management-group.png" alt-text="Example showing a reservation scope change" lightbox="./media/prepare-buy-reservation/rescope-reservation-management-group.png" :::
cost-management-billing Prepay App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-app-service.md
To buy an instance:
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Select **All services** > **Reservations**.
-3. Select **Add** to purchase a new reservation and then click **Instance**.
+3. Select **Add** to purchase a new reservation and then select **Instance**.
4. Enter required fields. Running Premium v3 reserved instances that match the attributes you select qualify for the reservation discount. The actual number of your Premium v3 reserved instances that get the discount depends on the scope and quantity selected. If you have an EA agreement, you can use the **Add more option** to quickly add additional instances. The option isn't available for other subscription types.
To buy an instance:
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Select **All services** > **Reservations**.
-3. Select **Add** to purchase a new reservation and then click **Instance**.
+3. Select **Add** to purchase a new reservation and then select **Instance**.
4. Enter required fields. Running Isolated v2 reserved instances that match the attributes you select qualify for the reservation discount. The actual number of your Isolated v2 reserved instances that get the discount depends on the scope and quantity selected. If you have an EA agreement, you can use the **Add more option** to quickly add additional instances. The option isn't available for other subscription types.
You can buy Isolated Stamp reserved capacity in the [Azure portal](https://porta
- **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. For Enterprise Agreement customers, the billing context is the enrollment. For individual subscriptions with pay-as-you-go rates, the billing scope is all eligible subscriptions created by the account administrator. - **Management group** - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope. 1. Select a **Region** to choose an Azure region that's covered by the reserved capacity and add the reservation to the cart.
-1. Select an Isolated Plan type and then click **Select**.
+1. Select an Isolated Plan type and then select **Select**.
![Example](./media/prepay-app-service/app-service-isolated-stamp-select.png)
-1. Enter the quantity of App Service Isolated stamps to reserve. For example, a quantity of three would give you three reserved stamps a region. Click **Next: Review + Buy**.
-1. Review and click **Buy now**.
+1. Enter the quantity of App Service Isolated stamps to reserve. For example, a quantity of three would give you three reserved stamps per region. Select **Next: Review + Buy**.
+1. Review and select **Buy now**.
After purchase, go to [Reservations](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade) to view the purchase status and monitor it at any time.
cost-management-billing Prepay Jboss Eap Integrated Support App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-jboss-eap-integrated-support-app-service.md
To buy an instance:
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Select **All services** > **Reservations**.
-3. Select **Add** to purchase a new reservation and then click **Instance**.
+3. Select **Add** to purchase a new reservation and then select **Instance**.
4. Enter required fields. If you have an EA agreement, you can use the **Add more option** to quickly add additional instances. The option isn't available for other subscription types.
cost-management-billing Reservation Discount Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-databricks.md
Note: enabling Photon will increase the DBU count.
## Determine plan use
-To determine your DBCU plan use, go to the Azure portal > **Reservations** and click the purchased Databricks plan. Your utilization to-date is shown with any remaining units. For more information about determining your reservation use, see the [See reservation usage](reservation-apis.md#see-reservation-usage) article.
+To determine your DBCU plan use, go to the Azure portal > **Reservations** and select the purchased Databricks plan. Your utilization to-date is shown with any remaining units. For more information about determining your reservation use, see the [See reservation usage](reservation-apis.md#see-reservation-usage) article.
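For a quick illustrative read of the numbers (hypothetical values): a plan purchased for 10,000 DBCUs that shows 2,500 units consumed has 7,500 units remaining for the rest of the term.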
## How discount application shows in usage data
cost-management-billing Reservation Renew https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-renew.md
There's no obligation to renew and you can opt out of the renewal at any time be
Go to Azure portal > **Reservations**. 1. Select the reservation.
-2. Click **Renewal**.
+2. Select **Renewal**.
3. Select **Automatically purchase a new reservation upon expiry**. ![Example showing reservation renewal](./media/reservation-renew/reservation-renewal.png)
cost-management-billing Understand Reserved Instance Usage Ea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reserved-instance-usage-ea.md
If you're an EA admin, you can download the CSV file that contains new usage dat
In the Azure portal, navigate to [Cost management + billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts). 1. Select the billing account.
-2. Click **Usage + charges**.
-3. Click **Download**.
+2. Select **Usage + charges**.
+3. Select **Download**.
![Example showing where to Download the CSV usage data file in the Azure portal](./media/understand-reserved-instance-usage-ea/portal-download-csv.png)
-4. In **Download Usage + Charges** , under **Usage Details Version 2** , select **All Charges (usage and purchases)** and then click download. Repeat for **Amortized charges (usage and purchases)**.
+4. In **Download Usage + Charges**, under **Usage Details Version 2**, select **All Charges (usage and purchases)** and then select **Download**. Repeat for **Amortized charges (usage and purchases)**.
## Download usage for your Microsoft Customer Agreement
To view and download usage data for a billing profile, you must be a billing pro
2. Select a billing profile. 3. Select **Invoices**. 4. In the invoice grid, find the row of the invoice corresponding to the usage you want to download.
-5. Click on the ellipsis (`...`) at the end of the row.
+5. Select the ellipsis (`...`) at the end of the row.
6. In the download context menu, select **Azure usage and charges**. ## Common cost and usage tasks
cost-management-billing View Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-reservations.md
If you're a billing administrator, use the following steps to view and manage all re
- If you're a Microsoft Customer Agreement billing profile owner, in the left menu, select **Billing profiles**. In the list of billing profiles, select one. 1. In the left menu, select **Products + services** > **Reservations**. 1. The complete list of reservations for your EA enrollment or billing profile is shown.
-1. Billing administrators can take ownership of a reservation by selecting one or multiple reservations, clicking on **Grant access** and selecting **Grant access** in the window that appears.
+1. Billing administrators can take ownership of a reservation by selecting one or multiple reservations, selecting **Grant access**, and then selecting **Grant access** in the window that appears.
### Add billing administrators
cost-management-billing Download Azure Daily Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-daily-usage.md
To view and download usage data for a billing profile, you must be a billing pro
2. Select a billing profile. 3. Select **Invoices**. 4. In the invoice grid, find the row of the invoice corresponding to the usage you want to download.
-5. Click on the ellipsis (`...`) at the end of the row.
+5. Select the ellipsis (`...`) at the end of the row.
6. In the download context menu, select **Azure usage and charges**. ### Download usage for open charges
You can also download month-to-date usage for the current billing period, meanin
1. Search for **Cost Management + Billing**. 2. Select a billing profile.
-3. In the **Overview** blade, click **Download Azure usage and charges**.
+3. In the **Overview** blade, select **Download Azure usage and charges**.
### Download usage for pending charges
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
Azure Government customers can't request their invoice by email. They can only
1. Select your subscription from the [Subscriptions page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) in the Azure portal. 1. Select **Invoices** from the billing section. ![Screenshot that shows a user selecting invoices option for a subscription](./media/download-azure-invoice/select-subscription-invoice.png)
-1. Select the invoice that you want to download and then click **Download invoices**.
+1. Select the invoice that you want to download and then select **Download invoices**.
![Screenshot that the download option for an MOSP invoice](./media/download-azure-invoice/downloadinvoice-subscription.png)
-1. You can also download a daily breakdown of consumed quantities and charges by clicking the download icon and then clicking **Prepare Azure usage file** button under the usage details section. It may take a few minutes to prepare the CSV file.
+1. You can also download a daily breakdown of consumed quantities and charges by selecting the download icon and then selecting the **Prepare Azure usage file** button under the usage details section. It may take a few minutes to prepare the CSV file.
![Screenshot that shows Download invoice and usage page](./media/download-azure-invoice/usage-and-invoice-subscription.png) For more information about your invoice, see [Understand your bill for Microsoft Azure](../understand/review-individual-bill.md). For help identifying unusual costs, see [Analyze unexpected charges](analyze-unexpected-charges.md).
You must have an account admin role on the support plan subscription to download
1. Select **Invoices** from the left-hand side. 1. Select your support plan subscription. [![Screenshot that shows an MOSP support plan invoice billing profile list](./media/download-azure-invoice/cmb-invoices.png)](./media/download-azure-invoice/cmb-invoices-zoomed-in.png#lightbox)
-1. Select the invoice that you want to download and then click **Download invoices**.
+1. Select the invoice that you want to download and then select **Download invoices**.
![Screenshot that shows the download option for an MOSP support plan invoice ](./media/download-azure-invoice/download-invoice-support-plan.png) ## Allow others to download your subscription invoice
To download an invoice:
3. Select **Invoices** from the left-hand side.
-4. Select your Azure subscription and then click **Allow others to download invoice**.
+4. Select your Azure subscription and then select **Allow others to download invoice**.
[![Screenshot that shows selecting access to invoice](./media/download-azure-invoice/cmb-select-access-to-invoice.png)](./media/download-azure-invoice/cmb-select-access-to-invoice-zoomed-in.png#lightbox)
You must have an owner, contributor, reader, or an invoice manager role on a bil
4. In the invoices table, select the invoice that you want to download.
-5. Click on the **Download invoice pdf** button at the top of the page.
+5. Select **Download invoice pdf** at the top of the page.
[![Screenshot that shows downloading invoice pdf](./media/download-azure-invoice/mca-billingprofile-download-invoice.png)](./media/download-azure-invoice/mca-billingprofile-download-invoice-zoomed-in.png#lightbox)
-6. You can also download your daily breakdown of consumed quantities and estimated charges by clicking **Download Azure usage**. It may take a few minutes to prepare the csv file.
+6. You can also download your daily breakdown of consumed quantities and estimated charges by selecting **Download Azure usage**. It may take a few minutes to prepare the CSV file.
## Get your billing profile's invoice in email
There could be several reasons that you don't see an invoice:
- You have access to the invoice through one of your other accounts.
- - This situation typically happens when you click on a link in the email, asking you to view your invoice in the portal. You click on the link and you see an error message - `We can't display your invoices. Please try again`. Verify that you're signed in with the email address that has permissions to view the invoices.
+ - This situation typically happens when you select a link in an email asking you to view your invoice in the portal. You select the link and you see an error message - `We can't display your invoices. Please try again`. Verify that you're signed in with the email address that has permissions to view the invoices.
- You have access to the invoice through a different identity.
cost-management-billing Mca Download Tax Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-download-tax-document.md
You can download tax documents for your Azure invoice if you have access to invo
1. Depending on your access, you might need to select a Billing account or Billing profile. 1. In the left menu, select **Invoices** under **Billing**. 1. In the invoice grid, find the row of the invoice corresponding to the tax document you want to download.
-1. Click the download icon or the ellipsis (`...`) at the end of the row.
+1. Select the download icon or the ellipsis (`...`) at the end of the row.
7. Select **Tax document** in the download menu. Depending on the country/region of your billing profile, you might see more than one tax document per invoice. ## Check billing account type
cost-management-billing Mosp New Customer Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mosp-new-customer-experience.md
Your new experience includes the following cost management and billing capabilit
4. The table lists Azure subscriptions that you're paying for. In the billing profile column, you would find the billing profile that is billed for the subscription. The subscription charges are displayed on the invoice for the billing profile. To consolidate the charges for all your subscriptions on a single invoice, you need to link all your subscriptions to a single billing profile. :::image type="content" source="./media/mosp-new-customer-experience/list-azure-subscriptions.png" alt-text="Screenshot that shows the list of Azure subscriptions." lightbox="./media/mosp-new-customer-experience/list-azure-subscriptions.png" ::: 5. Pick a billing profile that you want to use.
-6. Select a subscription that is not linked to the billing profile that you chose in step 5. Click on ellipsis (three dots) for the subscription. Select **Change invoice section**.
+6. Select a subscription that is not linked to the billing profile that you chose in step 5. Select the ellipsis (three dots) for the subscription. Select **Change invoice section**.
:::image type="content" source="./media/mosp-new-customer-experience/select-change-invoice-section.png" alt-text="Screenshot that shows where to find the option to change invoice section." lightbox="./media/mosp-new-customer-experience/select-change-invoice-section-zoomed-in.png" ::: 7. Select the billing profile that you chose in step #5. :::image type="content" source="./media/mosp-new-customer-experience/change-invoice-section.png" alt-text="Screenshot that shows how to change invoice section." lightbox="./media/mosp-new-customer-experience/change-invoice-section-zoomed-in.png" :::
After your Azure billing account is updated, you'll get an email from Microsoft
- You have access to perform billing administration through one of your other emails.
- This typically happens when you get an email asking you to accept the terms of the Microsoft Customer Agreement. You click on the link and you see an error message - `You don't have permission to accept the agreement. This typically happens when you sign in with an email, which doesnΓÇÖt have permission to accept the agreement. Check youΓÇÖve signed in with the correct email address. If you are still seeing the error, see Why I can't accept an agreement`. Verify that you're signed in with the email address that has permission to perform billing administration.
+ This typically happens when you get an email asking you to accept the terms of the Microsoft Customer Agreement. You select the link and you see an error message - `You don't have permission to accept the agreement. This typically happens when you sign in with an email, which doesn't have permission to accept the agreement. Check you've signed in with the correct email address. If you are still seeing the error, see Why I can't accept an agreement`. Verify that you're signed in with the email address that has permission to perform billing administration.
- You have access to the invoice through a different identity.
cost-management-billing Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/plan-manage-costs.md
After you have your Azure services running, regularly check costs to track your
Visit the [Cost Management + Billing page in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade).
-Click **Cost analysis** from the left side of the screen to see the current cost broken down by various pivots such as service, location, and subscription. After you add a service or make a purchase, wait 24 hours for the data to display. By default, cost analysis shows the cost for the scope that you are in. For example, in the screenshot below, cost for Contoso billing account is displayed. Use the Scope pill to switch to a different scope in cost analysis. For more information about scopes, see [Understand and work with scopes](../costs/understand-work-scopes.md#scopes)
+Select **Cost analysis** from the left side of the screen to see the current cost broken down by various pivots such as service, location, and subscription. After you add a service or make a purchase, wait 24 hours for the data to display. By default, cost analysis shows the cost for the scope that you are in. For example, in the screenshot below, cost for the Contoso billing account is displayed. Use the Scope pill to switch to a different scope in cost analysis. For more information about scopes, see [Understand and work with scopes](../costs/understand-work-scopes.md#scopes).
![Screenshot of the cost analysis view in Azure portal](./media/plan-manage-costs/cost-analysis.png)
-You can filter by various properties such as tags, resource type, and time span. Click **Add filter** to add the filter for a property and select the values to filter. Select **Export** to export the view to a comma-separated values (.csv) file.
+You can filter by various properties such as tags, resource type, and time span. Select **Add filter** to add the filter for a property and select the values to filter. Select **Export** to export the view to a comma-separated values (.csv) file.
-Additionally, you can click the labels of the chart to see the daily spend history for that label. For example, in the screenshot below, selecting a virtual machine displays the daily cost of running your VMs.
+Additionally, you can select the labels of the chart to see the daily spend history for that label. For example, in the screenshot below, selecting a virtual machine displays the daily cost of running your VMs.
:::image type="content" source="./media/plan-manage-costs/cost-history.png" alt-text="Screenshot of the spend history view in Azure portal" lightbox="./media/plan-manage-costs/cost-history.png" :::
cost-management-billing Review Individual Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-individual-bill.md
It must be more than 30 days from the day that you subscribed to Azure. Azure bi
The first step to compare usage and costs is to download your invoice and usage files. The detailed usage CSV file shows your charges by billing period and daily usage. It doesn't include any tax information. In order to download the files, you must be an account administrator or have the Owner role.
-In the Azure portal, type *subscriptions* in the search box and then click **Subscriptions**.
+In the Azure portal, type *subscriptions* in the search box and then select **Subscriptions**.
[![Navigate to subscriptions](./media/review-individual-bill/navigate-subscriptions.png)](./media/review-individual-bill/navigate-subscriptions.png#lightbox)
-In the list of subscriptions, click the subscription.
+In the list of subscriptions, select the subscription.
-Under **Billing**, click **Invoices**.
+Under **Billing**, select **Invoices**.
-In the list of invoices, look for the one that you want to download then click the download symbol. You might need to change the timespan to view older invoices. It might take a few minutes to generate the usage details file and invoice.
+In the list of invoices, look for the one that you want to download, and then select the download symbol. You might need to change the timespan to view older invoices. It might take a few minutes to generate the usage details file and invoice.
![Screenshot that shows billing periods, the download option, and total charges for each billing period](./media/review-individual-bill/download-invoice.png)
-In the Download Usage + Charges window, click **Download csv** and **Download invoice**.
+In the Download Usage + Charges window, select **Download csv** and **Download invoice**.
![Screenshot that shows Download invoice and usage page](./media/review-individual-bill/usageandinvoice.png)
The summed *Cost* value should match precisely to the *usage charges* cost for t
## Compare billed charges and usage in cost analysis
-Cost analysis in the Azure portal can also help you verify your charges. To get a quick overview of your invoiced usage and charges, select your subscription from the Subscriptions page in the Azure portal. Next, click **Cost analysis** and then in the views list, click **Invoice details**.
+Cost analysis in the Azure portal can also help you verify your charges. To get a quick overview of your invoiced usage and charges, select your subscription from the Subscriptions page in the Azure portal. Next, select **Cost analysis** and then in the views list, select **Invoice details**.
![Example showing Invoice details selection](./media/review-individual-bill/cost-analysis-select-invoice-details.png)
cost-management-billing Understand Azure Marketplace Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-azure-marketplace-charges.md
If you don't have an MCA or MPA, you can pay for your Marketplace invoices in th
>[!NOTE] > You will only see the **Pay now** link if the type of your invoice is **Azure Marketplace and Reservations** and the invoice payment status is due or past due.
-1. In the new page, click the blue **Select payment method** link.
+1. On the new page, select the blue **Select payment method** link.
![screenshot that shows select payment method link selected](./media/understand-azure-marketplace-charges/select-payment-method-pay-now-twd.png)
-1. After selecting a payment method, click the blue **Pay now** button in the bottom left of the page.
+1. After selecting a payment method, select the blue **Pay now** button in the bottom left of the page.
![screenshot that shows pay now button selected](./media/understand-azure-marketplace-charges/pay-now-button-twd.png) ## Change default payment for external services
If you don't have an MCA or MPA, you can pay for your Marketplace invoices in th
When purchasing an external service, you choose an Azure subscription for the resource. The payment method of the selected Azure subscription becomes the payment method for the external service. To change the payment method for an external service, you must [change the payment method of the Azure subscription](../manage/change-credit-card.md) tied to that external service. You can figure out which subscription your external service order is tied to by following these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Click on **All Resources** in the left navigation menu.
+1. Select **All Resources** in the left navigation menu.
![screenshot of all resources menu item](./media/understand-azure-marketplace-charges/all-resources.png) 1. Search for your external service. 1. Look for the name of the subscription in the **Subscription** column. ![screenshot of subscription name for resource](./media/understand-azure-marketplace-charges/sub-selected.png)
-1. Click on the subscription name and [update the active payment method](../manage/change-credit-card.md).
+1. Select the subscription name and [update the active payment method](../manage/change-credit-card.md).
## Cancel an external service order If you want to cancel your external service order, delete the resource in the [Azure portal](https://portal.azure.com). 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Click on **All Resources** in the left navigation menu.
+1. Select **All Resources** in the left navigation menu.
![Screenshot of all resources menu item](./media/understand-azure-marketplace-charges/all-resources.png) 1. Search for your external service. 1. Check the box next to the resource you want to delete.
If you want to cancel your external service order, delete the resource in the [A
![Screenshot of delete button](./media/understand-azure-marketplace-charges/delete-button.png) 1. Type *'Yes'* in the confirmation blade. ![Delete Resource](./media/understand-azure-marketplace-charges/delete-resource.PNG)
-1. Click **Delete**.
+1. Select **Delete**.
## Check billing account type [!INCLUDE [billing-check-account-type](../../../includes/billing-check-mca.md)]
data-factory Concepts Data Flow Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-expression-builder.md
In mapping data flows, expressions can be composed of column values, parameters,
Mapping data flows has built-in functions and operators that can be used in expressions. For a list of available functions, see the [mapping data flow language reference](data-transformation-functions.md).
+### User Defined Functions (Preview)
+Mapping data flows supports the creation and use of user defined functions. To learn how to create and use user defined functions, see [user defined functions](concepts-data-flow-udf.md).
+ #### Address array indexes
+ When dealing with columns or functions that return array types, use brackets (`[]`) to access a specific element. If the index doesn't exist, the expression evaluates to NULL.
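+ For example, a small illustrative expression (the input string is hypothetical); mapping data flow array indexes start at 1:
+
+ ```
+ split('100-200-300', '-')[1]
+ ```
+
+ This returns '100', the first element of the array; an index with no element, such as `[4]`, evaluates to NULL.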
data-factory Concepts Data Flow Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-udf.md
+
+ Title: User defined functions in mapping data flows
+
+description: Learn the concepts of user defined functions in mapping data flow
++++++ Last updated : 04/20/2022++
+# User defined functions in mapping data flow
+
+A user defined function is a customized expression you can define to reuse logic across multiple mapping data flows. User defined functions live in a collection called a data flow library, which makes it easy to group common sets of customized functions.
+
+Whenever you find yourself building the same logic in an expression across multiple mapping data flows, that's a good opportunity to turn it into a user defined function.
+
+## Getting started
+
+To get started with user defined functions, you must first create a data flow library. Navigate to the management page and then find data flow libraries under the author section.
+
+![Screenshot showing the A D F management pane and data flow libraries.](./media/data-flow-udf/data-flow-udf-library.png)
+++
+## Data flow library
+
+From here, you can select the **+ New** button to create a new data flow library. Fill out the name and description, and then you're ready to create your user defined function.
+![Screenshot showing the data flow libraries creation pane.](./media/data-flow-udf/data-flow-udf-library-create.png)
+
+## New user defined function
+
+To create a user defined function, select the **+ New** button from the data flow library you want to create the function in.
+![Screenshot showing the U D F new function button.](./media/data-flow-udf/data-flow-udf-function-new.png)
+
+Fill in the name of your user defined function.
+> [!Note]
+> You cannot use the name of an existing mapping data flow expression. For a list of the current mapping data flow expressions, see [Data transformation expressions in mapping data flow](data-transformation-functions.md).
+
+![Screenshot showing the U D F new function creation pane.](./media/data-flow-udf/data-flow-udf-function-pane.png)
+
+User defined functions can have zero or more arguments. Arguments allow you to pass in values when your function is called and to refer to those arguments in your expression logic. Arguments are automatically named i1, i2, and so on, and you can choose the data type of each argument from the dropdown.
+
+The body of the user defined function is where you specify the logic of your function. The editor provides the full [expression builder](concepts-data-flow-expression-builder.md) experience and allows you to reference the arguments you created and any [data transformation expressions in mapping data flow](data-transformation-functions.md).
+
+> [!Note]
+> A user defined function cannot refer to another user defined function.
+
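+For illustration, suppose you define a hypothetical function named `addTax` with a single double argument. Because arguments are auto-named, the body refers to the first argument as `i1`; a minimal body applying an assumed 8% tax could be:
+
+```
+i1 * 1.08
+```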
+## Using a user defined function in the expression builder
+
+User defined functions will appear in the mapping data flow expression builder under Data flow library functions. From here, you can use the functions you've created and pass in the appropriate arguments (if any) that you've defined.
+
+![Screenshot showing the data flow library in the mapping data flow expression builder.](./media/data-flow-udf/data-flow-udf-expression-builder.png)
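+For example, assuming the hypothetical `addTax` function from earlier, a column named `price`, and that library functions are invoked by name like built-in functions, a derived column expression could be:
+
+```
+addTax(toDouble(price))
+```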
data-factory Connector Salesforce Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-service-cloud.md
The Salesforce connector is built on top of the Salesforce REST/Bulk API. By def
## Prerequisites
-API permission must be enabled in Salesforce. For more information, see [Enable API access in Salesforce by permission set](https://www.data2crm.com/migration/faqs/enable-api-access-salesforce-permission-set/)
+API permission must be enabled in Salesforce.
## Salesforce request limits
To learn details about the properties, check [Lookup activity](control-flow-look
## Next steps
-For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expression-functions.md
In Data Factory and Synapse pipelines, use the expression language of the mappin
| [power](data-flow-expressions-usage.md#power) | Raises one number to the power of another. | | [radians](data-flow-expressions-usage.md#radians) | Converts degrees to radians| | [random](data-flow-expressions-usage.md#random) | Returns a random number given an optional seed within a partition. The seed should be a fixed value and is used with the partitionId to produce random values |
-| [regexExtract](data-flow-expressions-usage.md#regexExtract) | Extract a matching substring for a given regex pattern. The last parameter identifies the match group and is defaulted to 1 if omitted. Use `<regex>`(back quote) to match a string without escaping. |
-| [regexMatch](data-flow-expressions-usage.md#regexMatch) | Checks if the string matches the given regex pattern. Use `<regex>`(back quote) to match a string without escaping. |
-| [regexReplace](data-flow-expressions-usage.md#regexReplace) | Replace all occurrences of a regex pattern with another substring in the given string Use `<regex>`(back quote) to match a string without escaping. |
+| [regexExtract](data-flow-expressions-usage.md#regexExtract) | Extract a matching substring for a given regex pattern. The last parameter identifies the match group and is defaulted to 1 if omitted. Use `` `<regex>` `` (back quote) to match a string without escaping. |
+| [regexMatch](data-flow-expressions-usage.md#regexMatch) | Checks if the string matches the given regex pattern. Use `` `<regex>` `` (back quote) to match a string without escaping. |
+| [regexReplace](data-flow-expressions-usage.md#regexReplace) | Replace all occurrences of a regex pattern with another substring in the given string. Use `` `<regex>` `` (back quote) to match a string without escaping. |
| [regexSplit](data-flow-expressions-usage.md#regexSplit) | Splits a string based on a delimiter based on regex and returns an array of strings. | | [replace](data-flow-expressions-usage.md#replace) | Replace all occurrences of a substring with another substring in the given string. If the last parameter is omitted, it defaults to an empty string. | | [reverse](data-flow-expressions-usage.md#reverse) | Reverses a string. |
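For example, under the behavior described above (the input string is illustrative), both of the following calls return '800'; the back-quoted form avoids escaping the backslashes:

```
regexExtract('Cost is between 600 and 800 dollars', '(\\d+) and (\\d+)', 2)
regexExtract('Cost is between 600 and 800 dollars', `(\d+) and (\d+)`, 2)
```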
data-factory Self Hosted Integration Runtime Auto Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-auto-update.md
Last updated 06/16/2021
This article will describe how to let self-hosted integration runtime auto-update to the latest version and how ADF manages the versions of self-hosted integration runtime.
+## How to check your self-hosted integration runtime version
+You can check the version either in your self-hosted integration runtime client or in the Azure Data Factory portal:
+
## Self-hosted Integration Runtime Auto-update
Generally, when you install a self-hosted integration runtime on your local machine or an Azure VM, you have two options to manage its version: auto-update or manual maintenance. Typically, ADF releases two new versions of the self-hosted integration runtime every month, which include new features, bug fixes, and enhancements. We recommend updating to the newer version to get the latest features and enhancements.
data-factory Data Factory Create Data Factories Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-create-data-factories-programmatically.md
In the walkthrough, you create a data factory with a pipeline that contains a co
The Copy Activity performs the data movement in Azure Data Factory. The activity is powered by a globally available service that can copy data between various data stores in a secure, reliable, and scalable way. See [Data Movement Activities](data-factory-data-movement-activities.md) article for details about the Copy Activity.
+> [!IMPORTANT]
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+ 1. Using Visual Studio 2012/2013/2015, create a C# .NET console application. 1. Launch **Visual Studio** 2012/2013/2015. 2. Click **File**, point to **New**, and click **Project**.
data-factory Data Factory Salesforce Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-salesforce-connector.md
Azure Data Factory currently supports only moving data from Salesforce to [suppo
This connector supports the following editions of Salesforce: Developer Edition, Professional Edition, Enterprise Edition, or Unlimited Edition. And it supports copying from Salesforce production, sandbox and custom domain. ## Prerequisites
-* API permission must be enabled. See [How do I enable API access in Salesforce by permission set?](https://www.data2crm.com/migration/faqs/enable-api-access-salesforce-permission-set/)
+* API permission must be enabled.
* To copy data from Salesforce to on-premises data stores, you must have at least Data Management Gateway 2.0 installed in your on-premises environment. ## Salesforce request limits
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly.
<tr><td><b>Monitoring</b></td><td>Multiple updates to ADF monitoring experiences</td><td>New updates to the monitoring experience in Azure Data Factory including the ability to export results to a CSV, clear all filters, open a run in a new tab, and improved caching of columns and results.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531">Learn more</a></td></tr>
-<tr><td><b>Orchestration</b></td><td>Azure Functions available in ADF managed virtual network</td><td>Now managed private endpoints for Azure Functions are available in Azure Data Factory managed virtual network. So you can leverage private link and secure the communications to Azure Functions during the orchestration.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-functions-available-in-adf-managed-virtual-network/ba-p/3298383">Learn more</a></td></tr>
- <tr><td><b>User Interface</b></td><td>New Regional format support</td><td>Choose your language and the regional format that will influence how data such as dates and times appear in the Azure Data Factory Studio monitoring. These language and regional settings affect only the Azure Data Factory Studio user interface and do not change/modify your actual data.</td></tr> </table>
event-grid Handler Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-event-hubs.md
See the following examples:
## Delivery properties Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that are required by a destination. You can set custom headers on the events that are delivered to Azure Event Hubs.
-If you need to publish events to a specific partition within an event hub, set the `ParitionKey` property on your event subscription to specify the partition key that identifies the target event hub partition.
+If you need to publish events to a specific partition within an event hub, set the `PartitionKey` property on your event subscription to specify the partition key that identifies the target event hub partition.
| Header name | Header type | | :-- | :-- |
-|`PartitionKey` | Static |
+|`PartitionKey` | Static or dynamic |
For more information, see [Custom delivery properties](delivery-properties.md).
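As an illustrative sketch, a static `PartitionKey` mapping on an event subscription's Event Hubs destination might look like the following in an ARM/REST payload (the property shapes are assumed from the Event Grid delivery attribute mapping API; the resource IDs and key value are hypothetical):

```json
{
  "properties": {
    "destination": {
      "endpointType": "EventHub",
      "properties": {
        "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<event-hub>",
        "deliveryAttributeMappings": [
          {
            "name": "PartitionKey",
            "type": "Static",
            "properties": {
              "value": "my-partition-key",
              "isSecret": false
            }
          }
        ]
      }
    }
  }
}
```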
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
Each virtual network can have only one virtual network gateway per gateway type.
## <a name="gwsku"></a>Gateway SKUs [!INCLUDE [expressroute-gwsku-include](../../includes/expressroute-gwsku-include.md)]
-If you want to upgrade your gateway to a more powerful gateway SKU, you can use the 'Resize-AzVirtualNetworkGateway' PowerShell cmdlet or perform the upgrade directly in the ExpressRoute virtual network gateway configuration blade in the Azure Portal. The following upgrades are supported:
+If you want to upgrade your gateway to a more powerful gateway SKU, you can use the `Resize-AzVirtualNetworkGateway` PowerShell cmdlet (a sketch follows the list below) or perform the upgrade directly in the ExpressRoute virtual network gateway configuration blade in the Azure portal. The following upgrades are supported:
- Standard to High Performance - Standard to Ultra Performance
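For example, a hedged PowerShell sketch of such an upgrade (the gateway and resource group names are hypothetical):

```powershell
# Fetch the existing ExpressRoute virtual network gateway, then resize it to a larger SKU
$gw = Get-AzVirtualNetworkGateway -Name "myErGateway" -ResourceGroupName "myResourceGroup"
Resize-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -GatewaySku "HighPerformance"
```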
For additional technical resources and specific syntax requirements when using R
| [PowerShell](/powershell/module/servicemanagement/azure.service/#azure) |[PowerShell](/powershell/module/az.network#networking) | | [REST API](/previous-versions/azure/reference/jj154113(v=azure.100)) |[REST API](/rest/api/virtual-network/) |
+## VNet-to-VNet connectivity
+
+By default, connectivity between virtual networks is enabled when you link multiple virtual networks to the same ExpressRoute circuit. However, Microsoft advises against using your ExpressRoute circuit for communication between virtual networks; use [VNet peering](../virtual-network/virtual-network-peering-overview.md) instead. For more information about why VNet-to-VNet connectivity is not recommended over ExpressRoute, see [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
+ ## Next steps For more information about available connection configurations, see [ExpressRoute Overview](expressroute-introduction.md).
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
In this tutorial, you learn how to:
* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add additional address spaces, up to 1,000, to the local or peered virtual networks.
+* Review guidance for [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
+ * You can [view a video](https://azure.microsoft.com/documentation/videos/azure-expressroute-how-to-create-a-connection-between-your-vpn-gateway-and-expressroute-circuit) before beginning to better understand the steps. ## Connect a VNet to a circuit - same subscription
expressroute Virtual Network Connectivity Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/virtual-network-connectivity-guidance.md
+
+ Title: 'Connectivity between virtual networks over ExpressRoute'
+description: This article explains why virtual network peering is the recommended solution for VNet to VNet connectivity when using ExpressRoute.
++++ Last updated : 05/05/2022+++
+# Connectivity between virtual networks over ExpressRoute
+
+## Overview
+
+ExpressRoute private peering supports connectivity between multiple virtual networks. To achieve this connectivity, an ExpressRoute virtual network gateway is deployed into each virtual network, and a connection is created between the gateway and the ExpressRoute circuit. When this connection is established, connectivity to virtual machines (VMs) and private endpoints is enabled from on-premises. When multiple virtual networks are linked to an ExpressRoute circuit, VNet to VNet connectivity is enabled. Although this behavior happens by default when linking virtual networks to the same ExpressRoute circuit, Microsoft doesn't recommend this solution. To establish connectivity between virtual networks, implement VNet peering instead for the best possible performance. For more information, see [About Virtual Network Peering](../virtual-network/virtual-network-peering-overview.md) and [Manage VNet peering](../virtual-network/virtual-network-manage-peering.md).
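+As a minimal sketch of the recommended alternative (assuming the Azure CLI; the resource names are hypothetical, and a matching peering must also be created in the reverse direction):
+
+```bash
+az network vnet peering create --resource-group myResourceGroup --name vnetA-to-vnetB --vnet-name vnetA --remote-vnet vnetB --allow-vnet-access
+```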
+
+## Limitations
+
+Even though ExpressRoute supports virtual network to virtual network connectivity, there are two main limitations that make it a less than ideal choice compared to VNet peering.
+
+### ExpressRoute virtual network gateway in the data path
+
+Connections from a virtual network to an ExpressRoute circuit are established by deploying a virtual network gateway. The gateway facilitates the management plane and data path connectivity to virtual machines (VMs) and private endpoints defined in a virtual network. These gateway resources have bandwidth, connections-per-second, and packets-per-second limitations. For more information about these limitations, see [About ExpressRoute gateways](expressroute-about-virtual-network-gateways.md). When virtual network to virtual network connectivity goes through ExpressRoute, the virtual network gateway can become a bottleneck in terms of bandwidth and data path or control plane limitations. When you configure virtual network peering, the virtual network gateway isn't in the data path, so you won't experience the limitations seen with VNet to VNet connectivity going through ExpressRoute.
+
+### Higher latency
+
+ExpressRoute connectivity is managed by a pair of Microsoft Enterprise Edge (MSEE) devices located at [ExpressRoute peering locations](expressroute-locations-providers.md#expressroute-locations). ExpressRoute peering locations are physically separate from Azure regions. When virtual network to virtual network connectivity is enabled using ExpressRoute, traffic from the virtual network leaves the origin Azure region and passes through the MSEE devices at the peering location. Then that traffic goes through Microsoft's global network to reach the destination Azure region. With VNet peering, traffic flows from the origin Azure region directly to the destination Azure region over Microsoft's global network, without the extra hop of the MSEE devices. Since the extra hop is no longer in the data path, you'll see lower latency and an overall better experience with your applications and network traffic.
+
+## Next steps
+
+* Learn more about [Designing for high availability](designing-for-high-availability-with-expressroute.md).
+* Plan for [Disaster recovery](designing-for-disaster-recovery-with-expressroute-privatepeering.md) and [using VPN as a backup](use-s2s-vpn-as-backup-for-expressroute-privatepeering.md).
frontdoor Front Door Traffic Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-traffic-acceleration.md
na Previously updated : 01/27/2022 Last updated : 05/05/2022 zone_pivot_groups: front-door-tiers
Front Door optimizes the traffic path from the end user to the origin server. Th
::: zone pivot="front-door-classic"
-Front Door optimizes the traffic path from the end user to the backend server. This article describes how traffic is routed from the user to Front Door and to the backend.
+Front Door optimizes the traffic path from the end user to the backend server. This article describes how traffic is routed from the user to Front Door and from Front Door to the backend.
::: zone-end
governance Built In Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-packages.md
Each row represents a package used by a built-in policy definition.
- **Definition**: Links to the policy definition in the Azure portal. - **Configuration**: Links to the `.mof` file in the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy) containing the configuration that is used to audit and/or remediate machines.-- **Required modules**: Links to the [PowerShell Desired State Configuration (DSC)](https://docs.microsoft.com/powershell/dsc/overview?view=dsc-1.1)
+- **Required modules**: Links to the [PowerShell Desired State Configuration (DSC)](/powershell/dsc/overview?view=dsc-1.1&preserve-view=true)
modules used by each configuration. The resource modules contain the script logic used to evaluate each setting in the configuration.
hdinsight Apache Hadoop Connect Hive Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-hive-power-bi.md
description: Learn how to use Microsoft Power BI to visualize Hive data processe
Previously updated : 04/24/2020 Last updated : 05/06/2022 # Visualize Apache Hive data with Microsoft Power BI using ODBC in Azure HDInsight
In this article, you learned how to visualize data from HDInsight using Power BI
* [Connect Excel to HDInsight with the Microsoft Hive ODBC Driver](./apache-hadoop-connect-excel-hive-odbc-driver.md). * [Connect Excel to Apache Hadoop by using Power Query](apache-hadoop-connect-excel-power-query.md).
-* [Visualize Interactive Query Apache Hive data with Microsoft Power BI using direct query](../interactive-query/apache-hadoop-connect-hive-power-bi-directquery.md)
+* [Visualize Interactive Query Apache Hive data with Microsoft Power BI using direct query](../interactive-query/apache-hadoop-connect-hive-power-bi-directquery.md)
hdinsight Apache Hadoop On Premises Migration Best Practices Data Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-data-migration.md
Previously updated : 11/22/2019 Last updated : 05/06/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - data migration best practices
The hive metastore can be migrated either by using the scripts or by using the D
Read the next article in this series: -- [Security and DevOps best practices for on-premises to Azure HDInsight Hadoop migration](apache-hadoop-on-premises-migration-best-practices-security-devops.md)
+- [Security and DevOps best practices for on-premises to Azure HDInsight Hadoop migration](apache-hadoop-on-premises-migration-best-practices-security-devops.md)
hdinsight Apache Hbase Migrate New Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-migrate-new-version.md
For more information about HDInsight versions and compatibility, see [Azure HDIn
## Apache HBase cluster migration overview
-To upgrade your Apache HBase cluster on Azure HDInsight, you complete the following basic steps. For detailed instructions, see the detailed steps and commands.
+To upgrade your Apache HBase cluster on Azure HDInsight, complete the following basic steps. For detailed instructions, see the detailed steps and commands, or use the scripts from the section [Migrate HBase using scripts](#migrate-hbase-using-scripts) for automated migration.
Prepare the source cluster: 1. Stop data ingestion.
sudo -u hbase hdfs dfs -Dfs.azure.page.blob.dir="/hbase-wals" -cp <source-contai
1. If the destination cluster is satisfactory, delete the source cluster.
+## Migrate HBase using scripts
+
+1. Execute the script [migrate-hbase-source.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/migrate-hbase-source.sh) on the source cluster and [migrate-hbase-dest.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/migrate-hbase-dest.sh) on the destination cluster. Use the following instructions to execute these scripts.
+ > [!NOTE]
+ > These scripts don't copy the old HBase WALs as part of the migration; therefore, they shouldn't be used on clusters that have either the HBase Backup or Replication feature enabled.
+
+2. On source cluster
+ ```bash
+ sudo bash migrate-hbase-source.sh
+ ```
+
+3. On destination cluster
+ ```bash
+ sudo bash migrate-hbase-dest.sh -f <src_default_Fs>
+ ```
+
+Mandatory argument for the above command:
+
+```
+ -f, --src-fs
+ The fs.defaultFS of the source cluster
+ For example:
+ -f wasb://anynamehbase0316encoder-2021-03-17t01-07-55-935z@anynamehbase0hdistorage.blob.core.windows.net
+```
+
+ ## Next steps To learn more about [Apache HBase](https://hbase.apache.org/) and upgrading HDInsight clusters, see the following articles:
hdinsight Hdinsight Use External Metadata Stores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-external-metadata-stores.md
description: Use external metadata stores with Azure HDInsight clusters.
Previously updated : 04/01/2022 Last updated : 05/05/2022 # Use external metadata stores in Azure HDInsight
There are two ways you can set up a metastore for your HDInsight clusters:
## Default metastore
+> [!IMPORTANT]
+> The default metastore provides a basic-tier Azure SQL Database with only **5 DTU and a 2 GB max data size (not upgradeable)**. Use it for QA and testing purposes only. **For production or large workloads, we recommend migrating to an external metastore.**
+ By default, HDInsight creates a metastore with every cluster type. You can instead specify a custom metastore. The default metastore includes the following considerations: * No additional cost. HDInsight creates a metastore with every cluster type without any additional cost to you.
By default, HDInsight creates a metastore with every cluster type. You can inste
* Default metastore is recommended only for simple workloads. Workloads that don't require multiple clusters and don't need metadata preserved beyond the cluster's lifecycle.
-> [!IMPORTANT]
-> The default metastore provides an Azure SQL Database with a **basic tier 5 DTU limit (not upgradeable)**! Suitable for basic testing purposes. For large or production workloads, we recommend migrating to an external metastore.
- ## Custom metastore HDInsight also supports custom metastores, which are recommended for production clusters:
-* You specify your own Azure SQL Database as the metastore.
+* You specify your own **Azure SQL Database** as the metastore.
* The lifecycle of the metastore isn't tied to a clusters lifecycle, so you can create and delete clusters without losing metadata. Metadata such as your Hive schemas will persist even after you delete and re-create the HDInsight cluster.
Create or have an existing Azure SQL Database before setting up a custom Hive me
While creating the cluster, HDInsight service needs to connect to the external metastore and verify your credentials. Configure Azure SQL Database firewall rules to allow Azure services and resources to access the server. Enable this option in the Azure portal by selecting **Set server firewall**. Then select **No** underneath **Deny public network access**, and **Yes** underneath **Allow Azure services and resources to access this server** for Azure SQL Database. For more information, see [Create and manage IP firewall rules](/azure/azure-sql/database/firewall-configure#use-the-azure-portal-to-manage-server-level-ip-firewall-rules)
-Private endpoints for SQL stores is only supported on the clusters created with `outbound` ResourceProviderConnection. To learn more, see this [documentationa](./hdinsight-private-link.md).
+Private endpoints for SQL stores are only supported on clusters created with the `outbound` ResourceProviderConnection. To learn more, see this [documentation](./hdinsight-private-link.md).
:::image type="content" source="./media/hdinsight-use-external-metadata-stores/configure-azure-sql-database-firewall1.png" alt-text="set server firewall button":::
You can point your cluster to a previously created Azure SQL Database at any tim
:::image type="content" source="./media/hdinsight-use-external-metadata-stores/azure-portal-cluster-storage-metastore.png" alt-text="HDInsight Hive Metadata Store Azure portal":::
-## Hive metastore guidelines
+## Apache Hive metastore guidelines
> [!NOTE] > Use a custom metastore whenever possible, to help separate compute resources (your running cluster) and metadata (stored in the metastore). Start with the S2 tier, which provides 50 DTU and 250 GB of storage. If you see a bottleneck, you can scale the database up.
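For illustration, a hedged Azure CLI sketch for creating an S2-tier database to use as a custom metastore (the server, resource group, and database names are hypothetical):

```bash
az sql db create --resource-group myResourceGroup --server my-sql-server --name hivemetastore --service-objective S2
```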
You can point your cluster to a previously created Azure SQL Database at any tim
* In HDInsight 4.0, if you would like to share the metastore between Hive and Spark, you can do so by changing the property `metastore.catalog.default` to `hive` in your Spark cluster. You can find this property in Ambari under **Advanced spark2-hive-site-override**. It's important to understand that sharing the metastore only works for external Hive tables; it won't work if you have internal/managed Hive tables or ACID tables.
-### Updating the custom Hive metastore password
+## Updating the custom Hive metastore password
When using a custom Hive metastore database, you have the ability to change the SQL DB password. If you change the password for the custom metastore, the Hive services will not work until you update the password in the HDInsight cluster. To update the Hive metastore password:
Apache Oozie is a workflow coordination system that manages Hadoop jobs. Oozie s
For instructions on creating an Oozie metastore with Azure SQL Database, see [Use Apache Oozie for workflows](hdinsight-use-oozie-linux-mac.md).
-### Updating the custom Oozie metastore password
+## Updating the custom Oozie metastore password
When using a custom Oozie metastore database, you have the ability to change the SQL DB password. If you change the password for the custom metastore, the Oozie services will not work until you update the password in the HDInsight cluster. To update the Oozie metastore password:
hdinsight Apache Kafka Auto Create Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-auto-create-topics.md
Previously updated : 04/28/2020 Last updated : 05/06/2022 # How to configure Apache Kafka on HDInsight to automatically create topics
hdinsight Apache Spark Jupyter Notebook Install Locally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-notebook-install-locally.md
description: Learn how to install Jupyter Notebook locally on your computer and
Previously updated : 04/23/2020 Last updated : 05/06/2022 # Install Jupyter Notebook on your computer and connect to Apache Spark on HDInsight
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Azure Health Data Services is a set of managed API services based on open standa
|Feature | Related information | | : | -: |
-|FHIRPath Patch |This new feature enables you to use the FHIRPath Patch operation on FHIR resources. For more information, see [FHIR Rest API capabilities for Azure Health Data Services FHIR service](./../healthcare-apis/fhir/fhir-rest-api-capabilities.md). |
+|FHIRPath Patch |This new feature enables you to use the FHIRPath Patch operation on FHIR resources. For more information, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](./../healthcare-apis/fhir/fhir-rest-api-capabilities.md). |
### **Bug fixes**
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
Last updated 08/31/2021
-+ # Azure IoT Central architecture
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
Last updated 12/27/2021
+
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
Last updated 12/21/2021
+ # This article applies to operators, builders, and administrators.
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
Last updated 01/04/2022
-+ # This article applies to administrators.
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
Last updated 03/02/2022
-++ # This article applies to device developers.
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-operator.md
Last updated 04/07/2022
-+ # Device groups, jobs, use dashboards and create personal dashboards # This article applies to operators.
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Last updated 01/04/2022
-+ # This article applies to solution builders.
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
Last updated 01/13/2022
-+ # Quickstart - Use your smartphone as a device to send telemetry to an IoT Central application
iot-develop Troubleshoot Embedded Device Quickstarts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/troubleshoot-embedded-device-quickstarts.md
Failed to publish temperature
* Confirm that the *Pricing and scale tier* is one of *Free* or *Standard*. **Basic is not supported** as it doesn't support cloud-to-device and device twin communication.
+## Issue: Extra messages sent when connecting to IoT Central or IoT Hub
+
+### Description
+
+Because the [Defender for IoT module](/defender-for-iot/device-builders/iot-security-azure-rtos) is enabled by default on the device, you might observe extra messages caused by it.
+
+### Resolution
+
+* To disable it, define `NX_AZURE_DISABLE_IOT_SECURITY_MODULE` in the NetX Duo header file `nx_port.h`.
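+For example, a minimal sketch of that change in `nx_port.h` (placement within the file is up to your port):
+
+```c
+/* Disable the Defender for IoT security module so the device stops emitting its extra messages */
+#ifndef NX_AZURE_DISABLE_IOT_SECURITY_MODULE
+#define NX_AZURE_DISABLE_IOT_SECURITY_MODULE
+#endif
+```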
+ ## Next steps If after reviewing the issues in this article, you still can't monitor your device in a terminal or connect to Azure IoT, there might be an issue with your device's hardware or physical configuration. See the manufacturer's page for your device to find documentation and support options.
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-collect-and-transport-metrics.md
You can remotely monitor your IoT Edge fleet using Azure Monitor and built-in metrics integration. To enable this capability on your device, add the metrics-collector module to your deployment and configure it to collect and transport module metrics to Azure Monitor.
+To configure monitoring on your IoT Edge device, follow the [Tutorial: Monitor IoT Edge devices](tutorial-monitor-with-workbooks.md). You'll learn how to add the metrics-collector module to your device. Otherwise, the information in this article (Collect and transport metrics) gives you an overview of the monitoring architecture and explains your options for when it's time to configure metrics on your device.
+ > [!VIDEO https://aka.ms/docs/player?id=94a7d988-4a35-4590-9dd8-a511cdd68bee] <a href="/_themes/docs.theme/master/_themes/global/video-embed.html?id=94a7d988-4a35-4590-9dd8-a511cdd68bee" target="_blank">IoT Edge integration with Azure Monitor</a>(4:06)
iot-edge How To Create Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-iot-edge-device.md
For the latest information about which operating systems are currently supported
For Linux devices, the IoT Edge runtime is installed directly on the host device.
-IoT Edge supports X64, ARM32, and ARM64 Linux devices. Microsoft provides official installation packages for Ubuntu and Raspberry Pi OS Stretch operating systems.
+IoT Edge supports X64, ARM32, and ARM64 Linux devices. Microsoft provides official installation packages for a variety of operating systems.
### Linux containers on Windows
IoT Edge for Linux on Windows is the recommended way to run IoT Edge on Windows
For Windows devices, the IoT Edge runtime is installed directly on the host device. This platform allows you to build, deploy, and run your IoT Edge modules as Windows containers. > [!NOTE]
- > Windows containers are not the recommended way to run IoT Edge on Windows devices, as they are not supported beyond version 1.1 of Azure IoT Edge.
+ > Windows containers aren't the recommended way to run IoT Edge on Windows devices, as they aren't supported beyond version 1.1 of Azure IoT Edge.
> > Consider using IoT Edge for Linux on Windows, which will be supported in future versions. :::moniker-end <!--1.2--> :::moniker range=">=iotedge-2020-11"
-IoT Edge version 1.2 does not support Windows containers. Windows containers will not be supported beyond version 1.1. To learn more about IoT Edge with Windows containers, see the [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true) version of this article.
+IoT Edge version 1.2 doesn't support Windows containers. Windows containers won't be supported beyond version 1.1. To learn more about IoT Edge with Windows containers, see the [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true) version of this article.
:::moniker-end ## Choose how to provision your devices
The options available for authenticating communications between your IoT Edge de
Single device provisioning refers to provisioning an IoT Edge device without the assistance of the [IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md) (DPS). You'll see single device provisioning also referred to as **manual provisioning**.
-Using single device provisioning, you will need to manually enter provisioning information, like a connection string, on your devices. Manual provisioning is quick and easy to set up for only a few devices, but your workload will increase with the number of devices. Keep this is mind when you are considering the scalability of your solution.
+Using single device provisioning, you'll need to manually enter provisioning information, like a connection string, on your devices. Manual provisioning is quick and easy to set up for only a few devices, but your workload will increase with the number of devices. Keep this in mind when you're considering the scalability of your solution.
**Symmetric key** and **X.509 self-signed** authentication methods are available for manual provisioning. You can read more about those options in the [Choose an authentication method section](#choose-an-authentication-method).
Using single device provisioning, you will need to manually enter provisioning i
Provisioning devices at-scale refers to provisioning one or more IoT Edge devices with the assistance of the [IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md). You'll see provisioning at-scale also referred to as **autoprovisioning**.
-If your IoT Edge solution requires more than one device, autoprovisioning using DPS saves you the effort of manually entering provisioning information into the configuration files of each device you want to use. This automated model can be scaled to millions of IoT Edge devices. You can see the automated provisioning flow in the [Behind the scenes section of IoT Hub DPS overview page](../iot-dps/about-iot-dps.md#behind-the-scenes).
+If your IoT Edge solution requires more than one device, autoprovisioning using DPS saves you the effort of manually entering provisioning information into the configuration files of each device. This automated model can be scaled to millions of IoT Edge devices. You can see the automated provisioning flow in the [Behind the scenes section of IoT Hub DPS overview page](../iot-dps/about-iot-dps.md#behind-the-scenes).
You can secure your IoT Edge solution with the authentication method of your choice. **Symmetric key**, **X.509 certificates**, and **trusted platform module (TPM) attestation** authentication methods are available for provisioning devices at-scale. You can read more about those options in the [Choose an authentication method section](#choose-an-authentication-method).
To see more of the features of DPS, see the [Features section of the overview pa
### Symmetric keys attestation
-Symmetric key attestation is a simple approach to authenticating a device. This attestation method represents a "Hello world" experience for developers who are new to device provisioning, or do not have strict security requirements.
+Symmetric key attestation is a simple approach to authenticating a device. This attestation method represents a "Hello world" experience for developers who are new to device provisioning, or don't have strict security requirements.
When you create a new device identity in IoT Hub, the service creates two keys. You place one of the keys on the device, and it presents the key to IoT Hub when authenticating.
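As an illustrative sketch (assuming the Azure CLI with the azure-iot extension; the hub and device names are hypothetical), creating a symmetric-key IoT Edge device identity and retrieving its connection string:

```bash
# Create an edge-enabled device identity; IoT Hub generates its two symmetric keys
az iot hub device-identity create --hub-name myhub --device-id mydevice --edge-enabled
# Retrieve the connection string to place on the device
az iot hub device-identity connection-string show --hub-name myhub --device-id mydevice
```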
This authentication method is more secure than symmetric keys and is recommended
Using TPM attestation is the most secure method for device provisioning, as it provides authentication features in both software and hardware. Each TPM chip uses a unique endorsement key to verify its authenticity.
-TPM attestation is only available for provisioning at-scale with DPS, and only supports individual enrollments not group enrollments. Group enrollments are not available because of the device-specific nature of TPM.
+TPM attestation is only available for provisioning at-scale with DPS, and it only supports individual enrollments, not group enrollments. Group enrollments aren't available because of the device-specific nature of TPM.
TPM 2.0 is required when you use TPM attestation with the device provisioning service.
iot-edge How To Create Test Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-test-certificates.md
description: Create test certificates and learn how to install them on an Azure
Previously updated : 01/03/2022 Last updated : 05/05/2022
IoT Edge devices require certificates for secure communication between the runti
If you don't have a certificate authority to create the required certificates, you can use demo certificates to try out IoT Edge features in your test environment. This article describes the functionality of the certificate generation scripts that IoT Edge provides for testing.
-These certificates expire in 30 days, and should not be used in any production scenario.
+> [!WARNING]
+> These certificates expire in 30 days, and should not be used in any production scenario.
-You can create certificates on any machine, and then copy them over to your IoT Edge device.
-It's easier to use your primary machine to create the certificates rather than generating them on your IoT Edge device itself.
-By using your primary machine, you can set up the scripts once and then use them to create certificates for multiple devices.
-
-Follow these steps to create demo certificates for testing your IoT Edge scenario:
-
-1. [Set up scripts](#set-up-scripts) for certificate generation on your device.
-2. [Create the root CA certificate](#create-root-ca-certificate) that you use to sign all the other certificates for your scenario.
-3. Generate the certificates you need for the scenario you want to test:
- * [Create IoT Edge device identity certificates](#create-iot-edge-device-identity-certificates) for provisioning devices with X.509 certificate authentication, either manually or with the IoT Hub Device Provisioning Service.
- * [Create IoT Edge CA certificates](#create-iot-edge-ca-certificates) for IoT Edge devices in gateway scenarios.
- * [Create downstream device certificates](#create-downstream-device-certificates) for authenticating downstream devices in a gateway scenario.
+You can create certificates on any machine and then copy them over to your IoT Edge device, or generate the certificates directly on the IoT Edge device.
## Prerequisites A development machine with Git installed.
-## Set up scripts
+## Download test certificate scripts and set up a working directory
The IoT Edge repository on GitHub includes certificate generation scripts that you can use to create demo certificates. This section provides instructions for preparing the scripts to run on your computer, either on Windows or Linux.
-If you're on a Linux machine, skip ahead to [Set up on Linux](#set-up-on-linux).
-### Set up on Windows
+# [Windows](#tab/windows)
To create demo certificates on a Windows device, you need to install OpenSSL and then clone the generation scripts and set them up to run locally in PowerShell.
In this section, you clone the IoT Edge repo and execute the scripts.
git clone https://github.com/Azure/iotedge.git ```
-3. Navigate to the directory in which you want to work. Throughout this article, we'll call this directory *\<WRKDIR>*. All certificates and keys will be created in this working directory.
-
-4. Copy the configuration and script files from the cloned repo into your working directory.
-
+2. Create a directory in which you want to work and copy the certificate scripts there. All certificate and key files will be created in this directory.
+
```powershell
- copy <path>\iotedge\tools\CACertificates\*.cnf .
- copy <path>\iotedge\tools\CACertificates\ca-certs.ps1 .
+ mkdir wrkdir
+ cd .\wrkdir\
+ cp ..\iotedge\tools\CACertificates\*.cnf .
+ cp ..\iotedge\tools\CACertificates\ca-certs.ps1 .
``` If you downloaded the repo as a ZIP, then the folder name is `iotedge-master` and the rest of the path is the same.
-5. Enable PowerShell to run the scripts.
+3. Enable PowerShell to run the scripts.
```powershell Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser ```
-6. Bring the functions used by the scripts into PowerShell's global namespace.
+4. Bring the functions used by the scripts into PowerShell's global namespace.
```powershell . .\ca-certs.ps1
In this section, you clone the IoT Edge repo and execute the scripts.
The PowerShell window will display a warning that the certificates generated by this script are only for testing purposes, and should not be used in production scenarios.
-7. Verify that OpenSSL has been installed correctly and make sure that there won't be name collisions with existing certificates. If there are problems, the script output should describe how to fix them on your system.
+5. Verify that OpenSSL has been installed correctly and make sure that there won't be name collisions with existing certificates. If there are problems, the script output should describe how to fix them on your system.
```powershell Test-CACertsPrerequisites ```
-### Set up on Linux
+# [Linux](#tab/linux)
To create demo certificates on a Linux device, you need to clone the generation scripts and set them up to run locally in bash.
To create demo certificates on a Linux device, you need to clone the generation
git clone https://github.com/Azure/iotedge.git ```
-2. Navigate to the directory in which you want to work. We'll refer to this directory throughout the article as *\<WRKDIR>*. All certificate and key files will be created in this directory.
+2. Create a directory in which you want to work and copy the certificate scripts there. All certificate and key files will be created in this directory.
-3. Copy the config and script files from the cloned IoT Edge repo into your working directory.
- ```bash
- cp <path>/iotedge/tools/CACertificates/*.cnf .
- cp <path>/iotedge/tools/CACertificates/certGen.sh .
+ ```bash
+ mkdir wrkdir
+ cd wrkdir
+ cp ../iotedge/tools/CACertificates/*.cnf .
+ cp ../iotedge/tools/CACertificates/certGen.sh .
``` <!--
To create demo certificates on a Linux device, you need to clone the generation
``` -->
+
## Create root CA certificate
The root CA certificate is used to make all the other demo certificates for testing an IoT Edge scenario.
If you already have one root CA certificate in your working folder, don't create
The new root CA certificate will overwrite the old, and any downstream certificates made from the old one will stop working. If you want multiple root CA certificates, be sure to manage them in separate folders.
-Before proceeding with the steps in this section, follow the steps in the [Set up scripts](#set-up-scripts) section to prepare a working directory with the demo certificate generation scripts.
-
-### Windows
+# [Windows](#tab/windows)
-1. Navigate to the working directory where you placed the certificate generation scripts.
+1. Navigate to the working directory `wrkdir` where you placed the certificate generation scripts.
1. Create the root CA certificate and have it sign one intermediate certificate. The certificates are all placed in your working directory.
Before proceeding with the steps in this section, follow the steps in the [Set u
This script command creates several certificate and key files, but when articles ask for the **root CA certificate**, use the following file:
- * `<WRKDIR>\certs\azure-iot-test-only.root.ca.cert.pem`
-
- This certificate is required before you can create more certificates for your IoT Edge devices and leaf devices as described in the next sections.
+ `certs\azure-iot-test-only.root.ca.cert.pem`
-### Linux
+# [Linux](#tab/linux)
-1. Navigate to the working directory where you placed the certificate generation scripts.
+1. Navigate to the working directory `wrkdir` where you placed the certificate generation scripts.
1. Create the root CA certificate and one intermediate certificate.
Before proceeding with the steps in this section, follow the steps in the [Set u
This script command creates several certificate and key files, but when articles ask for the **root CA certificate**, use the following file:
- * `<WRKDIR>/certs/azure-iot-test-only.root.ca.cert.pem`
+ `certs/azure-iot-test-only.root.ca.cert.pem`
+++
+This certificate is required before you can create more certificates for your IoT Edge devices and leaf devices as described in the next sections.
-## Create IoT Edge device identity certificates
+## Create identity certificate for the IoT Edge device
-Device identity certificates are used to provision IoT Edge devices if you choose to use X.509 certificate authentication. These certificates work whether you use manual provisioning or automatic provisioning through the Azure IoT Hub Device Provisioning Service (DPS).
+Device identity certificates are used to provision IoT Edge devices if you choose to use X.509 certificate authentication. These certificates work whether you use manual provisioning or automatic provisioning through the Azure IoT Hub Device Provisioning Service (DPS). If you use **symmetric key** authentication with IoT Hub or DPS, these certificates aren't needed.
Device identity certificates go in the **Provisioning** section of the config file on the IoT Edge device.
-Before proceeding with the steps in this section, follow the steps in the [Set up scripts](#set-up-scripts) and [Create root CA certificate](#create-root-ca-certificate) sections.
+# [Windows](#tab/windows)
+
+1. Navigate to the working directory `wrkdir` that has the certificate generation scripts and root CA certificate.
-### Windows
+1. Create the IoT Edge device identity certificate and private key with the following command:
-Create the IoT Edge device identity certificate and private key with the following command:
+ ```powershell
+ New-CACertsEdgeDeviceIdentity "<device-id>"
+ ```
+
+ The name that you pass in to this command is the device ID for the IoT Edge device in IoT Hub.
-```powershell
-New-CACertsEdgeDeviceIdentity "<name>"
-```
+1. The new device identity command creates several certificate and key files:
-The name that you pass in to this command will be the device ID for the IoT Edge device in IoT Hub.
+ * `certs\iot-edge-device-identity-<device-id>-full-chain.cert.pem`
+ * `certs\iot-edge-device-identity-<device-id>.cert.pem`
+ * `private\iot-edge-device-identity-<device-id>.key.pem`
-The new device identity command creates several certificate and key files, including three that you'll use when creating an individual enrollment in DPS and installing the IoT Edge runtime:
+Then, follow these instructions depending on your method for provisioning:
-* `<WRKDIR>\certs\iot-edge-device-identity-<name>-full-chain.cert.pem`
-* `<WRKDIR>\certs\iot-edge-device-identity-<name>.cert.pem`
-* `<WRKDIR>\private\iot-edge-device-identity-<name>.key.pem`
+- To provision the IoT Edge device to IoT Hub, take the thumbprint of `iot-edge-device-identity-<device-id>.cert.pem` and register it with IoT Hub by following [Provision an IoT Edge for Linux on Windows device using X.509 certificates](how-to-provision-single-device-linux-on-windows-x509.md#generate-device-identity-certificates).
+- To provision the IoT Edge device using DPS, see [Provision IoT Edge for Linux on Windows devices at scale using X.509 certificates](how-to-provision-devices-at-scale-linux-on-windows-x509.md#generate-device-identity-certificates).
-For individual enrollment of the IoT Edge device in the DPS, use `iot-edge-device-identity-<name>.cert.pem`. To register the IoT Edge device to IoT Hub, use the `iot-edge-device-identity-<name>-full-chain.cert.pem` and `iot-edge-device-identity-<name>.key.pem` certificates. For more information, see [Create and provision an IoT Edge device using X.509 certificates](how-to-provision-devices-at-scale-windows-x509.md).
+# [Linux](#tab/linux)
-### Linux
+1. Navigate to the working directory `wrkdir` that has the certificate generation scripts and root CA certificate.
-Create the IoT Edge device identity certificate and private key with the following command:
+1. Create the IoT Edge device identity certificate and private key with the following command:
-```bash
-./certGen.sh create_edge_device_identity_certificate "<name>"
-```
+ ```bash
+ ./certGen.sh create_edge_device_identity_certificate "<device-id>"
+ ```
+
+ The name that you pass in to this command is the device ID for the IoT Edge device in IoT Hub.
-The name that you pass in to this command will be the device ID for the IoT Edge device in IoT Hub.
+1. The script creates several certificate and key files, including three that you use when creating an individual enrollment in DPS and installing the IoT Edge runtime:
-The script creates several certificate and key files, including three that you'll use when creating an individual enrollment in DPS and installing the IoT Edge runtime:
+ * `certs/iot-edge-device-identity-<device-id>-full-chain.cert.pem`
+ * `certs/iot-edge-device-identity-<device-id>.cert.pem`
+ * `private/iot-edge-device-identity-<device-id>.key.pem`
-* `<WRKDIR>\certs\iot-edge-device-identity-<name>-full-chain.cert.pem`
-* `<WRKDIR>/certs/iot-edge-device-identity-<name>.cert.pem`
-* `<WRKDIR>/private/iot-edge-device-identity-<name>.key.pem`
+Then, follow these instructions depending on your method for provisioning:
+
+- To provision the IoT Edge device to IoT Hub, take the thumbprint of `iot-edge-device-identity-<device-id>.cert.pem` (see the openssl sketch after this list) and upload it to IoT Hub by following [Provision an IoT Edge for Linux on Windows device using X.509 certificates](how-to-provision-single-device-linux-on-windows-x509.md#generate-device-identity-certificates).
+- To provision the IoT Edge device using DPS, see [Provision IoT Edge for Linux on Windows devices at scale using X.509 certificates](how-to-provision-devices-at-scale-linux-on-windows-x509.md#generate-device-identity-certificates).
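As a reference for the first option above, one way to read that thumbprint with openssl; a minimal sketch, assuming an illustrative device ID of `my-edge-device`:

```bash
# A sketch only: print the SHA-1 fingerprint (IoT Hub's "thumbprint") of the
# device identity certificate. The device ID is an illustrative assumption.
openssl x509 -noout -fingerprint \
  -in certs/iot-edge-device-identity-my-edge-device.cert.pem | sed 's/://g'
```

The `sed` step strips the colon separators so the value matches the thumbprint format that IoT Hub expects.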
++

## Create IoT Edge CA certificates

<!--1.1-->
:::moniker range="iotedge-2018-06"
-Every IoT Edge device going to production needs a CA signing certificate that's referenced from the config file. This certificate is known as the **device CA certificate**. The device CA certificate is responsible for creating certificates for modules running on the device. It's also necessary for gateway scenarios, because the device CA certificate is how the IoT Edge device verifies its identity to downstream devices.
+Every IoT Edge device going to production needs a CA signing certificate that's referenced from the config file. This certificate is known as the **device CA certificate**. The device CA certificate is responsible for creating certificates for modules running on the device. It's also necessary for gateway scenarios, because the device CA certificate is how the IoT Edge device verifies its identity to downstream devices. To learn more, see [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
Device CA certificates go in the **Certificate** section of the config.yaml file on the IoT Edge device.
Device CA certificates go in the **Certificate** section of the config.yaml file
<!--1.2-->
:::moniker range=">=iotedge-2020-11"
-Every IoT Edge device going to production needs a CA signing certificate that's referenced from the config file. This certificate is known as the **edge CA certificate**. The edge CA certificate is responsible for creating certificates for modules running on the device. It's also necessary for gateway scenarios, because the edge CA certificate is how the IoT Edge device verifies its identity to downstream devices.
+Every IoT Edge device going to production needs a CA signing certificate that's referenced from the config file. This certificate is known as the **edge CA certificate**. The edge CA certificate is responsible for creating certificates for modules running on the device. It's also necessary for gateway scenarios, because the edge CA certificate is how the IoT Edge device verifies its identity to downstream devices. To learn more, see [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
Edge CA certificates go in the **Edge CA** section of the config.toml file on the IoT Edge device.
:::moniker-end
-Before proceeding with the steps in this section, follow the steps in the [Set up scripts](#set-up-scripts) and [Create root CA certificate](#create-root-ca-certificate) sections.
+# [Windows](#tab/windows)
-### Windows
+1. Navigate to the working directory `wrkdir` that has the certificate generation scripts and root CA certificate.
-1. Navigate to the working directory that has the certificate generation scripts and root CA certificate.
-
-2. Create the IoT Edge CA certificate and private key with the following command. Provide a name for the CA certificate.
+2. Create the IoT Edge CA certificate and private key with the following command. Provide a name for the CA certificate. The name passed to the **New-CACertsEdgeDevice** command should *not* be the same as the hostname parameter in the config file or the device's ID in IoT Hub.
   ```powershell
   New-CACertsEdgeDevice "<CA cert name>"
   ```
- This command creates several certificate and key files. The following certificate and key pair needs to be copied over to an IoT Edge device and referenced in the config file:
+3. This command creates several certificate and key files. The following certificate and key pair need to be copied over to an IoT Edge device and referenced in the config file:
- * `<WRKDIR>\certs\iot-edge-device-ca-<CA cert name>-full-chain.cert.pem`
- * `<WRKDIR>\private\iot-edge-device-ca-<CA cert name>.key.pem`
+ * `certs\iot-edge-device-ca-<CA cert name>-full-chain.cert.pem`
+ * `private\iot-edge-device-ca-<CA cert name>.key.pem`
-The name passed to the **New-CACertsEdgeDevice** command should not be the same as the hostname parameter in the config file, or the device's ID in IoT Hub.
-### Linux
+# [Linux](#tab/linux)
1. Navigate to the working directory that has the certificate generation scripts and root CA certificate.
-2. Create the IoT Edge CA certificate and private key with the following command. Provide a name for the CA certificate.
+2. Create the IoT Edge CA certificate and private key with the following command. Provide a name for the CA certificate. The name passed to the **create_edge_device_ca_certificate** command should *not* be the same as the hostname parameter in the config file or the device's ID in IoT Hub.
   ```bash
   ./certGen.sh create_edge_device_ca_certificate "<CA cert name>"
   ```
- This script command creates several certificate and key files. The following certificate and key pair needs to be copied over to an IoT Edge device and referenced in the config file:
+3. This script command creates several certificate and key files. The following certificate and key pair need to be copied over to an IoT Edge device and referenced in the config file:
- * `<WRKDIR>/certs/iot-edge-device-ca-<CA cert name>-full-chain.cert.pem`
- * `<WRKDIR>/private/iot-edge-device-ca-<CA cert name>.key.pem`
+ * `certs/iot-edge-device-ca-<CA cert name>-full-chain.cert.pem`
+ * `private/iot-edge-device-ca-<CA cert name>.key.pem`
-The name passed to the **create_edge_device_ca_certificate** command should not be the same as the hostname parameter in the config file, or the device's ID in IoT Hub.
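To connect this back to the config file, here's a minimal sketch of the **Edge CA** section in a version 1.2+ `config.toml`; the CA name `mycacert` and the `/var/aziot` paths are illustrative assumptions:

```toml
# A sketch only: Edge CA section of /etc/aziot/config.toml (IoT Edge 1.2+).
# The CA name "mycacert" and the /var/aziot paths are illustrative assumptions.
[edge_ca]
cert = "file:///var/aziot/certs/iot-edge-device-ca-mycacert-full-chain.cert.pem"
pk = "file:///var/aziot/private/iot-edge-device-ca-mycacert.key.pem"
```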
+
## Create downstream device certificates

If you're setting up a downstream IoT device for a gateway scenario and want to use X.509 authentication, you can generate demo certificates for the downstream device. If you want to use symmetric key authentication, you don't need to create additional certificates for the downstream device.
+
There are two ways to authenticate an IoT device using X.509 certificates: using self-signed certs or using certificate authority (CA) signed certs.
-For X.509 self-signed authentication, sometimes referred to as thumbprint authentication, you need to create new certificates to place on your IoT device.
-These certificates have a thumbprint in them that you share with IoT Hub for authentication.
-For X.509 certificate authority (CA) signed authentication, you need a root CA certificate registered in IoT Hub that you use to sign certificates for your IoT device.
-Any device using a certificate that was issued by the root CA certificate or any of its intermediate certificates will be permitted to authenticate.
+- For X.509 self-signed authentication, sometimes referred to as *thumbprint* authentication, you need to create new certificates to place on your IoT device. These certificates have a thumbprint in them that you share with IoT Hub for authentication.
+- For X.509 certificate authority (CA) signed authentication, you need a root CA certificate registered in IoT Hub that you use to sign certificates for your IoT device. Any device using a certificate that was issued by the root CA certificate or any of its intermediate certificates can authenticate.
The certificate generation scripts can help you make demo certificates to test out either of these authentication scenarios.
-Before proceeding with the steps in this section, follow the steps in the [Set up scripts](#set-up-scripts) and [Create root CA certificate](#create-root-ca-certificate) sections.
-
### Self-signed certificates

When you authenticate an IoT device with self-signed certificates, you need to create device certificates based on the root CA certificate for your solution. Then, you retrieve a hexadecimal "fingerprint" from the certificates to provide to IoT Hub. Your IoT device also needs a copy of its device certificates so that it can authenticate with IoT Hub.
-#### Windows
+# [Windows](#tab/windows)
-1. Navigate to the working directory that has the certificate generation scripts and root CA certificate.
+1. Navigate to the working directory `wrkdir` that has the certificate generation scripts and root CA certificate.
2. Create two certificates (primary and secondary) for the downstream device. An easy naming convention is to create the certificates with the name of the IoT device plus a primary or secondary label. For example:

   ```PowerShell
- New-CACertsDevice "<device name>-primary"
- New-CACertsDevice "<device name>-secondary"
+ New-CACertsDevice "<device ID>-primary"
+ New-CACertsDevice "<device ID>-secondary"
   ```

   This script command creates several certificate and key files. The following certificate and key pairs need to be copied over to the downstream IoT device and referenced in the applications that connect to IoT Hub:
- * `<WRKDIR>\certs\iot-device-<device name>-primary-full-chain.cert.pem`
- * `<WRKDIR>\certs\iot-device-<device name>-secondary-full-chain.cert.pem`
- * `<WRKDIR>\certs\iot-device-<device name>-primary.cert.pem`
- * `<WRKDIR>\certs\iot-device-<device name>-secondary.cert.pem`
- * `<WRKDIR>\certs\iot-device-<device name>-primary.cert.pfx`
- * `<WRKDIR>\certs\iot-device-<device name>-secondary.cert.pfx`
- * `<WRKDIR>\private\iot-device-<device name>-primary.key.pem`
- * `<WRKDIR>\private\iot-device-<device name>-secondary.key.pem`
+ * `certs\iot-device-<device ID>-primary-full-chain.cert.pem`
+ * `certs\iot-device-<device ID>-secondary-full-chain.cert.pem`
+ * `certs\iot-device-<device ID>-primary.cert.pem`
+ * `certs\iot-device-<device ID>-secondary.cert.pem`
+ * `certs\iot-device-<device ID>-primary.cert.pfx`
+ * `certs\iot-device-<device ID>-secondary.cert.pfx`
+ * `private\iot-device-<device ID>-primary.key.pem`
+ * `private\iot-device-<device ID>-secondary.key.pem`
3. Retrieve the SHA1 fingerprint (called a thumbprint in IoT Hub contexts) from each certificate. The fingerprint is a 40-character hexadecimal string. Use the following openssl command to view the certificate and find the fingerprint:

   ```PowerShell
- openssl x509 -in <WRKDIR>\certs\iot-device-<device name>-primary.cert.pem -text -fingerprint
+    openssl x509 -in certs\iot-device-<device ID>-primary.cert.pem -text -fingerprint
   ```

   Run this command twice, once for the primary certificate and once for the secondary certificate. You provide fingerprints for both certificates when you register a new IoT device using self-signed X.509 certificates.
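Registration can also be scripted; the following is a sketch using the Azure CLI (assuming the `azure-iot` extension is installed), where the hub name, device ID, and thumbprint placeholders are illustrative:

```azurecli
# A sketch only: register a device that authenticates with self-signed X.509
# thumbprints. The hub name, device ID, and thumbprints are placeholders.
az iot hub device-identity create \
  --hub-name MyIoTHub \
  --device-id my-downstream-device \
  --auth-method x509_thumbprint \
  --primary-thumbprint "<primary-fingerprint>" \
  --secondary-thumbprint "<secondary-fingerprint>"
```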
-#### Linux
+# [Linux](#tab/linux)
1. Navigate to the working directory that has the certificate generation scripts and root CA certificate.
Your IoT device also needs a copy of its device certificates so that it can auth
This script command creates several certificate and key files. The following certificate and key pairs need to be copied over to the downstream IoT device and referenced in the applications that connect to IoT Hub:
- * `<WRKDIR>/certs/iot-device-<device name>-primary-full-chain.cert.pem`
- * `<WRKDIR>/certs/iot-device-<device name>-secondary-full-chain.cert.pem`
- * `<WRKDIR>/certs/iot-device-<device name>-primary.cert.pem`
- * `<WRKDIR>/certs/iot-device-<device name>-secondary.cert.pem`
- * `<WRKDIR>/certs/iot-device-<device name>-primary.cert.pfx`
- * `<WRKDIR>/certs/iot-device-<device name>-secondary.cert.pfx`
- * `<WRKDIR>/private/iot-device-<device name>-primary.key.pem`
- * `<WRKDIR>/private/iot-device-<device name>-secondary.key.pem`
+ * `certs/iot-device-<device name>-primary-full-chain.cert.pem`
+ * `certs/iot-device-<device name>-secondary-full-chain.cert.pem`
+ * `certs/iot-device-<device name>-primary.cert.pem`
+ * `certs/iot-device-<device name>-secondary.cert.pem`
+ * `certs/iot-device-<device name>-primary.cert.pfx`
+ * `certs/iot-device-<device name>-secondary.cert.pfx`
+ * `private/iot-device-<device name>-primary.key.pem`
+ * `private/iot-device-<device name>-secondary.key.pem`
3. Retrieve the SHA1 fingerprint (called a thumbprint in IoT Hub contexts) from each certificate. The fingerprint is a 40-character hexadecimal string. Use the following openssl command to view the certificate and find the fingerprint:

   ```bash
- openssl x509 -in <WRKDIR>/certs/iot-device-<device name>-primary.cert.pem -text -fingerprint | sed 's/[:]//g'
+ openssl x509 -in certs/iot-device-<device name>-primary.cert.pem -text -fingerprint | sed 's/[:]//g'
   ```

   You provide both the primary and secondary fingerprints when you register a new IoT device using self-signed X.509 certificates.

++

### CA-signed certificates
-When you authenticate an IoT device with CA-signed certificates, you need to upload the root CA certificate for your solution to IoT Hub.
-Then, you perform a verification to prove to IoT Hub that you own the root CA certificate.
-Finally, you use the same root CA certificate to create device certificates to put on your IoT device so that it can authenticate with IoT Hub.
+When you authenticate an IoT device with CA-signed certificates, you need to upload the root CA certificate for your solution to IoT Hub. Use the same root CA certificate to create device certificates to put on your IoT device so that it can authenticate with IoT Hub.
The certificates in this section are for the steps in the IoT Hub X.509 certificate tutorial series. See [Understanding Public Key Cryptography and X.509 Public Key Infrastructure](../iot-hub/tutorial-x509-introduction.md) for the introduction of this series.
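The upload step in the tabs below can also be scripted; a sketch with the Azure CLI, where the hub name and certificate resource name are illustrative assumptions:

```azurecli
# A sketch only: upload the demo root CA certificate to an IoT hub.
# The hub name and certificate resource name are illustrative assumptions.
az iot hub certificate create \
  --hub-name MyIoTHub \
  --name azure-iot-test-only-root \
  --path certs/azure-iot-test-only.root.ca.cert.pem
```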
-#### Windows
+# [Windows](#tab/windows)
-1. Upload the root CA certificate file from your working directory, `<WRKDIR>\certs\azure-iot-test-only.root.ca.cert.pem`, to your IoT hub.
+1. Upload the root CA certificate file from your working directory, `certs\azure-iot-test-only.root.ca.cert.pem`, to your IoT hub.
-2. Use the code provided in the Azure portal to verify that you own that root CA certificate.
+2. If automatic verification isn't selected, use the code provided in the Azure portal to verify that you own that root CA certificate.
   ```PowerShell
   New-CACertsVerificationCert "<verification code>"
   ```
The certificates in this section are for the steps in the IoT Hub X.509 certific
This script command creates several certificate and key files. The following certificate and key pairs need to be copied over to the downstream IoT device and referenced in the applications that connect to IoT Hub:
- * `<WRKDIR>\certs\iot-device-<device id>.cert.pem`
- * `<WRKDIR>\certs\iot-device-<device id>.cert.pfx`
- * `<WRKDIR>\certs\iot-device-<device id>-full-chain.cert.pem`
- * `<WRKDIR>\private\iot-device-<device id>.key.pem`
+ * `certs\iot-device-<device id>.cert.pem`
+ * `certs\iot-device-<device id>.cert.pfx`
+ * `certs\iot-device-<device id>-full-chain.cert.pem`
+ * `private\iot-device-<device id>.key.pem`
-#### Linux
+# [Linux](#tab/linux)
-1. Upload the root CA certificate file from your working directory, `<WRKDIR>\certs\azure-iot-test-only.root.ca.cert.pem`, to your IoT hub.
+1. Upload the root CA certificate file from your working directory, `certs\azure-iot-test-only.root.ca.cert.pem`, to your IoT hub.
-2. Use the code provided in the Azure portal to verify that you own that root CA certificate.
+2. If automatic verification isn't selected, use the code provided in the Azure portal to verify that you own that root CA certificate.
   ```bash
   ./certGen.sh create_verification_certificate "<verification code>"
   ```
The certificates in this section are for the steps in the IoT Hub X.509 certific
This script command creates several certificate and key files. The following certificate and key pairs need to be copied over to the downstream IoT device and referenced in the applications that connect to IoT Hub:
- * `<WRKDIR>/certs/iot-device-<device id>.cert.pem`
- * `<WRKDIR>/certs/iot-device-<device id>.cert.pfx`
- * `<WRKDIR>/certs/iot-device-<device id>-full-chain.cert.pem`
- * `<WRKDIR>/private/iot-device-<device id>.key.pem`
+ * `certs/iot-device-<device id>.cert.pem`
+ * `certs/iot-device-<device id>.cert.pfx`
+ * `certs/iot-device-<device id>-full-chain.cert.pem`
+ * `private/iot-device-<device id>.key.pem`
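Once the root CA is uploaded and verified, registering the device itself needs no per-device thumbprints; a sketch with the Azure CLI, where the hub name and device ID are illustrative assumptions:

```azurecli
# A sketch only: register a device that authenticates with a CA-signed
# certificate chain. The hub name and device ID are placeholders.
az iot hub device-identity create \
  --hub-name MyIoTHub \
  --device-id my-device \
  --auth-method x509_ca
```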
++
iot-edge Nested Virtualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/nested-virtualization.md
This is the baseline approach for any Windows VM that hosts Azure IoT Edge for L
If you're using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).

## Deployment on Windows VM on VMware ESXi
-Both VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions support nested virtualization needed for hosting Azure IoT Edge for Linux on Windows on top of a Windows virtual machine.
+Both Intel-based VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions support nested virtualization needed for hosting Azure IoT Edge for Linux on Windows on top of a Windows virtual machine.
To set up Azure IoT Edge for Linux on Windows on a VMware ESXi Windows virtual machine, use the following steps:

1. Create a Windows virtual machine on the VMware ESXi host. For more information about VMware VM deployment, see [VMware - Deploying Virtual Machines](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-39D19B2B-A11C-42AE-AC80-DDA8682AB42C.html).
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
This article provides details about which systems and components are supported b
If you experience problems while using the Azure IoT Edge service, there are several ways to seek support. Try one of the following channels:
-**Reporting bugs** – The majority of development that goes into the Azure IoT Edge product happens in the IoT Edge open-source project. Bugs can be reported on the [issues page](https://github.com/azure/iotedge/issues) of the project. Bugs related to Azure IoT Edge for Linux on Windows can be reported on the [iotedge-eflow issues page](https://github.com/azure/iotedge-eflow/issues). Fixes rapidly make their way from the projects in to product updates.
+**Reporting bugs** – Most development that goes into the Azure IoT Edge product happens in the IoT Edge open-source project. Bugs can be reported on the [issues page](https://github.com/azure/iotedge/issues) of the project. Bugs related to Azure IoT Edge for Linux on Windows can be reported on the [iotedge-eflow issues page](https://github.com/azure/iotedge-eflow/issues). Fixes rapidly make their way from the projects into product updates.
**Microsoft Customer Support team** – Users who have a [support plan](https://azure.microsoft.com/support/plans/) can engage the Microsoft Customer Support team by creating a support ticket directly from the [Azure portal](https://portal.azure.com/signin/index/?feature.settingsportalinstance=mpac).
Modules built as Linux containers can be deployed to either Linux or Windows dev
:::moniker range="iotedge-2018-06" | Operating System | AMD64 | ARM32v7 | ARM64 | | - | -- | - | -- |
+| Debian 11 (Bullseye) | | ![Debian + ARM32v7](./media/support/green-check.png) | |
| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | |
| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) |
| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
Modules built as Linux containers can be deployed to either Linux or Windows dev
:::moniker range=">=iotedge-2020-11" | Operating System | AMD64 | ARM32v7 | ARM64 | | - | -- | - | -- |
+| Debian 11 (Bullseye) | | ![Debian + ARM32v7](./media/support/green-check.png) | |
| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | |
| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) |
| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
All Windows operating systems must be version 1809 (build 17763). The specific b
<!-- 1.2 --> :::moniker range=">=iotedge-2020-11"
-IoT Edge 1.1 LTS is the last release channel that supports Windows containers. Starting with version 1.2, Windows containers are not supported.
+IoT Edge 1.1 LTS is the last release channel that supports Windows containers. Starting with version 1.2, Windows containers aren't supported.
For information about supported operating systems for Windows containers, refer to the [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true) version of this article.
For information about supported operating systems for Windows containers, refer
### Tier 2
-The systems listed in the following table are considered compatible with Azure IoT Edge, but are not actively tested or maintained by Microsoft.
+The systems listed in the following table are considered compatible with Azure IoT Edge, but aren't actively tested or maintained by Microsoft.
| Operating System | AMD64 | ARM32v7 | ARM64 |
| - | -- | - | -- |
| [CentOS-7](https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7) | ![CentOS + AMD64](./media/support/green-check.png) | ![CentOS + ARM32v7](./media/support/green-check.png) | ![CentOS + ARM64](./media/support/green-check.png) |
| [Debian 9](https://www.debian.org/releases/stretch/) | ![Debian 9 + AMD64](./media/support/green-check.png) | ![Debian 9 + ARM32v7](./media/support/green-check.png) | ![Debian 9 + ARM64](./media/support/green-check.png) |
| [Debian 10](https://www.debian.org/releases/buster/) | ![Debian 10 + AMD64](./media/support/green-check.png) | ![Debian 10 + ARM32v7](./media/support/green-check.png) | ![Debian 10 + ARM64](./media/support/green-check.png) |
-| [Debian 11](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | ![Debian 11 + ARM32v7](./media/support/green-check.png) | ![Debian 11 + ARM64](./media/support/green-check.png) |
+| [Debian 11](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | | ![Debian 11 + ARM64](./media/support/green-check.png) |
| [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/support/green-check.png) |
| [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) | ![Mentor Embedded Linux Omni OS + AMD64](./media/support/green-check.png) | | ![Mentor Embedded Linux Omni OS + ARM64](./media/support/green-check.png) |
| [RHEL 7](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7) | ![RHEL 7 + AMD64](./media/support/green-check.png) | ![RHEL 7 + ARM32v7](./media/support/green-check.png) | ![RHEL 7 + ARM64](./media/support/green-check.png) |
iot-hub Tutorial Routing View Message Routing Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing-view-message-routing-results.md
#Customer intent: As a developer, I want to be able to route messages sent to my IoT hub to different destinations based on properties stored in the message.
+
# Tutorial: Part 2 - View the routed messages

[!INCLUDE [iot-hub-include-routing-intro](../../includes/iot-hub-include-routing-intro.md)]
The following are the rules for the message routing that were set up in Part 1 o
|Value |Result|
|---|---|
|level="storage" |Write to Azure Storage.|
-|level="critical" |Write to a Service Bus queue. A Logic App retrieves the message from
- the queue and uses Office 365 to e-mail the message.|
+|level="critical" |Write to a Service Bus queue. A Logic App retrieves the message from the queue and uses Office 365 to e-mail the message.|
|default |Display this data using Power BI.|

Now you create the resources to which the messages will be routed, run an app to send messages to the hub, and see the routing in action.
-## Create a Logic App
+## Create a Logic App
The Service Bus queue receives the messages designated as critical. Set up a Logic App to monitor the Service Bus queue and send an e-mail when a message is added to the queue.
The Service Bus queue is to be used for receiving messages designated as critica
**Subscription**: Select your Azure subscription.
- **Resource group**: Select **Create new** under the Resource Group field. Specify **ContosoResources** for the name of the resource group.
+ **Resource group**: Select **Create new** under the Resource Group field. Specify **ContosoResources** for the name of the resource group.
**Instance Details**
- **Type**: Select **Consumption** for the instance type.
+ **Type**: Select **Consumption** for the instance type.
- For **Logic App Name**, specify the name of the logic app. This tutorial uses **ContosoLogicApp**.
+ For **Logic App Name**, specify the name of the logic app. This tutorial uses **ContosoLogicApp**.
**Region**: Use the location of the nearest datacenter. This tutorial uses **West US**.
- **Enable Log Analytics**: Set this toggle button to not enable the log analytics.
+ **Enable Log Analytics**: Set this toggle button to not enable the log analytics.
![The Create Logic App screen](./media/tutorial-routing-view-message-routing-results/create-logic-app.png)
- Select **Review + Create**. It may take a few minutes for the app to deploy. When it's finished, it shows a screen giving the overview of the deployment.
+ Select **Review + Create**. It may take a few minutes for the app to deploy. When it's finished, it shows a screen giving the overview of the deployment.
-2. Go to the Logic App. If you're still on the deployment page, you can select **Go To Resource**. Another way to get to the Logic App is to select **Resource groups**, select your resource group (this tutorial uses **ContosoResources**), then select the Logic App from the list of resources.
+1. Go to the Logic App. If you're still on the deployment page, you can select **Go To Resource**. Another way to get to the Logic App is to select **Resource groups**, select your resource group (this tutorial uses **ContosoResources**), then select the Logic App from the list of resources.
- Scroll down until you see the almost-empty tile that says **Blank Logic App +** and select it. The default tab on the screen is "For You". If this pane is blank, select **All** to see the connectors and triggers available.
+ Scroll down until you see the almost-empty tile that says **Blank Logic App +** and select it. The default tab on the screen is "For You". If this pane is blank, select **All** to see the connectors and triggers available.
-3. Select **Service Bus** from the list of connectors.
+1. Select **Service Bus** from the list of connectors.
![The list of connectors](./media/tutorial-routing-view-message-routing-results/logic-app-connectors.png)
-4. This screenshot shows a list of triggers. Select the one that says **When a message is received in a queue (auto-complete)**.
+1. This screenshot shows a list of triggers. Select the one that says **When a message is received in a queue (auto-complete)**.
![The list of triggers](./media/tutorial-routing-view-message-routing-results/logic-app-triggers.png)
-5. Fill in the fields on the next screen with the connection information.
+1. Fill in the fields on the next screen with the connection information.
**Connection Name**: ContosoConnection
-
- Select the Service Bus Namespace. This tutorial uses **ContosoSBNamespace**. The name of the key (RootManageSharedAccessKey) and the rights (Listen, Manage, Send) are retrieved and loaded. Select **RootManageSharedAccessKey**. The **Create** button changes to blue (active). Select it; it shows the queue selection screen.
-6. Next, you are asked for information about the queue.
+ Select the Service Bus Namespace. This tutorial uses **ContosoSBNamespace**. The name of the key (RootManageSharedAccessKey) and the rights (Listen, Manage, Send) are retrieved and loaded. Select **RootManageSharedAccessKey**. The **Create** button changes to blue (active). Select it; it shows the queue selection screen.
+
+1. Next, provide information about the queue.
![Selecting a queue](./media/tutorial-routing-view-message-routing-results/logic-app-queue-options.png)
- **Queue Name:** This field is the name of the queue from which the message is sent. Click this dropdown list and select the queue name that was set in the setup steps. This tutorial uses **contososbqueue**.
+   **Queue Name:** This field is the name of the queue from which the message is sent. From this dropdown list, select the queue name that was set in the setup steps. This tutorial uses **contososbqueue**.
**Queue Type:** The type of queue. Select **Main** from the dropdown list. Take the defaults for the other fields. Select **Save** to save the logic apps designer configuration.
-7. Select **+New Step**. The **Choose an operation** pane is displayed. Select **Office 365 Outlook**. In the list, find and select **Send an Email (V2)**. Sign in to your Office 365 account.
+1. Select **+New Step**. The **Choose an operation** pane is displayed. Select **Office 365 Outlook**. In the list, find and select **Send an Email (V2)**. Sign in to your Office 365 account.
-8. Fill in the fields to be used when sending an e-mail about the message in the queue.
+1. Fill in the fields to be used when sending an e-mail about the message in the queue.
- ![Select to send-an-email from one of the Outlook connectors](./media/tutorial-routing-view-message-routing-results/logic-app-send-email.png)
+ ![Select to send-an-email from one of the Outlook connectors](./media/tutorial-routing-view-message-routing-results/logic-app-send-email.png)
**To:** Put in the e-mail address where the warning is to be sent. **Subject:** Fill in the subject for the e-mail.
- **Body**: Fill in some text for the body. Click **Add dynamic content**, it will show fields you can pick from the e-mail to include. If you don't see any, select **See More** to see more options. Select **Content** to have the body from the e-mail displayed in the error message.
+   **Body**: Fill in some text for the body. Select **Add dynamic content** to show fields that you can include in the e-mail. If you don't see any, select **See More** to see more options. Select **Content** to have the message body displayed in the e-mail.
-9. Click **Save** to save your changes. Close the Logic app Designer.
+1. Select **Save** to save your changes. Close the Logic App Designer.
## Set up Azure Stream Analytics
To see the data in a Power BI visualization, first set up a Stream Analytics job
### Create the Stream Analytics job
-1. Put **stream** **analytics** **job** in the [Azure portal](https://portal.azure.com) search box and select **Enter**. Select **Create** to get to the Stream Analytics job screen, and then **create** again to get to the create screen.
+1. Enter **stream analytics job** in the [Azure portal](https://portal.azure.com) search box and press **Enter**. Select **Create** to open the Stream Analytics job screen, and then select **Create** again to open the creation screen.
-2. Enter the following information for the job.
+1. Enter the following information for the job.
**Job name**: The name of the job. The name must be globally unique. This tutorial uses **contosoJob**.
To see the data in a Power BI visualization, first set up a Stream Analytics job
![Create the stream analytics job](./media/tutorial-routing-view-message-routing-results/stream-analytics-create-job.png)
-3. Select **Create** to create the job. It may take a few minutes to deploy.
+1. Select **Create** to create the job. It may take a few minutes to deploy.
- To return to the job, select **Go to resource**. You can also select **Resource groups**. This tutorial uses **ContosoResources**. Then select the resource group, then select the Stream Analytics job in the list of resources.
+ To return to the job, select **Go to resource**. You can also select **Resource groups**. This tutorial uses **ContosoResources**. Then select the resource group, then select the Stream Analytics job in the list of resources.
### Add an input to the Stream Analytics job

1. Under **Job Topology**, select **Inputs**.
-2. In the **Inputs** pane, select **Add stream input** and select IoT Hub. On the screen that comes up, fill in the following fields:
+1. In the **Inputs** pane, select **Add stream input** and select IoT Hub. On the screen that comes up, fill in the following fields:
**Input alias**: This tutorial uses **contosoinputs**. Select **Select IoT Hub from your subscriptions**, then select your subscription from the dropdown list.
-
+   **IoT Hub**: Select the IoT hub. This tutorial uses **ContosoTestHub**.

   **Consumer group**: Select the consumer group set up in Part 1 of this tutorial. This tutorial uses **contosoconsumers**.

   **Shared access policy name**: Select **service**. The portal fills in the Shared Access Policy Key for you.
- **Endpoint**: Select **Messaging**. (If you select Operations Monitoring, you get the telemetry data about the IoT hub rather than the data you're sending through.)
+ **Endpoint**: Select **Messaging**. (If you select Operations Monitoring, you get the telemetry data about the IoT hub rather than the data you're sending through.)
- For the rest of the fields, accept the defaults.
+ For the rest of the fields, accept the defaults.
![Set up the inputs for the stream analytics job](./media/tutorial-routing-view-message-routing-results/stream-analytics-job-inputs.png)
-3. Select **Save**.
+1. Select **Save**.
### Add an output to the Stream Analytics job

1. Under **Job Topology**, select **Outputs**.
-2. In the **Outputs** pane, select **Add**, and then select **Power BI**. On the screen that comes up, fill in the following fields:
+1. In the **Outputs** pane, select **Add**, and then select **Power BI**. On the screen that comes up, fill in the following fields:
- **Output alias**: The unique alias for the output. This tutorial uses **contosooutputs**.
+ **Output alias**: The unique alias for the output. This tutorial uses **contosooutputs**.
Select **Select Group workspace from your subscriptions**. In **Group workspace**, specify **My workspace**.
- **Authentication mode**: Select **User token**.
+ **Authentication mode**: Select **User token**.
- **Dataset name**: Name of the dataset to be used in Power BI. This tutorial uses **contosodataset**.
+ **Dataset name**: Name of the dataset to be used in Power BI. This tutorial uses **contosodataset**.
**Table name**: Name of the table to be used in Power BI. This tutorial uses **contosotable**.
-3. Select **Authorize**, and sign in to your Power BI account. (Signing in may take more than one try).
+1. Select **Authorize**, and sign in to your Power BI account. (Signing in may take more than one try).
![Set up the outputs for the stream analytics job](./media/tutorial-routing-view-message-routing-results/stream-analytics-job-outputs.png)
-4. Select **Save**.
+1. Select **Save**.
### Configure the query of the Stream Analytics job

1. Under **Job Topology**, select **Query**.
-2. Replace `[YourInputAlias]` with the input alias of the job. This tutorial uses **contosoinputs**.
+1. Replace `[YourInputAlias]` with the input alias of the job. This tutorial uses **contosoinputs**.
-3. Replace `[YourOutputAlias]` with the output alias of the job. This tutorial uses **contosooutputs**.
+1. Replace `[YourOutputAlias]` with the output alias of the job. This tutorial uses **contosooutputs**.
![Set up the query for the stream analytics job](./media/tutorial-routing-view-message-routing-results/stream-analytics-job-query.png)
-4. Select **Save**.
+1. Select **Save**.
-5. Close the Query pane. You return to the view of the resources in the Resource Group. Select the Stream Analytics job. This tutorial calls it **contosoJob**.
+1. Close the Query pane. You return to the view of the resources in the Resource Group. Select the Stream Analytics job. This tutorial calls it **contosoJob**.
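With both aliases substituted, the default pass-through query should look something like this sketch (using this tutorial's alias names):

```sql
SELECT
    *
INTO
    contosooutputs
FROM
    contosoinputs
```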
### Run the Stream Analytics job
In Part 1 of this tutorial, you set up a device to simulate using an IoT device.
This application sends messages for each of the different message routing methods. There is also a folder in the download that contains the complete Azure Resource Manager template and parameters file, as well as the Azure CLI and PowerShell scripts.
-If you didn't download the files from the repository in Part 1 of this tutorial, go ahead and download them now from [IoT Device Simulation](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip). Selecting this link downloads a repository with several applications in it; the solution you are looking for is iot-hub/Tutorials/Routing/IoT_SimulatedDevice.sln.
+If you didn't download the files from the repository in Part 1 of this tutorial, go ahead and download them now from [IoT Device Simulation](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip). Selecting this link downloads a repository with several applications in it; the solution for this tutorial is iot-hub/Tutorials/Routing/IoT_SimulatedDevice.sln.
-Double-click on the solution file (IoT_SimulatedDevice.sln) to open the code in Visual Studio, then open Program.cs. Substitute `{your hub name}` with the IoT hub host name. The format of the IoT hub host name is **{iot-hub-name}.azure-devices.net**. For this tutorial, the hub host name is **ContosoTestHub.azure-devices.net**. Next, substitute `{your device key}` with the device key you saved earlier when setting up the simulated device.
+Double-click on the solution file (IoT_SimulatedDevice.sln) to open the code in Visual Studio, then open Program.cs. Substitute `{your hub name}` with the IoT hub host name. The format of the IoT hub host name is **{iot-hub-name}.azure-devices.net**. For this tutorial, the hub host name is **ContosoTestHub.azure-devices.net**. Next, substitute `{your device key}` with the device key you saved earlier when setting up the simulated device.
- ```csharp
- static string s_myDeviceId = "Contoso-Test-Device";
- static string s_iotHubUri = "ContosoTestHub.azure-devices.net";
- // This is the primary key for the device. This is in the portal.
- // Find your IoT hub in the portal > IoT devices > select your device > copy the key.
- static string s_deviceKey = "{your device key}";
- ```
+```csharp
+ static string s_myDeviceId = "Contoso-Test-Device";
+ static string s_iotHubUri = "ContosoTestHub.azure-devices.net";
+ // This is the primary key for the device. This is in the portal.
+ // Find your IoT hub in the portal > IoT devices > select your device > copy the key.
+ static string s_deviceKey = "{your device key}";
+```
## Run and test

Run the console application. Wait a few minutes. You can see the messages being sent on the console screen of the application.
-The app sends a new device-to-cloud message to the IoT hub every second. The message contains a JSON-serialized object with the device ID, temperature, humidity, and message level, which defaults to `normal`. It randomly assigns a level of `critical` or `storage`, causing the message to be routed to the storage account or to the Service Bus queue (which triggers your Logic App to send an e-mail). The default (`normal`) readings can be displayed in a BI report.
+The app sends a new device-to-cloud message to the IoT hub every second. The message contains a JSON-serialized object with the device ID, temperature, humidity, and message level, which defaults to `normal`. It randomly assigns a level of `critical` or `storage`, causing the message to be routed to the storage account or to the Service Bus queue (which triggers your Logic App to send an e-mail). The default (`normal`) readings can be displayed in a BI report.
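For reference, a representative message payload might look like the following sketch; the exact field names come from Program.cs in the sample, so treat these as illustrative:

```json
{
  "deviceId": "Contoso-Test-Device",
  "temperature": 27.5,
  "humidity": 68.3,
  "level": "normal"
}
```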
If everything is set up correctly, at this point you should see the following results:
If everything is set up correctly, at this point you should see the following re
![The resulting emails](./media/tutorial-routing-view-message-routing-results/results-in-email.png)
- This result means the following statements are true.
+ This result means the following statements are true.
* The routing to the Service Bus queue is working correctly. * The Logic App retrieving the message from the Service Bus queue is working correctly.
- * The Logic App connector to Outlook is working correctly.
+ * The Logic App connector to Outlook is working correctly.
-2. In the [Azure portal](https://portal.azure.com), select **Resource groups** and select your Resource Group. This tutorial uses **ContosoResources**.
+1. In the [Azure portal](https://portal.azure.com), select **Resource groups** and select your Resource Group. This tutorial uses **ContosoResources**.
- Select the storage account, select **Containers**, then select the container that stores your results. This tutorial uses **contosoresults**. You should see a folder, and you can drill down through the directories until you see one or more files. Open one of those files; they contain the entries routed to the storage account.
+ Select the storage account, select **Containers**, then select the container that stores your results. This tutorial uses **contosoresults**. You should see a folder, and you can drill down through the directories until you see one or more files. Open one of those files; they contain the entries routed to the storage account.
   ![The result files in storage](./media/tutorial-routing-view-message-routing-results/results-in-storage.png)

   This result means the following statement is true.
- * The routing to the storage account is working correctly.
+* The routing to the storage account is working correctly.
With the application still running, set up the Power BI visualization to see the messages coming through the default endpoint.
With the application still running, set up the Power BI visualization to see the
1. Sign in to your [Power BI](https://powerbi.microsoft.com/) account.
-2. Select **My Workspace**. It shows at least one dataset that was created. If there's nothing there, run the **Simulated Device** application for another 5-10 minutes to stream more data. After the workspace appears, it will have a dataset called ContosoDataset. Right-click on the three vertical dots to the right of the dataset name. In the dropdown list, select **Create report**.
+1. Select **My Workspace**. It shows at least one dataset that was created. If there's nothing there, run the **Simulated Device** application for another 5-10 minutes to stream more data. After the workspace appears, it will have a dataset called ContosoDataset. Right-click on the three vertical dots to the right of the dataset name. In the dropdown list, select **Create report**.
- ![Power BI creating report](./media/tutorial-routing-view-message-routing-results/bi-personal-workspace.png)
+ ![Power BI creating report](./media/tutorial-routing-view-message-routing-results/bi-personal-workspace.png)
-3. Look in the **Visualizations** section on the right-hand side and select **Line chart** to select a line chart in the BI report page. Drag the graphic so it fills the space horizontally. Now in the **Fields** section on the right, open ContosoTable. Select **EventEnqueuedUtcTime**. It should put it across the X-Axis. Select **temperature** and drag it into the **Values** field for temperature. This adds temperature to the chart. You should have something that looks like the following graphic:
+1. Look in the **Visualizations** section on the right-hand side and select **Line chart** to select a line chart in the BI report page. Drag the graphic so it fills the space horizontally. Now in the **Fields** section on the right, open ContosoTable. Select **EventEnqueuedUtcTime**. It should put it across the X-Axis. Select **temperature** and drag it into the **Values** field for temperature. This adds temperature to the chart. You should have something that looks like the following graphic:
- ![Power BI graph of temperature](./media/tutorial-routing-view-message-routing-results/bi-temperature-chart.png)
+ ![Power BI graph of temperature](./media/tutorial-routing-view-message-routing-results/bi-temperature-chart.png)
-4. Click in the bottom half of the chart area. Select **Line Chart** again. It creates a chart under the first one.
+1. Click in the bottom half of the chart area. Select **Line Chart** again. It creates a chart under the first one.
-5. In the table, select **EventQueuedTime**, it will put it in the Axis field. Drag **humidity** to the Values field. Now you see both charts.
+1. In the table, select **EventQueuedTime**; it's placed in the Axis field. Drag **humidity** to the Values field. Now you see both charts.
- ![Power BI graph of both fields](./media/tutorial-routing-view-message-routing-results/bi-chart-temp-humidity.png)
+ ![Power BI graph of both fields](./media/tutorial-routing-view-message-routing-results/bi-chart-temp-humidity.png)
- You sent messages from the default endpoint of the IoT Hub to the Azure Stream Analytics. Then you added a Power BI report to show the data, adding two charts to represent the temperature and the humidity.
+   You sent messages from the default endpoint of the IoT hub to Azure Stream Analytics. Then you added a Power BI report to show the data, adding two charts to represent the temperature and the humidity.
-7. Select **File > Save** to save the report, entering a name for the report when prompted. Save your report in your workspace.
+1. Select **File > Save** to save the report, entering a name for the report when prompted. Save your report in your workspace.
-You are able to see data on both charts. This result means the following statements are true:
+You can see data on both charts. This result means the following statements are true:
- * The routing to the default endpoint is working correctly.
- * The Azure Stream Analytics job is streaming correctly.
- * The Power BI Visualization is set up correctly.
+* The routing to the default endpoint is working correctly.
+* The Azure Stream Analytics job is streaming correctly.
+* The Power BI Visualization is set up correctly.
-You can refresh the charts to see the most recent data by selecting the Refresh button on the top of the Power BI window.
+You can refresh the charts to see the most recent data by selecting the Refresh button on the top of the Power BI window.
-## Clean up resources
+## Clean up resources
If you want to remove all of the Azure resources you've created through both parts of this tutorial, delete the resource group. This action deletes all resources contained within the group. In this case, it removes the IoT hub, the Service Bus namespace and queue, the Logic App, the storage account, and the resource group itself. You can also remove the Power BI resources and clear the emails sent during the tutorial.
You may also want to delete the quantity of emails in your inbox that were gener
## Next steps
-In this two-part tutorial, you learned how to use message routing to route IoT Hub messages to different destinations by performing the following tasks.
+In this two-part tutorial, you learned how to use message routing to route IoT Hub messages to different destinations by performing the following tasks.
**Part I: Create resources, set up message routing** > [!div class="checklist"]
In this two-part tutorial, you learned how to use message routing to route IoT H
> [!div class="checklist"] > * Create a Logic App that is triggered and sends e-mail when a message is added to the Service Bus queue. > * Download and run an app that simulates an IoT Device sending messages to the hub for the different routing options.
->
+>
> * Create a Power BI visualization for data sent to the default endpoint.
->
+>
> * View the results ... > * ...in the Service Bus queue and e-mails. > * ...in the storage account. > * ...in the Power BI visualization. -
-Advance to the next tutorial to learn how to manage the state of an IoT device.
-
+Advance to the next tutorial to learn how to manage the state of an IoT device.
> [!div class="nextstepaction"] > [Set up and use metrics and diagnostics with an IoT Hub](tutorial-use-metrics-and-diags.md)+
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
Here's a list of trusted services that are allowed to access a key vault if the
|Azure Information Protection|Allow access to tenant key for [Azure Information Protection.](/azure/information-protection/what-is-information-protection)|
|Azure App Service|App Service is trusted only for [Deploying Azure Web App Certificate through Key Vault](https://azure.github.io/AppService/2016/05/24/Deploying-Azure-Web-App-Certificate-through-Key-Vault.html); for an individual app, its outbound IPs can be added to Key Vault's IP-based rules.|
|Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](/azure/azure-sql/database/transparent-data-encryption-byok-overview).|
+| Azure Database for MySQL | [Data encryption for Azure Database for MySQL](../../mysql/howto-data-encryption-cli.md) |
+| Azure Database for PostgreSQL Single server | [Data encryption for Azure Database for PostgreSQL Single server](../../postgresql/howto-data-encryption-cli.md) |
|Azure Storage|[Storage Service Encryption using customer-managed keys in Azure Key Vault](../../storage/common/customer-managed-keys-configure-key-vault.md).|
|Azure Data Lake Store|[Encryption of data in Azure Data Lake Store](../../data-lake-store/data-lake-store-encryption.md) with a customer-managed key.|
|Azure Synapse Analytics|[Encryption of data using customer-managed keys in Azure Key Vault](../../synapse-analytics/security/workspaces-encryption.md)|
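For reference, restricting a vault to trusted services is a two-part setting: deny public traffic by default and enable the trusted-services bypass. A sketch with the Azure CLI, where the vault name is an illustrative assumption:

```azurecli
# A sketch only: deny public network access but allow trusted Azure services
# through the Key Vault firewall. The vault name is an illustrative assumption.
az keyvault update \
  --name my-key-vault \
  --default-action Deny \
  --bypass AzureServices
```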
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
netsh interface ipv4 set interface "interfacename" weakhostsend=enabled
## <a name = "limitations"></a>Limitations -- Floating IP is not currently supported on secondary IP configurations for Load Balancing scenarios
+- Floating IP is not currently supported on secondary IP configurations for Load Balancing scenarios. Note that this does not apply to Public load balancers with dual-stack configurations or to architectures that utilize a NAT Gateway for outbound connectivity.
## Next steps
machine-learning Concept Sourcing Human Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-sourcing-human-data.md
+
+ Title: Manually sourcing human data for AI development
+description: Learn best practices for mitigating potential harm to people, especially in vulnerable groups, and building balanced datasets when collecting human data manually.
++++ Last updated : 05/05/2022 ++
+# What is "human data" and why is it important to source responsibly?
+
+Human data is data collected directly from, or about, people. Human data may include personal data such as names, age, images, or voice clips and sensitive data such as genetic data, biometric data, gender identity, religious beliefs, or political affiliations.
+
+Collecting this data can be important to building AI systems that work for all users. But certain practices should be avoided, especially ones that can cause physical and psychological harm to data contributors.
+
+The best practices in this article will help you conduct manual data collection projects from volunteers where everyone involved is treated with respect, and potential harms, especially those faced by vulnerable groups, are anticipated and mitigated. This means that:
+
+- People contributing data are not coerced or exploited in any way, and they have control over what personal data is collected.
+- People collecting and labeling data have adequate training.
+
+These practices can also help ensure more-balanced and higher-quality datasets and better stewardship of human data.
+
+These are emerging practices, and we are continually learning. The best practices below are a starting point as you begin your own responsible human data collections. These best practices are provided for informational purposes only and should not be treated as legal advice. All human data collections should undergo specific privacy and legal reviews.
+
+## General best practices
+
+We suggest the following best practices for manually collecting human data directly from people.
+
+:::row:::
+   :::column span="":::
+      **Best Practice**
+   :::column-end:::
+   :::column span="":::
+      **Why?**
+   :::column-end:::
+:::row-end:::
+:::row:::
+   :::column span="":::
+      **Obtain voluntary informed consent.**
+   :::column-end:::
+   :::column span="":::
+      - Participants should understand and consent to data collection and how their data will be used.
+      - Data should only be stored, processed, and used for purposes that are part of the original documented informed consent.
+      - Consent documentation should be properly stored and associated with the collected data.
+   :::column-end:::
+:::row-end:::
+:::row:::
+   :::column span="":::
+      **Compensate data contributors appropriately.**
+   :::column-end:::
+   :::column span="":::
+      - Data contributors should not be pressured or coerced into data collections and should be fairly compensated for their time and data.
+      - Inappropriate compensation can be exploitative or coercive.
+   :::column-end:::
+:::row-end:::
+:::row:::
+   :::column span="":::
+      **Let contributors self-identify demographic information.**
+   :::column-end:::
+   :::column span="":::
+      - Demographic information that is not self-reported by data contributors but assigned by data collectors may 1) result in inaccurate metadata and 2) be disrespectful to data contributors.
+   :::column-end:::
+:::row-end:::
+:::row:::
+   :::column span="":::
+      **Anticipate harms when recruiting vulnerable groups.**
+   :::column-end:::
+   :::column span="":::
+      - Collecting data from vulnerable population groups introduces risk to data contributors and your organization.
+   :::column-end:::
+:::row-end:::
+:::row:::
+   :::column span="":::
+      **Treat data contributors with respect.**
+   :::column-end:::
+   :::column span="":::
+      - Improper interactions with data contributors at any phase of the data collection can negatively impact data quality, as well as the overall data collection experience for data contributors and data collectors.
+   :::column-end:::
+:::row-end:::
+:::row:::
+   :::column span="":::
+      **Qualify external suppliers carefully.**
+   :::column-end:::
+   :::column span="":::
+      - Data collections with unqualified suppliers may result in low-quality data, poor data management, unprofessional practices, and potentially harmful outcomes for data contributors and data collectors (including violations of human rights).
+      - Annotation or labeling work (for example, audio transcription or image tagging) with unqualified suppliers may result in low-quality or biased datasets, insecure data management, unprofessional practices, and potentially harmful outcomes for data contributors (including violations of human rights).
+   :::column-end:::
+:::row-end:::
+:::row:::
+   :::column span="":::
+      **Communicate expectations clearly in the Statement of Work (SOW) with suppliers.**
+   :::column-end:::
+   :::column span="":::
+      - An SOW that lacks requirements for responsible data collection work may result in low-quality or poorly collected data.
+   :::column-end:::
+:::row-end:::
+:::row:::
+   :::column span="":::
+      **Qualify geographies carefully.**
+   :::column-end:::
+   :::column span="":::
+      - When applicable, collecting data in areas of high geopolitical risk or unfamiliar geographies may result in unusable or low-quality data and may impact the safety of involved parties.
+   :::column-end:::
+:::row-end:::
+:::row:::
+   :::column span="":::
+      **Be a good steward of your datasets.**
+   :::column-end:::
+   :::column span="":::
+      - Improper data management and poor documentation can result in data misuse.
+   :::column-end:::
+:::row-end:::
+
+>[!NOTE]
+>This article focuses on recommendations for human data, including personal data and sensitive data such as biometric data, health data, racial or ethnic data, data collected manually from the general public or company employees, as well as metadata relating to human characteristics, such as age, ancestry, and gender identity, that may be created via annotation or labeling.
++
+## Best practices for collecting age, ancestry, and gender identity
+
+In order for AI systems to work well for everyone, the datasets used for training and evaluation should reflect the diversity of people who will use or be affected by those systems. In many cases, age, ancestry, and gender identity can help approximate the range of factors that might affect how well a product performs for a variety of people; however, collecting this information requires special consideration.
+
+If you do collect this data, always let data contributors self-identify (choose their own responses) instead of having data collectors make assumptions, which might be incorrect. Also include a "prefer not to answer" option for each question. These practices will show respect for the data contributors and yield more balanced and higher-quality data.
+
+These best practices have been developed based on three years of research with intended stakeholders and collaboration with many teams at Microsoft: [fairness and inclusiveness working groups](https://www.microsoft.com/ai/our-approach?activetab=pivot1:primaryr5), [Global Diversity & Inclusion](https://www.microsoft.com/diversity/default.aspx), [Global Readiness](https://www.microsoft.com/security/blog/2014/09/29/microsoft-global-readiness-diverse-cultures-multiple-languages-one-world/), [Office of Responsible AI](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6), and others.
+
+To enable people to self-identify, consider using the following survey questions.
+
+### Age
+
+**How old are you?**
+
+*Select your age range*
+
+[*Include appropriate age ranges as defined by project purpose, geographical region, and guidance from domain experts*]
+
+- \# to #
+- \# to #
+- \# to #
+- Prefer not to answer
++
+### Ancestry
+
+**Please select the categories that best describe your ancestry**
+
+*May select multiple*
+
+[*Include appropriate categories as defined by project purpose, geographical region, and guidance from domain experts*]
+
+- Ancestry group
+- Ancestry group
+- Ancestry group
+- Multiple (multiracial, mixed ancestry)
+- Not listed, I describe myself as: _________________
+- Prefer not to answer
++
+### Gender identity
+
+**How do you identify?**
+
+*May select multiple*
+
+[*Include appropriate gender identities as defined by project purpose, geographical region, and guidance from domain experts*]
+
+- Gender identity
+- Gender identity
+- Gender identity
+- Prefer to self-describe: _________________
+- Prefer not to answer
++
+>[!CAUTION]
+>In some parts of the world, there are laws that criminalize specific gender categories, so it may be dangerous for data contributors to answer this question honestly. Always give people a way to opt out. And work with regional experts and attorneys to conduct a careful review of the laws and cultural norms of each place where you plan to collect data, and if needed, avoid asking this question entirely.
++
+## Next steps
+For more information on how to work with your data:
+
+- [Secure data access in Azure Machine Learning](concept-data.md)
+- [Data ingestion options for Azure Machine Learning workflows](concept-data-ingestion.md)
+- [Optimize data processing with Azure Machine Learning](concept-optimize-data-processing.md)
+- [Use differential privacy in Azure Machine Learning](how-to-differential-privacy.md)
+
+Follow these how-to guides to work with your data after you've collected it:
+
+- [Set up image labeling](how-to-create-image-labeling-projects.md)
+- [Label images and text](how-to-label-data.md)
+
mariadb Howto Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-audit-logs-cli.md
Previously updated : 6/24/2020 - Last updated : 05/06/2022 +
+- devx-track-azurecli
+- kr2b-contr-experiment
# Configure and access Azure Database for MariaDB audit logs in the Azure CLI
To complete this guide:
Enable and configure audit logging using the following steps:
-1. Turn on audit logs by setting the **audit_logs_enabled** parameter to "ON".
+1. Turn on audit logs by setting the **audit_log_enabled** parameter to "ON".
+
+   ```azurecli-interactive
+   az mariadb server configuration set --name audit_log_enabled --resource-group myresourcegroup --server mydemoserver --value ON
+   ```
+
+1. Select the [event types](concepts-audit-logs.md#configure-audit-logging) to be logged by updating the **audit_log_events** parameter.
+
+   ```azurecli-interactive
+   az mariadb server configuration set --name audit_log_events --resource-group myresourcegroup --server mydemoserver --value "ADMIN,CONNECTION"
+   ```
+
+1. Add any MariaDB users to be excluded from logging by updating the **audit_log_exclude_users** parameter. Specify users by providing their MariaDB user name.
+
+   ```azurecli-interactive
+   az mariadb server configuration set --name audit_log_exclude_users --resource-group myresourcegroup --server mydemoserver --value "azure_superuser"
+   ```
+
+1. Add any specific MariaDB users to be included for logging by updating the **audit_log_include_users** parameter. Specify users by providing their MariaDB user name.
+
+   ```azurecli-interactive
+   az mariadb server configuration set --name audit_log_include_users --resource-group myresourcegroup --server mydemoserver --value "sampleuser"
+   ```
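+
+As a quick sanity check, you can read any of these parameters back; the `show` subcommand below is a small sketch reusing the same resource names as above:
+
+   ```azurecli-interactive
+   az mariadb server configuration show --name audit_log_enabled --resource-group myresourcegroup --server mydemoserver
+   ```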
mariadb Howto Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-parameters-using-powershell.md
Title: Configure server parameters - Azure PowerShell - Azure Database for MariaDB
+ Title: Configure Azure Database for MariaDB - Azure PowerShell
description: This article describes how to configure the service parameters in Azure Database for MariaDB using PowerShell. ms.devlang: azurepowershell Previously updated : 10/1/2020 - Last updated : 05/06/2022+
+- devx-track-azurepowershell
+- kr2b-contr-experiment
# Configure server parameters in Azure Database for MariaDB using PowerShell
Update-AzMariaDbConfiguration -Name slow_query_log -ResourceGroupName myresource
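To verify a change, you can read the parameter back. A minimal sketch, assuming the Az.MariaDb module's `Get-AzMariaDbConfiguration` cmdlet and the server names used in this article:

```azurepowershell-interactive
Get-AzMariaDbConfiguration -Name slow_query_log -ResourceGroupName myresourcegroup -ServerName mydemoserver
```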
## Next steps > [!div class="nextstepaction"]
-> [Auto grow storage in Azure Database for MariaDB server using PowerShell](howto-auto-grow-storage-powershell.md).
+> [Auto grow storage in Azure Database for MariaDB server using PowerShell](howto-auto-grow-storage-powershell.md).
mariadb Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-powershell.md
Title: Manage read replicas - Azure PowerShell - Azure Database for MariaDB
-description: Learn how to set up and manage read replicas in Azure Database for MariaDB using PowerShell.
+ Title: Manage Azure Database for MariaDB read replicas
+description: Learn how to set up and manage read replicas in Azure Database for MariaDB using PowerShell in the General Purpose or Memory Optimized pricing tiers.
Previously updated : 6/10/2020 - Last updated : 05/06/2022 +
+- devx-track-azurepowershell
+- kr2b-contr-experiment
# How to create and manage read replicas in Azure Database for MariaDB using PowerShell
-In this article, you learn how to create and manage read replicas in the Azure Database for MariaDB service using PowerShell. To learn more about read replicas, see the
-[overview](concepts-read-replicas.md).
-
-## Azure PowerShell
+In this article, you learn how to create and manage read replicas in the Azure Database for MariaDB service using PowerShell. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
You can create and manage read replicas using PowerShell.
Remove-AzMariaDbServer -Name mydemoserver -ResourceGroupName myresourcegroup
## Next steps > [!div class="nextstepaction"]
-> [Restart Azure Database for MariaDB server using PowerShell](howto-restart-server-powershell.md)
+> [Restart Azure Database for MariaDB server using PowerShell](howto-restart-server-powershell.md)
mariadb Howto Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-powershell.md
Title: Restart server - Azure PowerShell - Azure Database for MariaDB
-description: This article describes how you can restart an Azure Database for MariaDB server using PowerShell.
+ Title: Restart Azure Database for MariaDB server - Azure PowerShell
+description: Learn how you can restart an Azure Database for MariaDB server using PowerShell. The time required for a restart depends on the MariaDB recovery process.
Previously updated : 5/26/2020 - Last updated : 05/06/2022 +
+- devx-track-azurepowershell
+- kr2b-contr-experiment
# Restart Azure Database for MariaDB server using PowerShell
-This topic describes how you can restart an Azure Database for MariaDB server. You may need to restart
+This article describes how you can restart an Azure Database for MariaDB server. You may need to restart
your server for maintenance reasons, which causes a short outage during the operation. The server restart is blocked if the service is busy. For example, the service may be processing a
mysql How To Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-alert-on-metric.md
Title: Configure metric alerts - Azure portal - Azure Database for MySQL - Flexible Server
+ Title: Configure Azure Database for MySQL metric alerts
description: This article describes how to configure and access metric alerts for Azure Database for MySQL Flexible Server from the Azure portal. Previously updated : 9/21/2020 Last updated : 05/06/2022+ # Set up alerts on metrics for Azure Database for MySQL - Flexible Server
The alert triggers when the value of a specified metric crosses a threshold you
You can configure an alert to do the following actions when it triggers: * Send email notifications to the service administrator and co-administrators
-* Send email to additional emails that you specify.
+* Send email to other emails that you specify
* Call a webhook You can configure and get information about alert rules using:
You can configure and get information about alert rules using:
1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for MySQL flexible server you want to monitor. 2. Under the **Monitoring** section of the sidebar, select **Alerts**. 3. Select **+ New alert rule**.
-4. The **Create rule** page opens. Fill in the required information:
+4. The **Create rule** page opens. Fill in the required information.
5. Within the **Condition** section, choose **Select condition**.
-6. You will see a list of supported signals, select the metric you want to create an alert on. For example, select "Storage percent".
-7. You will see a chart for the metric for the last six hours. Use the **Chart period** dropdown to select to see longer history for the metric.
-8. Select the **Threshold** type (ex. "Static" or "Dynamic"), **Operator** (ex. "Greater than"), and **Aggregation type** (ex. average). This will determine the logic that the metric alert rule will evaluate.
- - If you are using a **Static** threshold, continue to define a **Threshold value** (ex. 85 percent). The metric chart can help determine what might be a reasonable threshold.
- - If you are using a **Dynamic** threshold, continue to define the **Threshold sensitivity**. The metric chart will display the calculated thresholds based on recent data. [Learn more about Dynamic Thresholds condition type and sensitivity options](../../azure-monitor/alerts/alerts-dynamic-thresholds.md).
+6. You'll see a list of supported signals. Select the metric you want to create an alert on. For example, select **Storage percent**.
+7. You'll see a chart for the metric for the last six hours. Use the **Chart period** dropdown to see a longer history for the metric.
+8. Select the **Threshold** type (ex. "Static" or "Dynamic"), **Operator** (ex. "Greater than"), and **Aggregation type** (ex. average). This selection determines the logic that the metric alert rule will evaluate.
+ * If you're using a **Static** threshold, continue to define a **Threshold value** (ex. 85 percent). The metric chart can help determine what might be a reasonable threshold.
+ * If you're using a **Dynamic** threshold, continue to define the **Threshold sensitivity**. The metric chart will display the calculated thresholds based on recent data. [Learn more about Dynamic Thresholds condition type and sensitivity options](../../azure-monitor/alerts/alerts-dynamic-thresholds.md).
9. Refine the condition by adjusting **Aggregation granularity (Period)**, the interval over which data points are grouped using the aggregation type function (ex. "30 minutes"), and **Frequency** (ex. "Every 15 Minutes").
-10. Click **Done**.
+10. Select **Done**.
11. Add an action group. An action group is a collection of notification preferences defined by the owner of an Azure subscription. Within the **Action Groups** section, choose **Select action group** to select an already existing action group to attach to the alert rule.
-12. You can also create a new action group to receive notifications on the alert. Refer to [create and manage action group](../../azure-monitor/alerts/action-groups.md) for more information.
-13. To create a new action group, choose **+ Create action group**. Fill out the "Create action group" form with a **Subscription**, **Resource group**, **Action group name** and **Display Name**.
+12. You can also create a new action group to receive notifications on the alert. For more information, see [create and manage action group](../../azure-monitor/alerts/action-groups.md).
+13. To create a new action group, choose **+ Create action group**. Fill out the **Create action group** form with a **Subscription**, **Resource group**, **Action group name** and **Display Name**.
14. Configure **Notifications** for action group.
- In **Notification type**, choose "Email Azure Resource Manager Role" to select subscription Owners, Contributors, and Readers to receive notifications. Choose the **Azure Resource Manager Role** for sending the email.
+ In **Notification type**, choose **Email Azure Resource Manager Role** to select subscription Owners, Contributors, and Readers to receive notifications. Choose the **Azure Resource Manager Role** for sending the email.
You can also choose **Email/SMS message/Push/Voice** to send notifications to specific recipients. Provide **Name** to the notification type and select **Review + Create** when completed.
- <!--:::image type="content" source="./media/howto-alert-on-metric/10-action-group-type.png" alt-text="Action group":::-->
- 15. Fill in **Alert rule details** like **Alert rule name**, **Description**, **Save alert rule to resource group** and **Severity**.
- <!--:::image type="content" source="./media/howto-alert-on-metric/11-name-description-severity.png" alt-text="Action group":::-->
- 16. Select **Create alert rule** to create the alert. Within a few minutes, the alert is active and triggers as previously described.+ ## Manage your alerts
-Once you have created an alert, you can select it and do the following actions:
+
+Once you create an alert, you can select it and do the following actions:
* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert. * **Edit** or **Delete** the alert rule. * **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications. - ## Next steps-- Learn more about [setting alert on metrics](../../azure-monitor/alerts/alerts-metric.md).-- Learn more about available [metrics in Azure Database for MySQL Flexible Server](./concepts-monitoring.md).-- [Understand how metric alerts work in Azure Monitor](../../azure-monitor/alerts/alerts-metric-overview.md)+
+* Learn more about [setting alert on metrics](../../azure-monitor/alerts/alerts-metric.md).
+* Learn more about available [metrics in Azure Database for MySQL Flexible Server](./concepts-monitoring.md).
+* [Understand how metric alerts work in Azure Monitor](../../azure-monitor/alerts/alerts-metric-overview.md).
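+
+If you'd rather script alert creation than click through the portal, the Azure CLI offers a comparable path. This is a minimal sketch, not taken from this article: the server name, action group ID, and the `storage_percent` metric name are assumptions to replace with your own values.
+
+```azurecli
+az monitor metrics alert create \
+  --name storage-percent-alert \
+  --resource-group myresourcegroup \
+  --scopes $(az mysql flexible-server show --name mydemoserver --resource-group myresourcegroup --query id --output tsv) \
+  --condition "avg storage_percent > 85" \
+  --window-size 30m \
+  --evaluation-frequency 15m \
+  --action <action-group-resource-id>
+```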
postgresql Howto Restart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-restart.md
Previously updated : 7/9/2021 Last updated : 05/06/2022 # Restart Azure Database for PostgreSQL - Hyperscale (Citus)
all** to continue.
> an Azure support request to restart the server group. Restarting the server group applies to all nodes; you can't selectively restart
-individual nodes. The restart applies to the nodes' entire virtual machines,
-not just the PostgreSQL server instances. Any applications attempting to use
-the database will experience connectivity downtime while the restart happens.
+individual nodes. The restart applies to the PostgreSQL server processes in the
+nodes. Any applications attempting to use the database will experience
+connectivity downtime while the restart happens.
**Next steps**
postgresql Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-connect-psql.md
Previously updated : 04/20/2022 Last updated : 05/05/2022 # Connect to a Hyperscale (Citus) server group with psql
When you create your Hyperscale (Citus) server group, a default database named *
When psql successfully connects to the database, you'll see a new prompt: ```
- psql (13.0 (Debian 13.0-1.pgdg100+1), server 13.5)
+ psql (14.2 (Debian 14.2-1.pgdg100+1))
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off) Type "help" for help.
When you create your Hyperscale (Citus) server group, a default database named *
``` server_version -
- 13.5
+ 14.2
(1 row) ```
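For reference, the connection that produces this prompt typically takes the following shape. The host follows the `server-group-name.postgres.database.azure.com` pattern described in the portal quickstart, the admin user is `citus`, and the database name and password below are placeholder assumptions:

```bash
psql "host=<server-group-name>.postgres.database.azure.com port=5432 dbname=citus user=citus password=<password> sslmode=require"
```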
postgresql Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-create-portal.md
Previously updated : 04/20/2022 Last updated : 05/05/2022 #Customer intent: As a developer, I want to provision a hyperscale server group so that I can run queries quickly on large datasets.
Visit [Create Hyperscale (Citus) server group](https://portal.azure.com/#create/
1. Fill out the **Basics** form. ![basic info form](../media/quickstart-hyperscale-create-portal/basics.png)
- Most options are self-explanatory. Note that the server group name will
- determine the DNS name your applications use to connect, in the form
- `server-group-name.postgres.database.azure.com`. Also, the admin username
- is required to be the value `citus`.
+ Most options are self-explanatory, but keep in mind:
+
+ * The server group name will determine the DNS name your
+ applications use to connect, in the form
+ `server-group-name.postgres.database.azure.com`.
+ * The admin username is required to be the value `citus`.
+ * You can choose a database version. Hyperscale (Citus) always supports the
+ latest PostgreSQL version, within one day of release.
2. Select **Configure server group**.
postgresql Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-distribute-tables.md
Previously updated : 04/20/2022 Last updated : 05/05/2022 # Model and load data
-Within Hyperscale (Citus) servers, there are three types of tables:
-
-* **Distributed Tables** - Distributed across worker nodes (scaled out).
- Large tables should be distributed tables to improve performance.
-* **Reference tables** - Replicated to all nodes. Enables joins with
- distributed tables. Typically used for small tables like countries or product
- categories.
-* **Local tables** - Tables that reside on coordinator node. Administration
- tables are good examples of local tables.
-
-In this quickstart, we'll focus on distributed tables, and get familiar with
-them. The data model we're going to work with is simple: an HTTP request log
-for multiple websites, sharded by site.
+In this example, we'll use Hyperscale (Citus) to store and query events
+recorded from GitHub open source contributors.
## Prerequisites
CREATE INDEX event_type_index ON github_events (event_type);
CREATE INDEX payload_index ON github_events USING GIN (payload jsonb_path_ops); ```
-## Shard tables across worker nodes
+Notice the GIN index on `payload` in `github_events`. The index allows fast
+querying in the JSONB column. Since Citus is a PostgreSQL extension, Hyperscale
+(Citus) supports advanced PostgreSQL features like the JSONB datatype for
+storing semi-structured data.
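+
+As a quick illustration of what that index enables, a JSONB containment query can use it directly. The `action` key below is an assumption about the payload shape rather than something this quickstart defines:
+
+```sql
+-- Count events whose JSON payload contains this key/value pair.
+-- The @> containment operator is served by the jsonb_path_ops GIN index.
+SELECT count(*)
+FROM github_events
+WHERE payload @> '{"action": "started"}';
+```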
+
+## Distribute tables
+
+`create_distributed_table()` is the magic function that Hyperscale (Citus)
+provides to distribute tables and use resources across multiple machines. The
+function decomposes tables into shards, which can be spread across nodes for
+increased storage and compute performance.
-Next, we'll tell Hyperscale (Citus) to shard the tables. If your server group
-is running on the Standard Tier (meaning it has worker nodes), then the table
-shards will be created on workers. If the server group is running on the Basic
-Tier, then the shards will all be stored on the coordinator node.
+The server group in this quickstart uses the Basic Tier, so the shards will be
+stored on just one node. However, if you later decide to graduate to the
+Standard Tier, then the shards can be spread across more nodes. With Hyperscale
+(Citus), you can start small and scale seamlessly.
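+
+If you're curious which nodes currently make up your server group, Citus keeps that metadata in the `pg_dist_node` table; this optional check is just a sketch:
+
+```sql
+-- Nodes registered in the Citus metadata for this server group.
+SELECT nodename, nodeport FROM pg_dist_node;
+```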
-To shard and distribute the tables, call `create_distributed_table()` and
-specify the table and key to shard it on.
+Let's distribute the tables:
```sql SELECT create_distributed_table('github_users', 'user_id');
SELECT create_distributed_table('github_events', 'user_id');
[!INCLUDE [azure-postgresql-hyperscale-dist-alert](../../../includes/azure-postgresql-hyperscale-dist-alert.md)]
-By default, `create_distributed_table()` splits tables into 32 shards. We can
-verify using the `citus_shards` view:
-
-```sql
-SELECT table_name, count(*) AS shards
- FROM citus_shards
- GROUP BY 1;
-```
-
-```
- table_name | shards
-+--
- github_users | 32
- github_events | 32
-(2 rows)
-```
- ## Load data into distributed tables We're ready to fill the tables with sample data. For this quickstart, we'll use
pre-installed in the Azure Cloud Shell.)
\COPY github_events FROM PROGRAM 'curl https://examples.citusdata.com/events.csv' WITH (FORMAT CSV) ```
-We can confirm the shards now hold data:
+We can review details of our distributed tables, including their sizes, with
+the `citus_tables` view:
```sql
-SELECT table_name,
- pg_size_pretty(sum(shard_size)) AS shard_size_sum
- FROM citus_shards
- GROUP BY 1;
+SELECT * FROM citus_tables;
``` ```
- table_name | shard_size_sum
-+-
- github_users | 38 MB
- github_events | 95 MB
+ table_name | citus_table_type | distribution_column | colocation_id | table_size | shard_count | table_owner | access_method
+---------------+------------------+---------------------+---------------+------------+-------------+-------------+---------------
+ github_events | distributed | user_id | 1 | 388 MB | 32 | citus | heap
+ github_users | distributed | user_id | 1 | 39 MB | 32 | citus | heap
(2 rows) ```
-If you created your server group in the Basic Tier, all shards are stored on
-one node, the coordinator. Otherwise, if the server group is in the Standard
-Tier, it has multiple worker nodes that store the shards.
- ## Next steps
-Now we have a table sharded and loaded with data. Next, let's try running
-queries across the data in these shards.
+Now we have distributed tables and loaded them with data. Next, let's try
+running queries across the distributed tables.
> [!div class="nextstepaction"] > [Run distributed queries >](quickstart-run-queries.md)
postgresql Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-run-queries.md
Previously updated : 04/28/2022 Last updated : 05/05/2022 # Run queries
ORDER BY hour;
(4 rows) ```
-Hyperscale (Citus) also automatically applies data definition changes across
-the shards of a distributed table.
+Hyperscale (Citus) combines the power of SQL and NoSQL datastores
+with structured and semi-structured data.
+
+In addition to running queries, Hyperscale (Citus) also applies data definition
+changes across the shards of a distributed table:
```sql -- DDL commands that are also parallelized
ALTER TABLE github_users ADD COLUMN dummy_column integer;
## Next steps The quickstart is now complete. You've successfully created a scalable
-Hyperscale (Citus) server group, created tables, sharded them, loaded data, and
-run distributed queries.
+Hyperscale (Citus) server group, created tables, distributed them, loaded data,
+and run distributed queries.
Now you're ready to learn to build applications with Hyperscale (Citus).
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table to define the packet core instance
|The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6 gateway**| | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**| | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this if you don't want to support static IP address allocation for this site. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
- |Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the core network, maximizing the utility of a limited supply of public IP addresses. |**NAPT**|
+ |Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses. |**NAPT**|
## Next steps
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
In this how-to guide, you'll carry out each of the tasks you need to complete be
## Get access to Azure Private 5G Core for your Azure subscription
-Contact your support representative and ask them to register your Azure subscription for access to Azure Private 5G Core.
+Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you do not already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://aka.ms/privateMECMSP).
-Once your support representative has confirmed your access, register the Mobile Network resource provider (Microsoft.MobileNetwork) for your subscription, as described in [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
+Once your trials engineer has confirmed your access, register the Mobile Network resource provider (Microsoft.MobileNetwork) for your subscription, as described in [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
## Allocate subnets and IP addresses
-For each of the following networks, allocate a subnet and then identify the listed IP addresses. If you're deploying multiple sites, you'll need to collect this information for each site.
+Azure Private 5G Core requires a management network, access network, and data network. These networks can all be part of the same, larger network, or they can be separate. The approach you use depends on your traffic separation requirements.
+
+For each of these networks, allocate a subnet and then identify the listed IP addresses. If you're deploying multiple sites, you'll need to collect this information for each site.
### Management network - Network address in Classless Inter-Domain Routing (CIDR) notation. - Default gateway. -- One IP address for the Azure Stack Edge Pro device's management port.
+- One IP address for the Azure Stack Edge Pro device's management port. You'll choose a port between 2 and 4 to use as the management port as part of [setting up your Azure Stack Edge Pro device](#order-and-set-up-your-azure-stack-edge-pro-devices).
- Three sequential IP addresses for the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster nodes. - One IP address for accessing local monitoring tools for the packet core instance.
For each site you're deploying, do the following:
- Decide which IP address allocation methods you want to support. - For each method you want to support, identify an IP address pool from which IP addresses can be allocated to UEs. You'll need to provide each IP address pool in CIDR notation.
- If you decide to support both methods for a particular site, ensure that the IP address pools are of the same size and do not overlap.
+ If you decide to support both methods for a particular site, ensure that the IP address pools are of the same size and do not overlap.
+
+- Decide whether you want to enable Network Address and Port Translation (NAPT) for the data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses.
+
+## Prepare your networks
+
+For each site you're deploying, do the following.
+
+- Ensure you have at least one network switch with at least three ports available. You'll connect each Azure Stack Edge Pro device to the switch(es) in the same site as part of the instructions in [Order and set up your Azure Stack Edge Pro device(s)](#order-and-set-up-your-azure-stack-edge-pro-devices).
+- If you're not enabling NAPT as described in [Allocate user equipment (UE) IP address pools](#allocate-user-equipment-ue-ip-address-pools), configure the data network to route traffic destined for the UE IP address pools via the IP address you allocated for the packet core instance's N6 interface.
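+
+  For example, on a Linux-based router this could be a single static route. This is illustrative only: the UE subnet reuses the example value from earlier, and the gateway address is a placeholder for your packet core's N6 IP address.
+
+  ```bash
+  # Illustrative only: route UE-pool traffic back via the packet core's N6 interface.
+  ip route add 198.51.100.0/24 via <packet-core-n6-ip-address>
+  ```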
## Order and set up your Azure Stack Edge Pro device(s)
Do the following for each site you want to add to your private mobile network. D
| Step No. | Description | Detailed instructions | |--|--|--|
-| 1. | Order and prepare your Azure Stack Edge Pro device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-prep.md?tabs=azure-portal) |
-| 2. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network</br>- Port 6 - data network</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-install.md) |
-| 3. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-connect.md) |
-| 4. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md) |
-| 5. | Configure a name, Domain Name System (DNS) name, and (optionally) time settings. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) |
-| 6. | Configure certificates for your Azure Stack Edge Pro device. | [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md) |
-| 7. | Activate your Azure Stack Edge Pro device. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
-| 8. | Run the diagnostics tests for the Azure Stack Edge Pro device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 2.</br></br>For all other ports, you can ignore the warning.</br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
-| 9. | Deploy an Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on your Azure Stack Edge Pro device. At the end of this step, the Kubernetes cluster will be connected to Azure Arc and ready to host a packet core instance. During this step, you'll need to use the information you collected in [Allocate subnets and IP addresses](#allocate-subnets-and-ip-addresses). | Contact your support representative for detailed instructions. |
+| 1. | Complete the Azure Stack Edge Pro deployment checklist. | [Deployment checklist for your Azure Stack Edge Pro GPU device](../databox-online/azure-stack-edge-gpu-deploy-checklist.md)|
+| 2. | Order and prepare your Azure Stack Edge Pro device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-prep.md?tabs=azure-portal) |
+| 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network</br>- Port 6 - data network</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-install.md) |
+| 4. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-connect.md) |
+| 5. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md) |
+| 6. | Configure a name, Domain Name System (DNS) name, and (optionally) time settings. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) |
+| 7. | Configure certificates for your Azure Stack Edge Pro device. | [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md) |
+| 8. | Activate your Azure Stack Edge Pro device. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
+| 9. | Run the diagnostics tests for the Azure Stack Edge Pro device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 3.</br></br>For all other ports, you can ignore the warning.</br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
+| 10. | Deploy an Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on your Azure Stack Edge Pro device. At the end of this step, the Kubernetes cluster will be connected to Azure Arc and ready to host a packet core instance. During this step, you'll need to use the information you collected in [Allocate subnets and IP addresses](#allocate-subnets-and-ip-addresses). | Contact your trials engineer for detailed instructions. |
## Next steps
purview Azure Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/azure-purview-connector-overview.md
The following file types are supported for scanning, for schema extraction, and
> * Microsoft Purview scanner supports scanning snappy compressed PARQUET types for schema extraction and classification. > * For GZIP file types, the GZIP must be mapped to a single csv file within. > Gzip files are subject to System and Custom Classification rules. We currently don't support scanning a gzip file mapped to multiple files within, or any file type other than csv.
- > * For delimited file types(CSV, PSV, SSV, TSV, TXT), we do not support data type detection. The data type will be listed as "string" for all columns.
+ > * For delimited file types (CSV, PSV, SSV, TSV, TXT), we do not support data type detection. The data type will be listed as "string" for all columns.
- Document file formats supported by extension: DOC, DOCM, DOCX, DOT, ODP, ODS, ODT, PDF, POT, PPS, PPSX, PPT, PPTM, PPTX, XLC, XLS, XLSB, XLSM, XLSX, XLT - Microsoft Purview also supports custom file extensions and custom parsers.
For all structured file formats, Microsoft Purview scanner samples files in the
- For structured file types, it samples the top 128 rows in each column or the first 1 MB, whichever is lower. - For document file formats, it samples the first 20 MB of each file. - If a document file is larger than 20 MB, then it is not subject to a deep scan (subject to classification). In that case, Microsoft Purview captures only basic meta data like file name and fully qualified name.-- For **tabular data sources(SQL, CosmosDB)**, it samples the top 128 rows.
+- For **tabular data sources (SQL, CosmosDB)**, it samples the top 128 rows.
## Resource set file sampling
A folder or group of partition files is detected as a *resource set* in Microsof
File sampling for resource sets by file types: - **Delimited files (CSV, PSV, SSV, TSV)** - 1 in 100 files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set'-- **Data Lake file types (Parquet, Avro, Orc)** - 1 in 18446744073709551615 (long max) files are sampled (L3 scan) within a folder or group of partition files that are considered a *resource set*
+- **Data Lake file types (Parquet, Avro, Orc)** - 1 in 18446744073709551615 (long max) files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set'
- **Other structured file types (JSON, XML, TXT)** - 1 in 100 files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set' - **SQL objects and CosmosDB entities** - Each file is L3 scanned. - **Document file types** - Each file is L3 scanned. Resource set patterns don't apply to these file types. ## Classification
-All 206 system classification rules apply to structured file formats. Only the MCE classification rules apply to document file types (Not the data scan native regex patterns, bloom filter-based detection). For more information on supported classifications, see [Supported classifications in Microsoft Purview](supported-classifications.md).
+All 208 system classification rules apply to structured file formats. Only the MCE classification rules apply to document file types (Not the data scan native regex patterns, bloom filter-based detection). For more information on supported classifications, see [Supported classifications in Microsoft Purview](supported-classifications.md).
## Next steps
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md
Additionally, you can deploy managed private endpoints for your Azure Key Vault
> [!IMPORTANT] > If you are planning to scan Azure Synapse workspaces using Managed Virtual Network, you are also required to [configure Azure Synapse workspace firewall access](register-scan-synapse-workspace.md#set-up-azure-synapse-workspace-firewall-access) to enable **Allow Azure services and resources to access this workspace**. Currently, we do not support setting up scans for an Azure Synapse workspace from the Microsoft Purview governance portal, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces. If you cannot enable the firewall:
-> - You can use [Microsoft Purview Rest API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to create a new scan for your Synapse workspaces including dedicated and serverless pools.
+> - You can use [Microsoft Purview REST API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to create a new scan for your Synapse workspaces including dedicated and serverless pools.
> - You must use **SQL Authentication** as authentication mechanism. ### Managed Virtual Network
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
Last updated 03/09/2022
Microsoft Purview uses **Collections** to organize and manage access across its sources, assets, and other artifacts. This article describes collections and access management in your Microsoft Purview account.
+> [!IMPORTANT]
+> This article refers to permissions required for the Microsoft Purview governance portal. If you are looking for permissions information for the Microsoft Purview compliance portal, follow [the article for permissions in the Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center-permissions).
+ ## Collections A collection is a tool Microsoft Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All accesses to Microsoft Purview's resources are managed from collections in the Microsoft Purview account itself.
purview Catalog Private Link Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-faqs.md
Previously updated : 05/05/2022 Last updated : 05/06/2022 # Customer intent: As a Microsoft Purview admin, I want to set up private endpoints and managed vnets for my Microsoft Purview account for secure access or ingestion. # FAQ about Microsoft Purview private endpoints and Managed VNets
At least one account and portal private endpoints are required, if public access
At least one account, portal and ingestion private endpoint are required, if public access in Microsoft Purview account is set to **deny** and you are planning to scan additional data sources using a self-hosted integration runtime. ### What inbound and outbound communications are allowed through public endpoint for Microsoft Purview Managed VNets?+ No inbound communication is allowed into a Managed VNet from public network. All ports are opened for outbound communications.
+In Microsoft Purview, a Managed VNet can be used to privately connect to Azure data sources to extract metadata during scan.
### Why do I receive the following error message when I try to launch Microsoft Purview governance portal from my machine?
purview How To Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-resource-group.md
Check blog, demo and related tutorials:
* [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md) * [Data owner policies on an Azure Storage account](./how-to-data-owner-policies-storage.md) * [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
-* [Demo of data owner access policies for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Video: Demo of data owner access policies for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
purview Register Scan Power Bi Tenant Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-troubleshoot.md
Previously updated : 04/29/2022 Last updated : 05/06/2022
This article explores common troubleshooting methods for scanning Power BI tenants in [Microsoft Purview](overview.md).
-## Supported capabilities
+## Supported scenarios for Power BI scans
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
-||||||||
-| [Yes](register-scan-power-bi-tenant.md#deployment-checklist)| [Yes](register-scan-power-bi-tenant.md#deployment-checklist)| Yes | No | No | No| [Yes](how-to-lineage-powerbi.md)|
+### Same-tenant
+
+|**Scenarios** |**Microsoft Purview public access allowed/denied** |**Power BI public access allowed /denied** | **Runtime option** | **Authentication option** | **Deployment checklist** |
+|---|---|---|---|---|---|
+|Public access with Azure IR |Allowed |Allowed |Azure Runtime | Microsoft Purview Managed Identity | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
+|Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
+|Private access |Allowed |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
+|Private access |Denied |Allowed* |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
+|Private access |Denied |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
+
+\* Power BI tenant must have a private endpoint which is deployed in a Virtual Network accessible from the self-hosted integration runtime VM. For more information, see [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links).
+
+### Cross-tenant
+
+|**Scenarios** |**Microsoft Purview public access allowed/denied** |**Power BI public access allowed /denied** | **Runtime option** | **Authentication option** | **Deployment checklist** |
+|---|---|---|---|---|---|
+|Public access with Azure IR |Allowed |Allowed |Azure runtime |Delegated Authentication | [Deployment checklist](register-scan-power-bi-tenant-cross-tenant.md#deployment-checklist) |
+|Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Deployment checklist](register-scan-power-bi-tenant-cross-tenant.md#deployment-checklist) |
## Troubleshooting tips
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
This article outlines how to register a Power BI tenant in a **same-tenant scena
|Public access with Azure IR |Allowed |Allowed |Azure Runtime | Microsoft Purview Managed Identity | [Review deployment checklist](#deployment-checklist) | |Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) | |Private access |Allowed |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
-|Private access |Denied |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
+|Private access |Denied |Allowed* |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
|Private access |Denied |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
+\* Power BI tenant must have a private endpoint which is deployed in a Virtual Network accessible from the self-hosted integration runtime VM. For more information, see [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links).
+ ### Known limitations - If Microsoft Purview or Power BI tenant is protected behind a private endpoint, Self-hosted runtime is the only option to scan.
purview Tutorial Using Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-using-rest-apis.md
Once service principal is created, you need to assign Data plane roles of your p
>[!NOTE] >You can also assign your service principal permission to any sub-collections, instead of the root collection. However, all APIs will be scoped to that collection (and sub-collections that inherit permissions), and users trying to call the API for another collection will get errors.
-1. Select **Access control (IAM)**.
-
-1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-
- ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
-
-1. Select the **Role** tab.
+1. Select the **Role assignments** tab.
1. Assign the following roles to the service principal created previously to access various data planes in Microsoft Purview. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
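Once the roles are assigned, the service principal can request a bearer token for the Microsoft Purview APIs. A minimal sketch with `curl`; the tenant ID, client ID, and secret are placeholders, and `https://purview.azure.net` is assumed as the resource value:

```bash
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials" \
  -d "client_id=<client-id>" \
  -d "client_secret=<client-secret>" \
  -d "resource=https://purview.azure.net"
```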
security Key Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/key-management.md
Azure offers several options for storing and managing your keys in the cloud, in
**Azure Dedicated HSM**: A FIPS 140-2 Level 3 validated bare metal HSM offering that lets customers lease a general-purpose HSM appliance that resides in Microsoft datacenters. The customer has complete and total ownership over the HSM device and is responsible for patching and updating the firmware when required. Microsoft has no permissions on the device or access to the key material, and Dedicated HSM is not integrated with any Azure PaaS offerings. Customers can interact with the HSM using the PKCS#11, JCE/JCA, and KSP/CNG APIs. This offering is most useful for legacy lift-and-shift workloads, PKI, SSL Offloading and Keyless TLS (supported integrations include F5, Nginx, Apache, Palo Alto, IBM GW and more), OpenSSL applications, Oracle TDE, and Azure SQL TDE IaaS. For more information, see [What is Azure Dedicated HSM?](../../dedicated-hsm/overview.md)
-**Azure Payments HSM**: A FIPS 140-2 Level 3, PCI HSM v3, validated bare metal offering that lets customers lease a payment HSM appliance in Microsoft datacenters for payments operations, including payment processing, payment credential issuing, securing keys and authentication data, and sensitive data protection. The service is currently undergoing PCI DSS and PCI 3DS audits. Azure Payment HSM offers single-tenant HSMs for customers to have complete administrative control and exclusive access to the HSM. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released, to ensure complete privacy and security is maintained. This offering is currently in public preview. For more information, see [About Azure Key Vault](../../payment-hsm/overview.md).
+**Azure Payments HSM** (in public preview): A FIPS 140-2 Level 3, PCI HSM v3, validated bare metal offering that lets customers lease a payment HSM appliance in Microsoft datacenters for payments operations, including payment processing, payment credential issuing, securing keys and authentication data, and sensitive data protection. The service is PCI DSS and PCI 3DS compliant. Azure Payment HSM offers single-tenant HSMs for customers to have complete administrative control and exclusive access to the HSM. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released, to ensure complete privacy and security is maintained. This offering is currently in public preview. For more information, see [About Azure Payment HSM](../../payment-hsm/overview.md).
### Pricing
Azure Key Vault and Azure Key Vault Managed HSM have integrations with Azure Ser
### APIs
-Dedicated HSM and Payments HSM support the PKCS#11, JCE/JCA, and KSP/CNG APIs, but Azure Key Vault and Managed HSM do not. Azure Key Vault and Managed HSM use the Azure Key Vault REST API and offer SDK support. For more information on the Azure Key Vault API, see [Azure Key Vault REST API Reference](/rest/api/keyvault/).
+Dedicated HSM and Payments HSM support the PKCS#11, JCE/JCA, and KSP/CNG APIs, but Azure Key Vault and Managed HSM do not. Azure Key Vault and Managed HSM use the Azure Key Vault REST API and offer SDK support. For more information on the Azure Key Vault API, see [Azure Key Vault REST API Reference](/rest/api/keyvault/).
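+
+As an example of that SDK support, creating a key with the Azure Key Vault .NET SDK takes only a few lines. This is a minimal sketch; the vault URL and key name are placeholders:
+
+```csharp
+// Sketch: create an RSA key in Azure Key Vault using the
+// Azure.Security.KeyVault.Keys and Azure.Identity packages.
+using System;
+using Azure.Identity;
+using Azure.Security.KeyVault.Keys;
+
+var client = new KeyClient(
+    new Uri("https://<your-vault-name>.vault.azure.net"),
+    new DefaultAzureCredential());
+
+KeyVaultKey key = client.CreateKey("example-key", KeyType.Rsa);
+Console.WriteLine($"Created key {key.Name} ({key.KeyType})");
+```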
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/steps-secure-identity.md
For more information, see the article [Blocking legacy authentication protocols
Using the verify explicitly principle, you should reduce the impact of compromised user credentials when they happen. For each app in your environment, consider the valid use cases: which groups, which networks, which devices, and other elements are authorized, and then block the rest. With Azure AD Conditional Access, you can control how authorized users access their apps and resources based on specific conditions you define.
+For more information on how to use Conditional Access for your cloud apps and user actions, see [Conditional Access: Cloud apps, actions, and authentication context](../../active-directory/conditional-access/concept-conditional-access-cloud-apps.md).
+ ### Review and govern admin roles Another Zero Trust pillar is the need to minimize the likelihood that a compromised account can operate with a privileged role. This control can be accomplished by assigning the least amount of privilege to an identity. If you're new to Azure AD roles, this article will help you understand them.
sentinel Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/overview.md
Microsoft Sentinel is a scalable, cloud-native, **security information and event management (SIEM)** and **security orchestration, automation, and response (SOAR)** solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for attack detection, threat visibility, proactive hunting, and threat response.
-Microsoft Sentinel is your birds-eye view across the enterprise alleviating the stress of increasingly sophisticated attacks, increasing volumes of alerts, and long resolution time frames.
+Microsoft Sentinel is your bird's-eye view across the enterprise, alleviating the stress of increasingly sophisticated attacks, increasing volumes of alerts, and long resolution time frames.
- **Collect data at cloud scale** across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
If not, then your SAP configuration and authentication secrets can and should be
To view a list of the available containers, use the command: `docker ps -a`. --- # [Manual Deployment](#tab/deploy-manually) 1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
If not, then your SAP configuration and authentication secrets can and should be
docker start sapcon-$sid ``` ++ ## Next steps Once the connector is deployed, proceed to deploy the Continuous Threat Monitoring for SAP solution content
service-fabric Service Fabric Connect To Secure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-connect-to-secure-cluster.md
catch (Exception e)
The following example relies on Microsoft.IdentityModel.Clients.ActiveDirectory, Version: 2.19.208020213.
-For more information on AAD token acquisition, see [Microsoft.IdentityModel.Clients.ActiveDirectory](/dotnet/api/microsoft.identitymodel.clients.activedirectory).
+> [!IMPORTANT]
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+
+For more information on AAD token acquisition, see [Microsoft.Identity.Client](/dotnet/api/microsoft.identity.client?view=azure-dotnet).
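The article's own sample below uses C# with Microsoft.Identity.Client. Purely as an illustrative sketch of the same acquire-token flow, here is a rough equivalent using the MSAL library for Python; the client, tenant, and cluster IDs are placeholders, not values from this article:

```python
# Hypothetical sketch of interactive AAD token acquisition with MSAL for
# Python; all IDs below are placeholders.
import msal

app = msal.PublicClientApplication(
    client_id="<native-client-application-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# Prompts the user to sign in and returns a token for the cluster app.
result = app.acquire_token_interactive(
    scopes=["<cluster-application-id>/.default"]
)
if "access_token" in result:
    print("Token acquired.")
else:
    print(result.get("error"), result.get("error_description"))
```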
```csharp string tenantId = "C15CFCEA-02C1-40DC-8466-FBD0EE0B05D2";
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 11/29/2020 Last updated : 05/05/2022
This article summarizes support and prerequisites for disaster recovery of Azure
**Azure portal** | Supported. **PowerShell** | Supported. [Learn more](azure-to-azure-powershell.md) **REST API** | Supported.
-**CLI** | Not currently supported
+**CLI** | Not currently supported.
## Resource move/migrate support **Resource action** | **Details** |
-**Move vaults across resource groups** | Not supported
+**Move vaults across resource groups** | Not supported.
**Move compute/storage/network resources across resource groups** | Not supported.<br/><br/> If you move a VM or associated components such as storage/network after the VM is replicating, you need to disable and then re-enable replication for the VM. **Replicate Azure VMs from one subscription to another for disaster recovery** | Supported within the same Azure Active Directory tenant. **Migrate VMs across regions within supported geographical clusters (within and across subscriptions)** | Supported within the same Azure Active Directory tenant.
Debian 8 | Includes support for all 8. *x* versions [Supported kernel versions](
Debian 9 | Includes support for 9.1 to 9.13. Debian 9.0 is not supported. [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) Debian 10 | [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) SUSE Linux Enterprise Server 12 | SP1, SP2, SP3, SP4, SP5 [(Supported kernel versions)](#supported-suse-linux-enterprise-server-12-kernel-versions-for-azure-virtual-machines)
-SUSE Linux Enterprise Server 15 | 15, SP1, SP2[(Supported kernel versions)](#supported-suse-linux-enterprise-server-15-kernel-versions-for-azure-virtual-machines), SP3
+SUSE Linux Enterprise Server 15 | 15, SP1, SP2 [(Supported kernel versions)](#supported-suse-linux-enterprise-server-15-kernel-versions-for-azure-virtual-machines)
SUSE Linux Enterprise Server 11 | SP3<br/><br/> Upgrade of replicating machines from SP3 to SP4 isn't supported. If a replicated machine has been upgraded, you need to disable replication and re-enable replication after the upgrade. SUSE Linux Enterprise Server 11 | SP4 Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) (running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4, 5, and 6 (UEK3, UEK4, UEK5, UEK6), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5 <br/><br/>8.1 (running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/))
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
> [!NOTE] > For Linux versions, Azure Site Recovery does not support custom OS kernels. Only the stock kernels that are part of the distribution minor version release/update are supported.
-**Note: To support latest Linux kernels within 15 days of release, Azure Site Recovery rolls out hot fix patch on top of latest mobility agent version. This fix is rolled out in between two major version releases. To update to latest version of mobility agent (including hot fix patch), follow steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in Azure to Azure DR scenario.
+> [!NOTE]
+> To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure-to-Azure DR scenario.
#### Supported Ubuntu kernel versions for Azure virtual machines
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 20.04 LTS |[9.44](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 5.4.0-26-generic to 5.4.0-60-generic </br> 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 5.4.0-81-generic </br> 5.4.0-1056-azure |
-**Note: To support latest Linux kernels within 15 days of release, Azure Site Recovery rolls out hot fix patch on top of latest mobility agent version. This fix is rolled out in between two major version releases. To update to latest version of mobility agent (including hot fix patch) follow steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in Azure to Azure DR scenario.
+> [!NOTE]
+> To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure-to-Azure DR scenario.
-**Note: For Ubuntu 20.04, we had initially rolled out support for kernels 5.8.* but we have since found issues with support for this kernel and hence have redacted these kernels from our support statement for the time being.
+> [!NOTE]
+> For Ubuntu 20.04, we initially rolled out support for 5.8.* kernels, but we've since found issues with them and have removed these kernels from our support statement for the time being.
#### Supported Debian kernel versions for Azure virtual machines
Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azur
Debian 10 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64 Debian 10 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br>
-**Note: To support latest Linux kernels within 15 days of release, Azure Site Recovery rolls out hot fix patch on top of latest mobility agent version. This fix is rolled out in between two major version releases. To update to latest version of mobility agent (including hot fix patch) follow steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in Azure to Azure DR scenario.
+> [!NOTE]
+> To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure-to-Azure DR scenario.
#### Supported SUSE Linux Enterprise Server 12 kernel versions for Azure virtual machines
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure SUSE Linux Enterprise Server 15, SP1, SP2 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
-**Note: To support latest Linux kernels within 15 days of release, Azure Site Recovery rolls out hot fix patch on top of latest mobility agent version. This fix is rolled out in between two major version releases. To update to latest version of mobility agent (including hot fix patch) follow steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in Azure to Azure DR scenario.
+> [!NOTE]
+> To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure-to-Azure DR scenario.
## Replicated machines - Linux file system/guest storage
site-recovery Azure To Azure Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md
Go to **Virtual machines** > **Settings** > **Extensions** and check for any ext
## VM provisioning state isn't valid (error code 150019)
-To enable replication on the VM, its provisioning state must be **Succeeded**. Follow these steps to check the provisioning state:
+To enable replication on the VM, its provisioning state must be **Succeeded**. Perform the following steps to check the provisioning state:
1. In the Azure portal, select the **Resource Explorer** from **All Services**. 1. Expand the **Subscriptions** list and select your subscription.
Delete the replica disk identified in the error message and retry the failed pro
## Enable protection failed as the installer is unable to find the root disk (error code 151137)
-This error occurs for Linux machines where the OS disk is encrypted using Azure Disk Encryption (ADE). This is valid issue in Agent version 9.35 only.
+This error occurs for Linux machines where the OS disk is encrypted using Azure Disk Encryption (ADE). This is a known issue in agent version 9.35 only.
### Possible causes
The installer is unable to find the root disk that hosts the root file system.
### Fix the problem
-Follow the below steps to fix this issue -
+Perform the following steps to fix this issue.
1. Find the agent bits under the directory _/var/lib/waagent_ on RHEL and CentOS machines using the following command: <br>
site-recovery Vmware Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial.md
This article describes how to enable replication for on-premises VMware VMs, for disaster recovery to Azure using the [Azure Site Recovery](site-recovery-overview.md) service - Classic.
-For information about disaster recovery in Azure Site Recovery Preview, see [this article](vmware-azure-set-up-replication-tutorial-preview.md)
+For information about disaster recovery in Azure Site Recovery Preview, see [this article](vmware-azure-set-up-replication-tutorial-preview.md).
This is the third tutorial in a series that shows how to set up disaster recovery to Azure for on-premises VMware VMs. In the previous tutorial, we [prepared the on-premises VMware environment](vmware-azure-tutorial-prepare-on-premises.md) for disaster recovery to Azure.
Complete the previous tutorials:
## Select a protection goal 1. In **Recovery Services vaults**, select the vault name. We're using **ContosoVMVault** for this scenario.
-2. In **Getting Started**, select Site Recovery. Then select **Prepare Infrastructure**.
+2. In **Getting Started**, select **Site Recovery**. Then, select **Prepare Infrastructure**.
3. In **Protection goal** > **Where are your machines located**, select **On-premises**. 4. In **Where do you want to replicate your machines**, select **To Azure**.
-5. In **Are your machines virtualized**, select **Yes, with VMware vSphere Hypervisor**. Then select **OK**.
+5. In **Are your machines virtualized**, select **Yes, with VMware vSphere Hypervisor**. Then, select **OK**.
spring-cloud Concept Manage Monitor App Spring Boot Actuator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/concept-manage-monitor-app-spring-boot-actuator.md
Title: "Manage and monitor app with Azure Spring Boot Actuator"
+ Title: "Manage and monitor app with Spring Boot Actuator"
description: Learn how to manage and monitor app with Spring Boot Actuator. Previously updated : 05/20/2020 Last updated : 05/06/2022
-# Manage and monitor app with Azure Spring Boot Actuator
+# Manage and monitor app with Spring Boot Actuator
**This article applies to:** ✔️ Java ❌ C#
static-web-apps Front Door Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/front-door-manual.md
Add the following settings to disable Front Door's caching policies from trying
1. Select the **Add an action** dropdown.
-1. Select **Cache expiration**.
+1. Select **Route configuration override**.
-1. Select **Bypass cache** in the *Cache Behavior* dropdown.
+1. Select **Disabled** in the *Caching* dropdown.
1. Select the **Save** button.
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Storage lifecycle management policies to create automated
Previously updated : 04/18/2022 Last updated : 05/05/2022
Lifecycle management supports tiering and deletion of current versions, previous
| tierToCool | Supported for `blockBlob` | Supported | Supported | | enableAutoTierToHotFromCool | Supported for `blockBlob` | Not supported | Not supported | | tierToArchive | Supported for `blockBlob` | Supported | Supported |
-| delete | Supported for `blockBlob` and `appendBlob` | Supported | Supported |
+| delete<sup>1</sup> | Supported for `blockBlob` and `appendBlob` | Supported | Supported |
+
+<sup>1</sup> When applied to an account with a hierarchical namespace enabled, a delete action removes empty directories. If the directory is not empty, then the delete action removes objects that meet the policy conditions within the first 24-hour cycle. If that action results in an empty directory that also meets the policy conditions, then that directory will be removed within the next 24-hour cycle, and so on.
> [!NOTE] > If you define more than one action on the same blob, lifecycle management applies the least expensive action to the blob. For example, action `delete` is cheaper than action `tierToArchive`. Action `tierToArchive` is cheaper than action `tierToCool`.
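As an illustrative sketch only (the resource names and the 365-day threshold below are assumptions, not values from this article), a delete action like the one described above can be applied with the azure-mgmt-storage package:

```python
# Assumed sketch: apply a lifecycle policy with a delete action using
# azure-mgmt-storage. Subscription, resource group, and account names
# are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

policy = {
    "policy": {
        "rules": [
            {
                "name": "delete-stale-block-blobs",
                "enabled": True,
                "type": "Lifecycle",
                "definition": {
                    "filters": {"blob_types": ["blockBlob"]},
                    "actions": {
                        # Delete current versions not modified for a year.
                        "base_blob": {
                            "delete": {"days_after_modification_greater_than": 365}
                        }
                    },
                },
            }
        ]
    }
}

# The management policy name must be "default".
client.management_policies.create_or_update(
    "<resource-group>", "<storage-account>", "default", policy
)
```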
storage Object Replication Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-configure.md
Previously updated : 05/02/2022 Last updated : 05/05/2022
This article describes how to configure an object replication policy by using th
## Prerequisites
-Before you configure object replication, create the source and destination storage accounts if they do not already exist. The source and destination accounts can be either general-purpose v2 storage accounts or premium block blob accounts. For more information, see [Create an Azure Storage account](../common/storage-account-create.md).
+Before you configure object replication, create the source and destination storage accounts if they don't already exist. The source and destination accounts can be either general-purpose v2 storage accounts or premium block blob accounts. For more information, see [Create an Azure Storage account](../common/storage-account-create.md).
Object replication requires that blob versioning is enabled for both the source and destination account, and that blob change feed is enabled for the source account. To learn more about blob versioning, see [Blob versioning](versioning-overview.md). To learn more about change feed, see [Change feed support in Azure Blob Storage](storage-blob-change-feed.md). Keep in mind that enabling these features can result in additional costs.
To create a replication policy in the Azure portal, follow these steps:
:::image type="content" source="media/object-replication-configure/configure-replication-policy.png" alt-text="Screenshot showing replication rules in Azure portal":::
-1. If desired, specify one or more filters to copy only blobs that match a prefix pattern. For example, if you specify a prefix `b`, only blobs whose name begin with that letter are replicated. You can specify a virtual directory as part of the prefix. The prefix string does not support wildcard characters.
+1. If desired, specify one or more filters to copy only blobs that match a prefix pattern. For example, if you specify a prefix `b`, only blobs whose names begin with that letter are replicated. You can specify a virtual directory as part of the prefix. You can specify up to five prefix matches. The prefix string doesn't support wildcard characters.
The following image shows filters that restrict which blobs are copied as part of a replication rule.
$srcContainerName2 = "source-container2"
$destContainerName2 = "dest-container2" # Enable blob versioning and change feed on the source account.
-Update-AzStorageBlobServiceProperty -ResourceGroupName $rgname `
+Update-AzStorageBlobServiceProperty -ResourceGroupName $rgName `
-StorageAccountName $srcAccountName ` -EnableChangeFeed $true ` -IsVersioningEnabled $true # Enable blob versioning on the destination account.
-Update-AzStorageBlobServiceProperty -ResourceGroupName $rgname `
+Update-AzStorageBlobServiceProperty -ResourceGroupName $rgName `
-StorageAccountName $destAccountName ` -IsVersioningEnabled $true # List the service properties for both accounts.
-Get-AzStorageBlobServiceProperty -ResourceGroupName $rgname `
+Get-AzStorageBlobServiceProperty -ResourceGroupName $rgName `
-StorageAccountName $srcAccountName
-Get-AzStorageBlobServiceProperty -ResourceGroupName $rgname `
+Get-AzStorageBlobServiceProperty -ResourceGroupName $rgName `
-StorageAccountName $destAccountName # Create containers in the source and destination accounts.
-Get-AzStorageAccount -ResourceGroupName $rgname -StorageAccountName $srcAccountName |
+Get-AzStorageAccount -ResourceGroupName $rgName -StorageAccountName $srcAccountName |
New-AzStorageContainer $srcContainerName1
-Get-AzStorageAccount -ResourceGroupName $rgname -StorageAccountName $destAccountName |
+Get-AzStorageAccount -ResourceGroupName $rgName -StorageAccountName $destAccountName |
New-AzStorageContainer $destContainerName1
-Get-AzStorageAccount -ResourceGroupName $rgname -StorageAccountName $srcAccountName |
+Get-AzStorageAccount -ResourceGroupName $rgName -StorageAccountName $srcAccountName |
New-AzStorageContainer $srcContainerName2
-Get-AzStorageAccount -ResourceGroupName $rgname -StorageAccountName $destAccountName |
+Get-AzStorageAccount -ResourceGroupName $rgName -StorageAccountName $destAccountName |
New-AzStorageContainer $destContainerName2 # Define replication rules for each container.
$rule2 = New-AzStorageObjectReplicationPolicyRule -SourceContainer $srcContainer
-MinCreationTime 2021-09-01T00:00:00Z # Create the replication policy on the destination account.
-$destPolicy = Set-AzStorageObjectReplicationPolicy -ResourceGroupName $rgname `
+$destPolicy = Set-AzStorageObjectReplicationPolicy -ResourceGroupName $rgName `
-StorageAccountName $destAccountName ` -PolicyId default ` -SourceAccount $srcAccountName ` -Rule $rule1,$rule2 # Create the same policy on the source account.
-Set-AzStorageObjectReplicationPolicy -ResourceGroupName $rgname `
+Set-AzStorageObjectReplicationPolicy -ResourceGroupName $rgName `
-StorageAccountName $srcAccountName ` -InputObject $destPolicy ```
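For teams scripting in Python rather than PowerShell, here is a rough sketch of the same destination-then-source policy pair; the account and container names are placeholders, and the dictionary shapes are assumptions about the azure-mgmt-storage models:

```python
# Assumed sketch: create an object replication policy with azure-mgmt-storage.
# All names below are placeholders, not values from this article.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create the policy on the destination account first; the service assigns
# the policy ID and rule IDs.
dest_policy = client.object_replication_policies.create_or_update(
    "<resource-group>",
    "<destination-account>",
    "default",
    {
        "source_account": "<source-account>",
        "destination_account": "<destination-account>",
        "rules": [
            {
                "source_container": "source-container1",
                "destination_container": "dest-container1",
                "filters": {"prefix_match": ["b"]},
            }
        ],
    },
)

# Reuse the returned policy (with its service-assigned IDs) on the source.
client.object_replication_policies.create_or_update(
    "<resource-group>", "<source-account>", dest_policy.policy_id, dest_policy
)
```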
az storage account or-policy show \
## Configure object replication using a JSON file
-If you do not have permissions to the source storage account or if you want to use more than 10 container pairs, then you can configure object replication on the destination account and provide a JSON file that contains the policy definition to another user to create the same policy on the source account. For example, if the source account is in a different Azure AD tenant from the destination account, then you can use this approach to configure object replication.
+If you don't have permissions to the source storage account or if you want to use more than 10 container pairs, then you can configure object replication on the destination account and provide a JSON file that contains the policy definition to another user to create the same policy on the source account. For example, if the source account is in a different Azure AD tenant from the destination account, then you can use this approach to configure object replication.
> [!NOTE] > Cross-tenant object replication is permitted by default for a storage account. To prevent replication across tenants, you can set the **AllowCrossTenantReplication** property (preview) to disallow cross-tenant object replication for your storage accounts. For more information, see [Prevent object replication across Azure Active Directory tenants](object-replication-prevent-cross-tenant-policies.md).
You can then download a JSON file containing the policy definition that you can
The downloaded JSON file includes the policy ID that Azure Storage created for the policy on the destination account. You must use the same policy ID to configure object replication on the source account.
-Keep in mind that uploading a JSON file to create a replication policy for the destination account via the Azure portal does not automatically create the same policy in the source account. Another user must create the policy on the source account before Azure Storage begins replicating objects.
+Keep in mind that uploading a JSON file to create a replication policy for the destination account via the Azure portal doesn't automatically create the same policy in the source account. Another user must create the policy on the source account before Azure Storage begins replicating objects.
# [PowerShell](#tab/powershell)
To download a JSON file that contains the replication policy definition for the
$rgName = "<resource-group>" $destAccountName = "<destination-storage-account>"
-$destPolicy = Get-AzStorageObjectReplicationPolicy -ResourceGroupName $rgname `
+$destPolicy = Get-AzStorageObjectReplicationPolicy -ResourceGroupName $rgName `
-StorageAccountName $destAccountName $destPolicy | ConvertTo-Json -Depth 5 > c:\temp\json.txt ```
When running the example, be sure to set the `-ResourceGroupName` parameter to t
```powershell $object = Get-Content -Path C:\temp\json.txt | ConvertFrom-Json
-Set-AzStorageObjectReplicationPolicy -ResourceGroupName $rgname `
+Set-AzStorageObjectReplicationPolicy -ResourceGroupName $rgName `
-StorageAccountName $srcAccountName ` -PolicyId $object.PolicyId ` -SourceAccount $object.SourceAccount `
az storage account or-policy create \
## Check the replication status of a blob
-You can check the replication status for a blob in the source account using the Azure portal, PowerShell, or Azure CLI. Object replication properties are not populated until replication has either completed or failed.
+You can check the replication status for a blob in the source account using the Azure portal, PowerShell, or Azure CLI. Object replication properties aren't populated until replication has either completed or failed.
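Purely as an assumed sketch (not one of the article's tabs), the v12 Python client library surfaces the same status through blob properties; the account, container, and blob names below are placeholders:

```python
# Assumed sketch: read object replication status with azure-storage-blob.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

blob = BlobClient(
    account_url="https://<source-account>.blob.core.windows.net",
    container_name="source-container1",
    blob_name="<blob-name>",
    credential=DefaultAzureCredential(),
)

props = blob.get_blob_properties()
# Empty until replication has either completed or failed.
for policy in props.object_replication_source_properties or []:
    for rule in policy.rules:
        print(policy.policy_id, rule.rule_id, rule.status)
```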
# [Azure portal](#tab/portal)
To check the replication status for a blob in the source account in the Azure po
To check the replication status for a blob in the source account with PowerShell, get the value of the object replication **ReplicationStatus** property, as shown in the following example. Remember to replace values in angle brackets with your own values: ```powershell
-$ctxSrc = (Get-AzStorageAccount -ResourceGroupName $rgname `
+$ctxSrc = (Get-AzStorageAccount -ResourceGroupName $rgName `
-StorageAccountName $srcAccountName).Context $blobSrc = Get-AzStorageBlob -Container $srcContainerName1 ` -Context $ctxSrc `
To remove a replication policy in the Azure portal, follow these steps:
1. Navigate to the source storage account in the Azure portal. 1. Under **Settings**, select **Object replication**.
-1. Click the **More** button next to the policy name.
+1. Select the **More** button next to the policy name.
1. Select **Delete Rules**. # [PowerShell](#tab/powershell)
To remove a replication policy, delete the policy from both the source account a
```powershell # Remove the policy from the destination account.
-Remove-AzStorageObjectReplicationPolicy -ResourceGroupName $rgname `
+Remove-AzStorageObjectReplicationPolicy -ResourceGroupName $rgName `
-StorageAccountName $destAccountName ` -PolicyId $destPolicy.PolicyId # Remove the policy from the source account.
-Remove-AzStorageObjectReplicationPolicy -ResourceGroupName $rgname `
+Remove-AzStorageObjectReplicationPolicy -ResourceGroupName $rgName `
-StorageAccountName $srcAccountName ` -PolicyId $destPolicy.PolicyId ```
storage Soft Delete Blob Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-enable.md
Previously updated : 06/29/2021 Last updated : 05/05/2022
Blob soft delete protects an individual blob and its versions, snapshots, and me
Blob soft delete is part of a comprehensive data protection strategy for blob data. To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
-Blob soft delete is disabled by default for a new storage account. You can enable or disable soft delete for a storage account at any time by using the Azure portal, PowerShell, or Azure CLI.
+Blob soft delete is enabled by default for a new storage account. You can enable or disable soft delete for a storage account at any time by using the Azure portal, PowerShell, or Azure CLI.
## Enable blob soft delete
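As an illustrative sketch (the account URL and the seven-day retention period are assumptions, not values from this article), you can also enable soft delete programmatically with the v12 Python client library:

```python
# Assumed sketch: enable blob soft delete with azure-storage-blob.
# The account URL is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, RetentionPolicy

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)

# Turn on soft delete with a 7-day retention window.
service.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=True, days=7)
)
```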
storage Storage Quickstart Blobs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-portal.md
Previously updated : 04/04/2022 Last updated : 05/05/2022
storage Storage Client Side Encryption Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-client-side-encryption-python.md
ms.devlang: python Previously updated : 02/18/2021 Last updated : 05/05/2022
## Overview
-The [Azure Storage Client Library for Python](https://pypi.python.org/pypi/azure-storage) supports encrypting data within client applications before uploading to Azure Storage, and decrypting data while downloading to the client.
-
-> [!NOTE]
-> The Azure Storage Python library is in preview.
->
->
+The [Azure Blob Storage client library for Python](https://pypi.org/project/azure-storage-blob/) supports encrypting data within client applications before uploading to Azure Storage, and decrypting data while downloading to the client.
## Encryption and decryption via the envelope technique
Encryption via the envelope technique works in the following way:
Decryption via the envelope technique works in the following way:
-1. The client library assumes that the user is managing the key encryption key (KEK) locally. The user does not need to know the specific key that was used for encryption. Instead, a key resolver, which resolves different key identifiers to keys, can be set up and used.
+1. The client library assumes that the user is managing the key encryption key (KEK) locally. The user doesn't need to know the specific key that was used for encryption. Instead, a key resolver, which resolves different key identifiers to keys, can be set up and used.
2. The client library downloads the encrypted data along with any encryption material that is stored on the service.
-3. The wrapped content encryption key (CEK) is then unwrapped (decrypted) using the key encryption key (KEK). Here again, the client library does not have access to KEK. It simply invokes the custom provider's unwrapping algorithm.
+3. The wrapped content encryption key (CEK) is then unwrapped (decrypted) using the key encryption key (KEK). Here again, the client library doesn't have access to KEK. It simply invokes the custom provider's unwrapping algorithm.
4. The content encryption key (CEK) is then used to decrypt the encrypted user data. ## Encryption Mechanism
-The storage client library uses [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in order to encrypt user data. Specifically, [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES. Each service works somewhat differently, so we will discuss each of them here.
+The storage client library uses [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) to encrypt user data, specifically [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES. Each service works differently, so we'll discuss each of them here.
### Blobs
Downloading an encrypted blob involves retrieving the content of the entire blob
Downloading an arbitrary range (**get*** methods with range parameters passed in) in the encrypted blob involves adjusting the range provided by users in order to get a small amount of additional data that can be used to successfully decrypt the requested range.
-Block blobs and page blobs only can be encrypted/decrypted using this scheme. There is currently no support for encrypting append blobs.
+Only block blobs and page blobs can be encrypted/decrypted using this scheme. There's currently no support for encrypting append blobs.
### Queues
During encryption, the client library generates a random IV of 16 bytes along wi
<MessageText>{"EncryptedMessageContents":"6kOu8Rq1C3+M1QO4alKLmWthWXSmHV3mEfxBAgP9QGTU++MKn2uPq3t2UjF1DO6w","EncryptionData":{…}}</MessageText> ```
-During decryption, the wrapped key is extracted from the queue message and unwrapped. The IV is also extracted from the queue message and used along with the unwrapped key to decrypt the queue message data. Note that the encryption metadata is small (under 500 bytes), so while it does count toward the 64KB limit for a queue message, the impact should be manageable.
+During decryption, the wrapped key is extracted from the queue message and unwrapped. The IV is also extracted from the queue message and used along with the unwrapped key to decrypt the queue message data. The encryption metadata is small (under 500 bytes), so while it does count toward the 64-KB limit for a queue message, the impact should be manageable.
### Tables
Table data encryption works as follows:
1. Users specify the properties to be encrypted. 2. The client library generates a random Initialization Vector (IV) of 16 bytes along with a random content encryption key (CEK) of 32 bytes for every entity, and performs envelope encryption on the individual properties to be encrypted by deriving a new IV per property. The encrypted property is stored as binary data. 3. The wrapped CEK and some additional encryption metadata are then stored as two additional reserved properties. The first reserved property (\_ClientEncryptionMetadata1) is a string property that holds the information about IV, version, and wrapped key. The second reserved property (\_ClientEncryptionMetadata2) is a binary property that holds the information about the properties that are encrypted. The information in this second property (\_ClientEncryptionMetadata2) is itself encrypted.
-4. Due to these additional reserved properties required for encryption, users may now have only 250 custom properties instead of 252. The total size of the entity must be less than 1MB.
+4. Due to these additional reserved properties required for encryption, users may now have only 250 custom properties instead of 252. The total size of the entity must be less than 1 MB.
- Note that only string properties can be encrypted. If other types of properties are to be encrypted, they must be converted to strings. The encrypted strings are stored on the service as binary properties, and they are converted back to strings (raw strings, not EntityProperties with type EdmType.STRING) after decryption.
+ Only string properties can be encrypted. If other types of properties are to be encrypted, they must be converted to strings. The encrypted strings are stored on the service as binary properties, and they're converted back to strings (raw strings, not EntityProperties with type EdmType.STRING) after decryption.
- For tables, in addition to the encryption policy, users must specify the properties to be encrypted. This can be done by either storing these properties in TableEntity objects with the type set to EdmType.STRING and encrypt set to true or setting the encryption_resolver_function on the tableservice object. An encryption resolver is a function that takes a partition key, row key, and property name and returns a boolean that indicates whether that property should be encrypted. During encryption, the client library will use this information to decide whether a property should be encrypted while writing to the wire. The delegate also provides for the possibility of logic around how properties are encrypted. (For example, if X, then encrypt property A; otherwise encrypt properties A and B.) Note that it is not necessary to provide this information while reading or querying entities.
+ For tables, in addition to the encryption policy, users must specify the properties to be encrypted. This can be done by either storing these properties in TableEntity objects with the type set to EdmType.STRING and encrypt set to true or setting the encryption_resolver_function on the table service object. An encryption resolver is a function that takes a partition key, row key, and property name and returns a boolean that indicates whether that property should be encrypted. During encryption, the client library will use this information to decide whether a property should be encrypted while writing to the wire. The delegate also provides for the possibility of logic around how properties are encrypted. (For example, if X, then encrypt property A; otherwise encrypt properties A and B.) It isn't necessary to provide this information while reading or querying entities.
### Batch Operations One encryption policy applies to all rows in the batch. The client library will internally generate a new random IV and random CEK per row in the batch. Users can also choose to encrypt different properties for every operation in the batch by defining this behavior in the encryption resolver.
-If a batch is created as a context manager through the tableservice batch() method, the tableservice's encryption policy will automatically be applied to the batch. If a batch is created explicitly by calling the constructor, the encryption policy must be passed as a parameter and left unmodified for the lifetime of the batch.
-Note that entities are encrypted as they are inserted into the batch using the batch's encryption policy (entities are NOT encrypted at the time of committing the batch using the tableservice's encryption policy).
+If a batch is created as a context manager through the table service batch() method, the table service's encryption policy will automatically be applied to the batch. If a batch is created explicitly by calling the constructor, the encryption policy must be passed as a parameter and left unmodified for the lifetime of the batch.
+Entities are encrypted as they're inserted into the batch using the batch's encryption policy (entities are NOT encrypted at the time of committing the batch using the table service's encryption policy).
### Queries
The KEK must implement the following methods to successfully encrypt data:
- wrap_key(cek): Wraps the specified CEK (bytes) using an algorithm of the user's choice. Returns the wrapped key. - get_key_wrap_algorithm(): Returns the algorithm used to wrap keys.-- get_kid(): Returns the string key id for this KEK.
+- get_kid(): Returns the string key ID for this KEK.
The KEK must implement the following methods to successfully decrypt data: - unwrap_key(cek, algorithm): Returns the unwrapped form of the specified CEK using the string-specified algorithm.-- get_kid(): Returns a string key id for this KEK.
+- get_kid(): Returns a string key ID for this KEK.
-The key resolver must at least implement a method that, given a key id, returns the corresponding KEK implementing the interface above. Only this method is to be assigned to the key_resolver_function property on the service object.
+The key resolver must at least implement a method that, given a key ID, returns the corresponding KEK implementing the interface above. Only this method is to be assigned to the key_resolver_function property on the service object.
- For encryption, the key is used always and the absence of a key will result in an error. - For decryption:
- - The key resolver is invoked if specified to get the key. If the resolver is specified but does not have a mapping for the key identifier, an error is thrown.
- - If resolver is not specified but a key is specified, the key is used if its identifier matches the required key identifier. If the identifier does not match, an error is thrown.
+ - The key resolver is invoked if specified to get the key. If the resolver is specified but doesn't have a mapping for the key identifier, an error is thrown.
+ - If resolver isn't specified but a key is specified, the key is used if its identifier matches the required key identifier. If the identifier doesn't match, an error is thrown.
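A minimal sketch of that interface follows, assuming a locally held 256-bit KEK, AES key wrap from the `cryptography` package, and an assumed "A256KW" algorithm label; the sample files' KeyWrapper and KeyResolver are the canonical implementations:

```python
# Assumed sketch of the KEK and key resolver interface described above.
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

class KeyWrapper:
    """KEK held locally; wraps and unwraps CEKs with AES key wrap."""
    def __init__(self, kid):
        self.kid = kid
        self.kek = os.urandom(32)  # in practice, load from secure storage

    def wrap_key(self, key):
        return aes_key_wrap(self.kek, key, default_backend())

    def unwrap_key(self, key, algorithm):
        return aes_key_unwrap(self.kek, key, default_backend())

    def get_key_wrap_algorithm(self):
        return "A256KW"  # assumed label for AES-256 key wrap

    def get_kid(self):
        return self.kid

class KeyResolver:
    """Resolves a key ID back to the KEK that owns it."""
    def __init__(self):
        self.keys = {}

    def put_key(self, kek):
        self.keys[kek.get_kid()] = kek

    def resolve_key(self, kid):
        return self.keys[kid]
```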
The encryption samples in azure.storage.samples demonstrate a more detailed end-to-end scenario for blobs, queues and tables. Sample implementations of the KEK and key resolver are provided in the sample files as KeyWrapper and KeyResolver respectively. ### RequireEncryption mode
-Users can optionally enable a mode of operation where all uploads and downloads must be encrypted. In this mode, attempts to upload data without an encryption policy or download data that is not encrypted on the service will fail on the client. The **require_encryption** flag on the service object controls this behavior.
+Users can optionally enable a mode of operation where all uploads and downloads must be encrypted. In this mode, attempts to upload data without an encryption policy or download data that isn't encrypted on the service will fail on the client. The **require_encryption** flag on the service object controls this behavior.
### Blob service encryption
Set the encryption policy fields on the blockblobservice object. Everything else
# [Python v12 SDK](#tab/python)
-We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
+We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
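In the meantime, here is a hedged sketch, assuming the v12 azure-storage-blob package and the hypothetical KeyWrapper from the earlier sketch; the account, container, and blob names are placeholders:

```python
# Assumed sketch: client-side encryption with azure-storage-blob v12.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

kek = KeyWrapper("mykey")  # hypothetical KEK from the sketch above
service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
    require_encryption=True,   # fail any unencrypted upload/download
    key_encryption_key=kek,
)

blob = service.get_blob_client("mycontainer", "hello.txt")
blob.upload_blob(b"hello world")          # encrypted client-side before upload
data = blob.download_blob().readall()     # decrypted after download
```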
# [Python v2.1](#tab/python2)
Set the encryption policy fields on the queueservice object. Everything else wil
# [Python v12 SDK](#tab/python)
-We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
+We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
# [Python v2.1](#tab/python2)
retrieved_message_list = my_queue_service.get_messages(queue_name)
### Table service encryption
-In addition to creating an encryption policy and setting it on request options, you must either specify an **encryption_resolver_function** on the **tableservice**, or set the encrypt attribute on the EntityProperty.
+In addition to creating an encryption policy and setting it on request options, you must either specify an **encryption_resolver_function** on the **table service**, or set the encrypt attribute on the EntityProperty.
### Using the resolver # [Python v12 SDK](#tab/python)
-We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
+We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
# [Python v2.1](#tab/python2)
As mentioned above, a property may be marked for encryption by storing it in an
# [Python v12 SDK](#tab/python)
-We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
+We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
# [Python v2.1](#tab/python2)
encrypted_property_1 = EntityProperty(EdmType.STRING, value, encrypt=True)
## Encryption and performance
-Note that encrypting your storage data results in additional performance overhead. The content key and IV must be generated, the content itself must be encrypted, and additional metadata must be formatted and uploaded. This overhead will vary depending on the quantity of data being encrypted. We recommend that customers always test their applications for performance during development.
+Encrypting your storage data results in additional performance overhead. The content key and IV must be generated, the content itself must be encrypted, and additional metadata must be formatted and uploaded. This overhead will vary depending on the quantity of data being encrypted. We recommend that customers always test their applications for performance during development.
## Next steps
storage Storage Files Identity Ad Ds Assign Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md
Previously updated : 03/16/2022 Last updated : 05/04/2022 ms.devlang: azurecli
The following table lists the share-level permissions and how they align with th
If you intend to use a specific Azure AD user or group to access Azure file share resources, that identity must be a **hybrid identity that exists in both on-premises AD DS and Azure AD**. For example, say you have a user user1@onprem.contoso.com in your on-premises AD DS that you have synced to Azure AD as user1@contoso.com using Azure AD Connect sync. For this user to access Azure Files, you must assign the share-level permissions to user1@contoso.com. The same concept applies to groups or service principals.
+> [!IMPORTANT]
+> **Assign permissions by explicitly declaring actions and data actions as opposed to using a wildcard (\*) character.** If a custom role definition for a data action contains a wildcard character, all identities assigned to that role are granted access for all possible data actions. This means that all such identities will also be granted any new data action added to the platform. The additional access and permissions granted through new actions or data actions may be unwanted behavior for customers using wildcards. To mitigate any unintended future impact, we highly recommend declaring actions and data actions explicitly rather than using the wildcard.
+ In order for share-level permissions to work, you must: - Sync the users **and** the groups from your local AD to Azure AD using Azure AD Connect sync
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Previously updated : 01/14/2022 Last updated : 05/06/2022
-# Part one: enable AD DS authentication for your Azure file shares
+# Part one: enable AD DS authentication for your Azure file shares
-Before you enable Active Directory Domain Services (AD DS) authentication, make sure you've read the [overview article](storage-files-identity-auth-active-directory-enable.md) to understand the supported scenarios and requirements.
+This article describes the process for enabling Active Directory Domain Services (AD DS) authentication on your storage account. After enabling the feature, you must configure your storage account and your AD DS to use AD DS credentials for authenticating to your Azure file share.
-This article describes the process required for enabling Active Directory Domain Services (AD DS) authentication on your storage account. After enabling the feature, you must configure your storage account and your AD DS, to use AD DS credentials for authenticating to your Azure file share. To enable AD DS authentication over SMB for Azure file shares, you need to register your storage account with AD DS and then set the required domain properties on the storage account.
+> [!IMPORTANT]
+> Before you enable AD DS authentication, make sure you understand the supported scenarios and requirements in the [overview article](storage-files-identity-auth-active-directory-enable.md) and complete the necessary [prerequisites](storage-files-identity-auth-active-directory-enable.md#prerequisites).
-To register your storage account with AD DS, create an account representing it in your AD DS. You can think of this process as if it were like creating an account representing an on-premises Windows file server in your AD DS. When the feature is enabled on the storage account, it applies to all new and existing file shares in the account.
+To enable AD DS authentication over SMB for Azure file shares, you need to register your storage account with AD DS and then set the required domain properties on the storage account. To register your storage account with AD DS, create an account representing it in your AD DS. You can think of this process as being similar to creating an account that represents an on-premises Windows file server in your AD DS. When the feature is enabled on the storage account, it applies to all new and existing file shares in the account.
## Applies to | File share type | SMB | NFS |
The `Join-AzStorageAccount` cmdlet performs the equivalent of an offline domain
The AD DS account created by the cmdlet represents the storage account. If the AD DS account is created under an organizational unit (OU) that enforces password expiration, you must update the password before the maximum password age. Failing to update the account password before that date results in authentication failures when accessing Azure file shares. To learn how to update the password, see [Update AD DS account password](storage-files-identity-ad-ds-update-password.md). Replace the placeholder values with your own in the parameters below before executing it in PowerShell.+ > [!IMPORTANT] > The domain join cmdlet will create an AD account to represent the storage account (file share) in AD. You can choose to register as a computer account or service logon account, see [FAQ](./storage-files-faq.md#security-authentication-and-access-control) for details. For computer accounts, there is a default password expiration age set in AD at 30 days. Similarly, the service logon account may have a default password expiration age set on the AD domain or Organizational Unit (OU). > For both account types, we recommend you check the password expiration age configured in your AD environment and plan to [update the password of your storage account identity](storage-files-identity-ad-ds-update-password.md) of the AD account before the maximum password age. You can consider [creating a new AD Organizational Unit (OU) in AD](/powershell/module/activedirectory/new-adorganizationalunit) and disabling password expiration policy on [computer accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852252(v=ws.11)) or service logon accounts accordingly.
Import-Module -Name AzFilesHybrid
# for more information. Connect-AzAccount
-# Define parameters, $StorageAccountName currently has a maximum limit of 15 characters
+# Define parameters
+# $StorageAccountName is the name of an existing storage account that you want to join to AD
+# $SamAccountName is the SamAccountName attribute of the AD object; see https://docs.microsoft.com/en-us/windows/win32/adschema/a-samaccountname
+# for more information.
+# If you want to use AES256 encryption (recommended), except for the trailing '$', the storage account name must be the same as the computer object's SamAccountName.
$SubscriptionId = "<your-subscription-id-here>" $ResourceGroupName = "<resource-group-name-here>" $StorageAccountName = "<storage-account-name-here>"
+$SamAccountName = "<sam-account-name-here>"
$DomainAccountType = "<ComputerAccount|ServiceLogonAccount>" # Default is set as ComputerAccount # If you don't provide the OU name as an input parameter, the AD identity that represents the storage account is created under the root directory. $OuDistinguishedName = "<ou-distinguishedname-here>"
-# Specify the encryption algorithm used for Kerberos authentication. AES256 is recommended. Default is configured as "'RC4','AES256'" which supports both 'RC4' and 'AES256' encryption.
+# Specify the encryption algorithm used for Kerberos authentication. Using AES256 is recommended.
$EncryptionType = "<AES256|RC4|AES256,RC4>" # Select the target subscription for the current session
Select-AzSubscription -SubscriptionId $SubscriptionId
Join-AzStorageAccount ` -ResourceGroupName $ResourceGroupName ` -StorageAccountName $StorageAccountName `
+ -SamAccountName $SamAccountName `
-DomainAccountType $DomainAccountType ` -OrganizationalUnitDistinguishedName $OuDistinguishedName ` -EncryptionType $EncryptionType
Set-AzStorageAccount `
To enable AES-256 encryption, follow the steps in this section. If you plan to use RC4, skip this section. The domain object that represents your storage account must meet the following requirements:

-- The storage account name cannot exceed 15 characters.
- The domain object must be created as a computer object in the on-premises AD domain.
- Except for the trailing '$', the storage account name must be the same as the computer object's SamAccountName.
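If your AzFilesHybrid version provides the `Update-AzStorageAccountAuthForAES256` helper, a hedged sketch of switching an already-joined account to AES-256 (the parameter values are placeholders):

```powershell
# Refresh the AD object backing the storage account and enable AES-256 Kerberos encryption.
Update-AzStorageAccountAuthForAES256 `
    -ResourceGroupName "<resource-group-name>" `
    -StorageAccountName "<storage-account-name>"
```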
DomainGuid:<yourGUIDHere>
DomainSid:<yourSIDHere>
AzureStorageID:<yourStorageSIDHere>
```

## Next steps

You've now successfully enabled the feature on your storage account. To use the feature, you must assign share-level permissions. Continue to the next section.
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
If you are new to Azure file shares, we recommend reading our [planning guide](s
- AD DS Identities used for Azure Files on-premises AD DS authentication must be synced to Azure AD or use a default share-level permission. Password hash synchronization is optional.
- Supports Azure file shares managed by Azure File Sync.
-- Supports Kerberos authentication with AD with [AES 256 encryption](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption) (recommended) and RC4-HMAC. AES 256 encryption support is currently limited to storage accounts with names <= 15 characters in length. AES 128 Kerberos encryption is not yet supported.
+- Supports Kerberos authentication with AD with [AES 256 encryption](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption) (recommended) and RC4-HMAC. AES 128 Kerberos encryption is not yet supported.
- Supports single sign-on experience.
- Only supported on clients running on OS versions newer than Windows 7 or Windows Server 2008 R2.
- Only supported against the AD forest that the storage account is registered to. You can only access Azure file shares with the AD DS credentials from a single forest by default. If you need to access your Azure file share from a different forest, make sure that you have the proper forest trust configured; see the [FAQ](storage-files-faq.md#ad-ds--azure-ad-ds-authentication) for details.
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/partner-overview.md
This article highlights Microsoft partner companies integrated with Azure Storag
|![Komprise company logo](./media/komprise-logo.png) |**Komprise**<br>Komprise enables visibility across silos to manage file and object data and save costs. Komprise Intelligent Data Management software lets you consistently analyze, move, and manage data across clouds.<br><br>Komprise helps you to analyze data growth across any network attached storage (NAS) and object storage to identify significant cost savings. You can also archive cold data to Azure, and run data migrations, transparent data archiving, and data replications to Azure Files and Blob storage. Patented Komprise Transparent Move Technology enables you to archive files without changing user access. Global search and tagging enables virtual data lakes for AI, big data, and machine learning applications. |[Partner page](https://www.komprise.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) |
|![Peer company logo](./media/peer-logo.png) |**Peer Software**<br>Peer Software provides real-time file management solutions for hybrid and multi-cloud environments. Key use cases include high availability for user and application data across branch offices, Azure regions and availability zones, file sharing with version integrity, and migration to file or object storage with minimal cutover downtime. |[Partner page](https://go.peersoftware.com/azure_file_management_solutions)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/peer-software-inc.peergfs?tab=overview) |
|![Privacera company logo](./media/privacera-logo.png) |**Privacera**<br>Privacera provides a unified system for data governance and security across multiple cloud services and analytical platforms. Privacera enables IT and data platform teams to democratize data for analytics, while ensuring compliance with privacy regulations. |[Partner page](https://privacera.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/globaltenetincdbaprivacera1585932150924.privacera_platform)
+|![Tiger Technology company logo.](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure data and storage management software solutions. It enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through an on-premises-first hybrid model. The company specializes in migrating mission-critical workflows and application servers, and in NAS/tape-to-cloud migrations. Tiger Bridge is a non-proprietary, software-only data management solution. It blends a file system and multi-tier cloud storage into a single space and enables hybrid workflows. Tiger Bridge addresses several data management challenges: file server and application server extension, migration, disaster recovery, backup and archive, and multi-site sync. It also offers continuous data protection and ransomware protection capabilities. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)|
+ Are you a storage partner but your solution is not listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu).

## Next steps
stream-analytics Service Bus Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/service-bus-managed-identity.md
First, you create a managed identity for your Azure Stream Analytics job.
## Grant the Stream Analytics job permissions to access Azure Service Bus
-For the Stream Analytics job to access your Service Bus using managed identity, the service principal you created must have special permissions to your Azure Service Bus resource. In this step, you can assign a role to your stream analytics job's system-assigned managed identity. Azure provides the below Azure built-in roles for authorizing access to a Service Bus namespace:
+For the Stream Analytics job to access your Service Bus using managed identity, the service principal you created must have special permissions to your Azure Service Bus resource. In this step, you assign a role to your Stream Analytics job's system-assigned managed identity. Azure provides the following built-in roles for authorizing access to a Service Bus namespace; for Azure Stream Analytics, you need one of these roles:
- [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner): Enables data access to Service Bus namespace and its entities (queues, topics, subscriptions, and filters)
- [Azure Service Bus Data Sender](../role-based-access-control/built-in-roles.md#azure-service-bus-data-sender): Use this role to give send access to Service Bus namespace and its entities.
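For example, a minimal PowerShell sketch of granting one of these roles at the namespace scope; the subscription, resource group, namespace, and principal ID values are placeholders (the principal ID appears on the job's **Managed Identity** page):

```powershell
# Assign the Azure Service Bus Data Owner role to the job's system-assigned managed identity.
$principalId = "<stream-analytics-job-principal-id>"
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
         "/providers/Microsoft.ServiceBus/namespaces/<namespace-name>"
New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Azure Service Bus Data Owner" `
    -Scope $scope
```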
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
If you would like to query data2.csv in this example, the following permissions
### Content of directory on the path cannot be listed
-This error indicates that the user who is querying Azure Data Lake cannot list the files on a storage. There are several scenarios where this error might happen:
+This error indicates that the user who is querying Azure Data Lake cannot list the files on storage. There are several scenarios where this error might happen:
- Azure AD user who is using [Azure AD pass-through authentication](develop-storage-files-storage-access-control.md?tabs=user-identity) does not have permissions to list the files on Azure Data Lake storage.
- Azure AD or SQL user is reading data using [SAS key](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) or [workspace Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity), and that key/identity does not have permission to list the files on the storage.
- User who is accessing DataVerse data does not have permission to query data in DataVerse. This might happen if you are using SQL users.
FROM
names.csv
```csv
Id, first name,
-1,Adam
-2,Bob
-3,Charles
-4,David
-five,Eva
+1, Adam
+2, Bob
+3, Charles
+4, David
+five, Eva
```

You might observe that the data has unexpected values for ID in the fifth row.
-In such circumstances, it is important to align with the business owner of the data to agree on how corrupt data like this can be avoided. If prevention is not possible at application level, reasonable sized VARCHAR might be the only option here.
+In such circumstances, it is important to align with the business owner of the data to agree on how corrupt data like this can be avoided. If prevention isn't possible at the application level, a reasonably sized VARCHAR might be the only option here.
> [!Tip]
> Try to make VARCHAR() as short as possible. Avoid VARCHAR(MAX) if possible as this can impair performance.
If your query does not fail but you find that your resultset is not as expected,
To resolve this problem, take another look at the data and change those settings. Debugging this query is easy, as shown in the following example.

**Example**
-If you would like to query the file 'names.csv' with the query in 'Query 1', Azure Synapse SQL serverless will return with result that look odd.
+If you would like to query the file 'names.csv' with the query in 'Query 1', Azure Synapse SQL serverless will return a result that looks odd.
names.csv
```csv
Id,first name,
-1,Adam
-2,Bob
-3,Charles
-4,David
-5,Eva
+1, Adam
+2, Bob
+3, Charles
+4, David
+5, Eva
```

```sql
If you are getting the error '*CREATE DATABASE failed. User database limit has b
If your query fails with the error message '*Please create a master key in the database or open the master key in the session before performing this operation*', it means that your user database has no access to a master key at the moment.
-Most likely, you just created a new user database and did not create a master key yet.
+Most likely, you just created a new user database and didn't create a master key yet.
To resolve this problem, create a master key with the following query:
There are some limitations and known issues that you might see in Delta Lake sup
- Do not specify wildcards to describe the partition schema. Delta Lake query will automatically identify the Delta Lake partitions.
- Delta Lake tables created in the Apache Spark pools are not automatically available in serverless SQL pool. To query such Delta Lake tables using T-SQL language, run the [CREATE EXTERNAL TABLE](./create-use-external-tables.md#delta-lake-external-table) statement and specify Delta as format.
- External tables do not support partitioning. Use [partitioned views](create-use-views.md#delta-lake-partitioned-views) on Delta Lake folder to use the partition elimination. See known issues and workarounds later in the article.
-- Serverless SQL pools do not support time travel queries. You can vote for this feature on [Azure feedback site](https://feedback.azure.com/d365community/ide?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
+- Serverless SQL pools do not support time travel queries. Use Apache Spark pools in Azure Synapse Analytics to [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
- Serverless SQL pools do not support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Azure Synapse Analytics [to update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data).
- - You cannot [store query results to storage in Delta Lake format](create-external-table-as-select.md) using the Create external table as select (CETAS) command. The CETAS command supports only Parquet and CSV as an output formats.
+ - You cannot [store query results to storage in Delta Lake format](create-external-table-as-select.md) using the Create external table as select (CETAS) command. The CETAS command supports only Parquet and CSV as the output formats.
- Serverless SQL pools in Azure Synapse Analytics do not support the datasets with the [BLOOM filter](/azure/databricks/delta/optimizations/bloom-filters). The serverless SQL pool will ignore the BLOOM filters.
- Delta Lake support is not available in dedicated SQL pools. Make sure that you are using serverless pools to query Delta Lake files.
See the best practices for [collocating the resources](best-practices-serverless
If you are executing the same query and observing variations in the query durations, there might be several reasons that can cause this behavior:
- Check whether this is the first execution of the query. The first execution of a query collects the statistics required to create a plan. The statistics are collected by scanning the underlying files and might increase the query duration. In Synapse Studio, you will see the "global statistics creation" queries in the SQL request list that are executed before your query.
-- Statistics might expire after some time, so periodically you might observe an impact on performance because the serverless pool must scan and re-built the statistics. You might notice additional "global statistics creation" queries in the SQL request list, that are executed before your query.
+- Statistics might expire after some time, so periodically you might observe an impact on performance because the serverless pool must scan and rebuild the statistics. You might notice additional "global statistics creation" queries in the SQL request list that are executed before your query.
- Check whether some other workload was running on the same endpoint when you executed the query with the longer duration. The serverless SQL endpoint will equally allocate the resources to all queries that are executed in parallel, and the query might be delayed.

## Connections
Serverless SQL pool enables you to connect using TDS protocol and use T-SQL lang
### SQL pool is warming up
-Following a longer period of inactivity Serverless SQL pool will be deactivated. The activation will happen automatically on the first next activity, such as the first connection attempt. Activation process might take a bit longer than a single connection attempt interval, thus the error message is displayed. Re-trying the connection attempt should be enough.
+Following a longer period of inactivity, serverless SQL pool will be deactivated. Activation happens automatically on the next activity, such as the first connection attempt. The activation process might take a bit longer than a single connection attempt interval, so the error message is displayed. Retrying the connection attempt should be enough.
As a best practice, for the clients that support it, use the ConnectRetryCount and ConnectRetryInterval connection string keywords to control the reconnect behavior. If the error message persists, file a support ticket through the Azure portal.
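For example, a minimal sketch of a connection string that sets these keywords; the server, login, and retry values are illustrative placeholders:

```powershell
# Open a connection with automatic reconnect behavior configured.
$connectionString = "Server=tcp:<workspace-name>-ondemand.database.windows.net,1433;" +
    "Database=master;User ID=<sql-login>;Password=<password>;" +
    "ConnectRetryCount=3;ConnectRetryInterval=10;"
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$connection.Open()
$connection.Close()
```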
See the [Synapse Studio section](#synapse-studio).
### Cannot connect to Synapse pool from a tool

Some tools might not have an explicit option that enables you to connect to the Synapse serverless SQL pool.
-Use an option that you would use to connect to SQL Server or Azure SQL database. The connection dialog do not need to be branded as "Synapse" because the serverless SQL pool uses the same protocol as SQL Server or Azure SQL database.
+Use an option that you would use to connect to SQL Server or Azure SQL database. The connection dialog doesn't need to be branded as "Synapse" because the serverless SQL pool uses the same protocol as SQL Server or Azure SQL database.
Even if a tool enables you to enter only a logical server name and predefines the `database.windows.net` domain, put the Synapse workspace name followed by the `-ondemand` suffix and the `database.windows.net` domain. For example, for a workspace named `contoso`, enter `contoso-ondemand.database.windows.net`.
Make sure that a user has permissions to access databases, [permissions to execu
### Cannot access Cosmos DB account
-You must use read-only Cosmos DB key to access your analytical storage, so make sure that it did not expire or that it is not re-generated.
+You must use a read-only Cosmos DB key to access your analytical storage, so make sure that it hasn't expired and hasn't been regenerated.
If you are getting the [Resolving Cosmos DB path has failed](#resolving-cosmos-db-path-has-failed) error, make sure that you configured the firewall.
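If you need to check or retrieve the current read-only key, a minimal Azure PowerShell sketch; the account and resource group names are placeholders, and the Az.CosmosDB module is required:

```powershell
# List the read-only keys for the Cosmos DB account.
Get-AzCosmosDBAccountKey -ResourceGroupName "<resource-group>" `
    -Name "<cosmos-account-name>" -Type "ReadOnlyKeys"
```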
If a user cannot access a lake house or Spark database, it might not have permis
### SQL user cannot access Dataverse tables
-Dataverse tables are accessing storage using the callers Azure AD identity. SQL user with high permissions might try to select data from a table, but the table would not be able to access Dataverse data. This scenario is not supported.
+Dataverse tables access storage using the caller's Azure AD identity. A SQL user with high permissions might try to select data from a table, but the table wouldn't be able to access Dataverse data. This scenario isn't supported.
### Azure AD service principal login failures when SPI is creating a role assignment

If you want to create a role assignment for a Service Principal Identifier/Azure AD app using another SPI, or have already created one and it fails to log in, you're probably receiving the following error:
There are some general system constraints that might affect your workload:
| Max identifier length (in characters) | 128 (see [limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects)) |
| Max query duration | 30 min |
| Max size of the result set | up to 200 GB (shared between concurrent queries) |
-| Max concurrency | Not limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1000 active sessions that are executing lightweight queries, but the numbers will drop if the queries are more complex or scan a larger amount of data. |
+| Max concurrency | Not limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1000 active sessions that are executing lightweight queries. However, the numbers will drop if the queries are more complex or scan a larger amount of data. |
### Cannot create a database in serverless SQL pool
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-azure-ad.md
The service principal's password will expire every six months. To update the pas
$azureAdTenantId = $azureAdTenantDetail.ObjectId
$azureAdPrimaryDomain = ($azureAdTenantDetail.VerifiedDomains | Where-Object {$_._Default -eq $true}).Name
$application = Get-AzureADApplication -Filter "DisplayName eq '$($storageAccountName)'" -ErrorAction Stop;
- $servicePrincipal = Get-AzureADServicePrincipal | Where-Object {$_.AppId -eq $($application.AppId)}
+ $servicePrincipal = Get-AzureADServicePrincipal -Filter "AppId eq '$($application.AppId)'"
+ if ($servicePrincipal -eq $null) {
+ Write-Host "Could not find service principal corresponding to application with app id $($application.AppId)"
+ Write-Error -Message "Make sure that both service principal and application exist and are correctly configured" -ErrorAction Stop
+ }
```

5. Set the password for the storage account's service principal by running the following cmdlets.
virtual-machines Features Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-linux.md
When an update is available and automatic updates are enabled, the update is ins
- Data disks
- Extensions
+- Extension Tags
- Boot diagnostics container
- Guest OS secrets
- VM size
virtual-machines Features Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-windows.md
When an update is available and automatic updates are enabled, the update is ins
- Data disks
- Extensions
+- Extension Tags
- Boot diagnostics container
- Guest OS secrets
- VM size
virtual-machines Create Upload Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-ubuntu.md
This article assumes that you have already installed an Ubuntu Linux operating s
14. Azure only accepts fixed-size VHDs. If the VM's OS disk is not a fixed-size VHD, use the `Convert-VHD` PowerShell cmdlet and specify the `-VHDType Fixed` option. Please have a look at the docs for `Convert-VHD` here: [Convert-VHD](/powershell/module/hyper-v/convert-vhd).
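    For example, a minimal sketch of the conversion; the paths are illustrative, and the cmdlet must run in an elevated Hyper-V PowerShell session:

    ```powershell
    # Convert a dynamically expanding VHD to the fixed-size format Azure requires.
    Convert-VHD -Path "C:\vhds\ubuntu.vhd" -DestinationPath "C:\vhds\ubuntu-fixed.vhd" -VHDType Fixed
    ```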
+15. To bring a Generation 2 VM on Azure, follow these steps:
++
+ 1. Change directory to the boot EFI directory:
+
+ ```console
+ # cd /boot/efi/EFI
+ ```
+
+ 1. Copy the ubuntu directory to a new directory named boot:
+
+ ```console
+ # sudo cp -r ubuntu/ boot
+ ```
+
+ 1. Change directory to the newly created boot directory:
+
+ ```console
+ # cd boot
+ ```
+
+ 1. Rename the shimx64.efi file:
+
+ ```console
+ # sudo mv shimx64.efi bootx64.efi
+ ```
+
+ 1. Rename the grub.cfg file to bootx64.cfg:
+
+ ```console
+ # sudo mv grub.cfg bootx64.cfg
+ ```
## Next steps

You're now ready to use your Ubuntu Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
virtual-machines Oracle Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/oracle-create-upload-vhd.md
You must complete specific configuration steps in the operating system for the v
9. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open "/boot/grub/menu.lst" in a text editor and ensure that the kernel includes the following parameters:

```config-grub
- console=ttyS0 earlyprintk=ttyS0 rootdelay=300
+ console=ttyS0 earlyprintk=ttyS0
```

This will ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
Preparing an Oracle Linux 7 virtual machine for Azure is very similar to Oracle
9. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open "/etc/default/grub" in a text editor and edit the `GRUB_CMDLINE_LINUX` parameter, for example:

```config-grub
- GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
+ GRUB_CMDLINE_LINUX="console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
```

This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues. It also turns off the naming conventions for NICs in Oracle Linux 7 with the Unbreakable Enterprise Kernel. In addition to the above, it is recommended to *remove* the following parameters:
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
This section assumes that you have already obtained an ISO file from the Red Hat
1. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this modification, open `/boot/grub/menu.lst` in a text editor, and ensure that the default kernel includes the following parameters:

```config-grub
- console=ttyS0 earlyprintk=ttyS0 rootdelay=300
+ console=ttyS0 earlyprintk=ttyS0
```

This will also ensure that all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
This section assumes that you have already obtained an ISO file from the Red Hat
```config-grub
- GRUB_CMDLINE_LINUX="rootdelay=300 console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 earlyprintk=ttyS0 net.ifnames=0"
+ GRUB_CMDLINE_LINUX="console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 earlyprintk=ttyS0 net.ifnames=0"
GRUB_TERMINAL_OUTPUT="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
```
This section assumes that you have already obtained an ISO file from the Red Hat
1. Edit `/etc/default/grub` in a text editor, and add the following parameters:

```config-grub
- GRUB_CMDLINE_LINUX="rootdelay=300 console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 earlyprintk=ttyS0 net.ifnames=0"
+ GRUB_CMDLINE_LINUX="console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 earlyprintk=ttyS0 net.ifnames=0"
GRUB_TERMINAL_OUTPUT="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
```
This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) o
1. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this configuration, open `/boot/grub/menu.lst` in a text editor, and ensure that the default kernel includes the following parameters:

```config-grub
- console=ttyS0 earlyprintk=ttyS0 rootdelay=300
+ console=ttyS0 earlyprintk=ttyS0
```

This will also ensure that all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) o
1. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this configuration, open `/etc/default/grub` in a text editor, and edit the `GRUB_CMDLINE_LINUX` parameter. For example:

```config-grub
- GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
+ GRUB_CMDLINE_LINUX="console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
```

This command also ensures that all console messages are sent to the first serial port, which can assist Azure support with debugging issues. The command also turns off the new RHEL 7 naming conventions for NICs. In addition, we recommend that you remove the following parameters:
This section assumes that you have already installed a RHEL virtual machine in V
1. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open `/etc/default/grub` in a text editor, and edit the `GRUB_CMDLINE_LINUX` parameter. For example:

```config-grub
- GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0"
+ GRUB_CMDLINE_LINUX="console=ttyS0 earlyprintk=ttyS0"
```

This will also ensure that all console messages are sent to the first serial port, which can assist Azure support with debugging issues. In addition, we recommend that you remove the following parameters:
This section assumes that you have already installed a RHEL virtual machine in V
1. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this modification, open `/etc/default/grub` in a text editor, and edit the `GRUB_CMDLINE_LINUX` parameter. For example:

```config-grub
- GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
+ GRUB_CMDLINE_LINUX="console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
```

This configuration also ensures that all console messages are sent to the first serial port, which can assist Azure support with debugging issues. It also turns off the new RHEL 7 naming conventions for NICs. In addition, we recommend that you remove the following parameters:
This section shows you how to prepare a RHEL 7 distro from an ISO using a kickst
EOF

# Set the cmdline
- sed -i 's/^\(GRUB_CMDLINE_LINUX\)=".*"$/\1="console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300"/g' /etc/default/grub
+ sed -i 's/^\(GRUB_CMDLINE_LINUX\)=".*"$/\1="console=tty1 console=ttyS0 earlyprintk=ttyS0"/g' /etc/default/grub
# Enable SSH keepalive
sed -i 's/^#\(ClientAliveInterval\).*$/\1 180/g' /etc/ssh/sshd_config
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-events.md
With Scheduled Events, your application can discover when maintenance will occur
Scheduled Events provides events in the following use cases:

-- [Platform initiated maintenance](../maintenance-and-updates.md?bc=/azure/virtual-machines/windows/breadcrumb/toc.json&toc=/azure/virtual-machines/windows/toc.json) (for example, VM reboot, live migration or memory preserving updates for host)
-- Virtual machine is running on [degraded host hardware](https://azure.microsoft.com/blog/find-out-when-your-virtual-machine-hardware-is-degraded-with-scheduled-events) that is predicted to fail soon
-- Virtual machine was running on a host that suffered a hardware failure
-- User-initiated maintenance (for example, a user restarts or redeploys a VM)
+- [Platform initiated maintenance](../maintenance-and-updates.md?bc=/azure/virtual-machines/windows/breadcrumb/toc.json&toc=/azure/virtual-machines/windows/toc.json) (for example, VM reboot, live migration or memory preserving updates for host).
+- Virtual machine is running on [degraded host hardware](https://azure.microsoft.com/blog/find-out-when-your-virtual-machine-hardware-is-degraded-with-scheduled-events) that is predicted to fail soon.
+- Virtual machine was running on a host that suffered a hardware failure.
+- User-initiated maintenance (for example, a user restarts or redeploys a VM).
- [Spot VM](../spot-vms.md) and [Spot scale set](../../virtual-machine-scale-sets/use-spot.md) instance evictions.

## The Basics
Scheduled Events is disabled for your service if it does not make a request for
### User-initiated maintenance

User-initiated VM maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. You then can test the maintenance preparation logic in your application, and your application can prepare for user-initiated maintenance.
-If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled.
+If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically, events with a user event source can be approved immediately to avoid a delay in user-initiated actions.
## Use the API
curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-versio
```
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01" | ConvertTo-Json -Depth 64
```
+#### Python sample
+````python
+import json
+import requests
+
+metadata_url ="http://169.254.169.254/metadata/scheduledevents"
+header = {'Metadata' : 'true'}
+query_params = {'api-version':'2020-07-01'}
+
+def get_scheduled_events():
+ resp = requests.get(metadata_url, headers = header, params = query_params)
+ data = resp.json()
+ return data
+
+````
+ A response contains an array of scheduled events. An empty array means that no events are currently scheduled.
In the case where there are scheduled events, the response contains an array of
### Event properties

|Property | Description |
| - | - |
-| Document Incarnation | Integer that increases when the events contained in the array of scheduled events changes. Documents with the same incarnation contain the same event information, and the incarnation will be incremented when an event changes. |
+| Document Incarnation | Integer that increases when the events array changes. Documents with the same incarnation contain the same event information, and the incarnation will be incremented when an event changes. |
| EventId | Globally unique identifier for this event. <br><br> Example: <br><ul><li>602d9444-d2cd-49c7-8624-8643e7171297 |
| EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there is no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). This event is made available on a best effort basis <li> `Terminate`: The virtual machine is scheduled to be deleted. |
| ResourceType | Type of resource this event affects. <br><br> Values: <ul><li>`VirtualMachine`|
| Resources| List of resources this event affects. The list is guaranteed to contain machines from at most one [update domain](../availability.md), but it might not contain all machines in the UD. <br><br> Example: <br><ul><li> ["FrontEnd_IN_0", "BackEnd_IN_0"] |
| EventStatus | Status of this event. <br><br> Values: <ul><li>`Scheduled`: This event is scheduled to start after the time specified in the `NotBefore` property.<li>`Started`: This event has started.</ul> No `Completed` or similar status is ever provided. The event is no longer returned when the event is finished. |
-| NotBefore| Time after which this event can start. The event is guaranteed to not start before this time. <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT |
+| NotBefore| Time after which this event can start. The event is guaranteed to not start before this time. Will be blank if the event has already started. <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT |
| Description | Description of this event. <br><br> Example: <br><ul><li> Host server is undergoing maintenance. |
| EventSource | Initiator of the event. <br><br> Example: <br><ul><li> `Platform`: This event is initiated by platform. <li>`User`: This event is initiated by user. |
| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
Each event is scheduled a minimum amount of time in the future based on the even
> [!NOTE]
> In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there is a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible.

+>[!NOTE]
-> In the case the host node experiences a hardware failure Azure will bypass the minimum notice period an immediately begin the recovery process for affected virtual machines. This reduces recovery time in the case that the affected VMs are unable to respond. During the recovery process an event will be created for all impacted VMs with EventType = Reboot and EventStatus = Started
+> If the host node experiences a hardware failure, Azure will bypass the minimum notice period and immediately begin the recovery process for affected virtual machines. This reduces recovery time in case the affected VMs are unable to respond. During the recovery process, an event will be created for all impacted VMs with `EventType = Reboot` and `EventStatus = Started`.
### Polling frequency
You can poll the endpoint for updates as frequently or infrequently as you like.
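For example, a minimal PowerShell polling loop against the endpoint shown above; the 10-second interval is illustrative:

```powershell
# Poll the Scheduled Events endpoint and print any events that appear.
$uri = "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
while ($true) {
    $doc = Invoke-RestMethod -Headers @{"Metadata" = "true"} -Method GET -Uri $uri
    if ($doc.Events.Count -gt 0) {
        $doc.Events | ConvertTo-Json -Depth 8
    }
    Start-Sleep -Seconds 10
}
```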
### Start an event
-After you learn of an upcoming event and finish your logic for graceful shutdown, you can approve the outstanding event by making a `POST` call to Metadata Service with `EventId`. This call indicates to Azure that it can shorten the minimum notification time (when possible).
+After you learn of an upcoming event and finish your logic for graceful shutdown, you can approve the outstanding event by making a `POST` call to Metadata Service with `EventId`. This call indicates to Azure that it can shorten the minimum notification time (when possible). The event may not start immediately upon approval; in some cases, Azure will require the approval of all the VMs hosted on the node before proceeding with the event.
The following JSON sample is expected in the `POST` request body. The request should contain a list of `StartRequests`. Each `StartRequest` contains `EventId` for the event you want to expedite:

```
{
    "StartRequests" : [
The following JSON sample is expected in the `POST` request body. The request sh
}
```
+The service will always return a 200 success code in the case of a valid event ID, even if it was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed.
#### Bash sample

```
curl -H Metadata:true -X POST -d '{"StartRequests": [{"EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5"}]}' http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01
curl -H Metadata:true -X POST -d '{"StartRequests": [{"EventId": "f020ba2e-3bc0-
```
Invoke-RestMethod -Headers @{"Metadata" = "true"} -Method POST -body '{"StartRequests": [{"EventId": "5DD55B64-45AD-49D3-BBC9-F57D4EA97BD7"}]}' -Uri http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 | ConvertTo-Json -Depth 64
```
+#### Python sample
+````python
+import json
+import requests
+
+def confirm_scheduled_event(event_id):
+ # This payload confirms a single event with id event_id
+ payload = json.dumps({"StartRequests": [{"EventId": event_id }]})
+ response = requests.post("http://169.254.169.254/metadata/scheduledevents",
+ headers = {'Metadata' : 'true'},
+ params = {'api-version':'2020-07-01'},
+ data = payload)
+ return response.status_code
+````
> [!NOTE]
> Acknowledging an event allows the event to proceed for all `Resources` in the event, not just the VM that acknowledges the event. Therefore, you can choose to elect a leader to coordinate the acknowledgement, which might be as simple as the first machine in the `Resources` field.
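A minimal PowerShell sketch of that leader pattern, assuming the entries in `Resources` match the VMs' computer names (in some deployments they are role instance names, so adjust the comparison for your environment):

```powershell
# Approve each scheduled event only if this VM is listed first in its Resources array.
$uri = "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
$doc = Invoke-RestMethod -Headers @{"Metadata" = "true"} -Method GET -Uri $uri
foreach ($evt in $doc.Events) {
    if ($evt.Resources[0] -eq $env:COMPUTERNAME) {
        $body = '{"StartRequests": [{"EventId": "' + $evt.EventId + '"}]}'
        Invoke-RestMethod -Headers @{"Metadata" = "true"} -Method POST -Body $body -Uri $uri
    }
}
```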
-## Python Sample
+## Example responses
+The following is an example of a series of events that were seen by two VMs that were live migrated to another node.
-The following sample queries Metadata Service for scheduled events and approves each outstanding event:
+The `DocumentIncarnation` changes every time there is new information in `Events`. An approval of the event would allow the freeze to proceed for both WestNO_0 and WestNO_1. The `DurationInSeconds` of -1 indicates that the platform does not know how long the operation will take.
-```python
-#!/usr/bin/python
+```JSON
+{
+ "DocumentIncarnation": 1,
+ "Events": [
+ ]
+}
-import json
-import socket
-import urllib2
+{
+ "DocumentIncarnation": 2,
+ "Events": [
+ {
+ "EventId": "C7061BAC-AFDC-4513-B24B-AA5F13A16123",
+ "EventStatus": "Scheduled",
+ "EventType": "Freeze",
+ "ResourceType": "VirtualMachine",
+ "Resources": [
+ "WestNO_0",
+ "WestNO_1"
+ ],
+ "NotBefore": "Mon, 11 Apr 2022 22:26:58 GMT",
+ "Description": "Virtual machine is being paused because of a memory-preserving Live Migration operation.",
+ "EventSource": "Platform",
+ "DurationInSeconds": -1
+ }
+ ]
+}
-metadata_url = "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
-this_host = socket.gethostname()
+{
+ "DocumentIncarnation": 3,
+ "Events": [
+ {
+ "EventId": "C7061BAC-AFDC-4513-B24B-AA5F13A16123",
+ "EventStatus": "Started",
+ "EventType": "Freeze",
+ "ResourceType": "VirtualMachine",
+ "Resources": [
+ "WestNO_0",
+ "WestNO_1"
+ ],
+ "NotBefore": "",
+ "Description": "Virtual machine is being paused because of a memory-preserving Live Migration operation.",
+ "EventSource": "Platform",
+ "DurationInSeconds": -1
+ }
+ ]
+}
+{
+ "DocumentIncarnation": 4,
+ "Events": [
+ ]
+}
-def get_scheduled_events():
- req = urllib2.Request(metadata_url)
- req.add_header('Metadata', 'true')
- resp = urllib2.urlopen(req)
- data = json.loads(resp.read())
- return data
+```
+## Python Sample
+
+The following sample queries Metadata Service for scheduled events and approves each outstanding event:
-def handle_scheduled_events(data):
- for evt in data['Events']:
- eventid = evt['EventId']
- status = evt['EventStatus']
- resources = evt['Resources']
- eventtype = evt['EventType']
- resourcetype = evt['ResourceType']
- notbefore = evt['NotBefore'].replace(" ", "_")
- description = evt['Description']
- eventSource = evt['EventSource']
- if this_host in resources:
- print("+ Scheduled Event. This host " + this_host +
- " is scheduled for " + eventtype +
- " by " + eventSource +
- " with description " + description +
- " not before " + notbefore)
- # Add logic for handling events here
+```python
+#!/usr/bin/python
+import json
+import requests
+from time import sleep
+
+# The URL to access the metadata service
+metadata_url ="http://169.254.169.254/metadata/scheduledevents"
+# This must be sent otherwise the request will be ignored
+header = {'Metadata' : 'true'}
+# Current version of the API
+query_params = {'api-version':'2020-07-01'}
+
+def get_scheduled_events():
+ resp = requests.get(metadata_url, headers = header, params = query_params)
+ data = resp.json()
+ return data
+def confirm_scheduled_event(event_id):
+ # This payload confirms a single event with id event_id
+ # You can confirm multiple events in a single request if needed
+ payload = json.dumps({"StartRequests": [{"EventId": event_id }]})
+ response = requests.post(metadata_url,
+ headers= header,
+ params = query_params,
+ data = payload)
+ return response.status_code
+
+def log(event):
+ # This is an optional placeholder for logging events to your system
+ print(event["Description"])
+ return
+
+def advanced_sample(last_document_incarnation):
+ # Poll every second to see if there are new scheduled events to process
+ # Since some events may have necessarily short warning periods, it is
+ # recommended to poll frequently
+ found_document_incarnation = last_document_incarnation
+ while (last_document_incarnation == found_document_incarnation):
+ sleep(1)
+ payload = get_scheduled_events()
+ found_document_incarnation = payload["DocumentIncarnation"]
+
+ # We recommend processing all events in a document together,
+ # even if you won't be actioning on them right away
+ for event in payload["Events"]:
+
+ # Events that have already started, logged for tracking
+ if (event["EventStatus"] == "Started"):
+ log(event)
+
+ # Approve all user initiated events. These are typically created by an
+ # administrator and approving them immediately can help to avoid delays
+ # in admin actions
+ elif (event["EventSource"] == "User"):
+ confirm_scheduled_event(event["EventId"])
+
+ # For this application, freeze events less than 9 seconds are considered
+ # no impact. This will immediately approve them.
+ elif (event["EventType"] == "Freeze" and
+ int(event["DurationInSeconds"]) >= 0 and
+ int(event["DurationInSeconds"]) < 9):
+ confirm_scheduled_event(event["EventId"])
+
+ # Events that may be impactful (e.g., Reboot or redeploy) may need custom
+ # handling for your application
+ else:
+ #TODO Custom handling for impactful events
+ log(event)
+ print("Processed events from document: " + str(found_document_incarnation))
+ return found_document_incarnation
def main():
- data = get_scheduled_events()
- handle_scheduled_events(data)
+ # This will track the last set of events seen
+ last_document_incarnation = "-1"
+
+ input_text = "\
+ Press 1 to poll for new events \n\
+ Press 2 to exit \n "
+ program_exit = False
+ while program_exit == False:
+ user_input = input(input_text)
+ if (user_input == "1"):
+ last_document_incarnation = advanced_sample(last_document_incarnation)
+ elif (user_input == "2"):
+ program_exit = True
if __name__ == '__main__': main()
if __name__ == '__main__':
## Next steps

- Review the Scheduled Events code samples in the [Azure Instance Metadata Scheduled Events GitHub repository](https://github.com/Azure-Samples/virtual-machines-scheduled-events-discover-endpoint-for-non-vnet-vm).
+- Review the Node.js Scheduled Events code samples in [Azure Samples GitHub repository](https://github.com/Azure/vm-scheduled-events).
- Read more about the APIs that are available in the [Instance Metadata Service](instance-metadata-service.md).
- Learn about [planned maintenance for Windows virtual machines in Azure](../maintenance-and-updates.md?bc=/azure/virtual-machines/windows/breadcrumb/toc.json&toc=/azure/virtual-machines/windows/toc.json).
- Learn how to [monitor scheduled events for your VMs through Log Analytics](./scheduled-event-service.md).
+- Learn how to log scheduled events using Azure Event Hub in the [Azure Samples GitHub repository](https://github.com/Azure-Samples/virtual-machines-python-scheduled-events-central-logging).
virtual-machines Dbms_Guide_Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_ibm.md
Remote shared volumes like the Azure services in the listed scenarios are suppor
Using disks based on Azure Page BLOB Storage or Managed Disks, the statements made in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md) apply to deployments with the Db2 DBMS as well.
-As explained earlier in the general part of the document, quotas on IOPS throughput for Azure disks exist. The exact quotas are depending on the VM type used. A list of VM types with their quotas can be found [here (Linux)][virtual-machines-sizes-linux] and [here (Windows)][virtual-machines-sizes-windows].
+As explained earlier in the general part of the document, quotas on IOPS throughput for Azure disks exist. The exact quotas depend on the VM type used. A list of VM types with their quotas can be found [here (Linux)](../../sizes.md) and [here (Windows)](../../sizes.md).
As long as the current IOPS quota per disk is sufficient, it is possible to store all the database files on one single mounted disk. However, you should always separate the data files and transaction log files onto different disks/VHDs.
virtual-network Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md
IPv6 for Azure VNET is a foundational feature set which enables customers to hos
## Limitations

The current IPv6 for Azure virtual network release has the following limitations:

- VPN gateways currently support IPv4 traffic only, but they still CAN be deployed in a Dual-stacked VNET.
+- Dual-stack configurations that use Floating IP can only be used with Public load balancers (not Internal load balancers).
- Application Gateway v2 does not currently support IPv6. It can operate in a dual stack VNet using only IPv4, but the gateway subnet must be IPv4-only. Application Gateway v1 does not support dual stack VNets.
- The Azure platform (AKS, etc.) does not support IPv6 communication for Containers.
- IPv6-only Virtual Machines or Virtual Machines Scale Sets are not supported; each NIC must include at least one IPv4 IP configuration.
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
For information on provisioning an IP address, see [Create a custom IP address p
When a custom IP prefix is in **Provisioned**, **Commissioning**, or **Commissioned** state, a linked public IP prefix can be created, either as a subset of the custom IP prefix range or as the entire range.
-> [!NOTE]
-> A public IP prefix can be derived from a custom IP prefix in another subscription with the appropriate permissions.
-
Use the following CLI and PowerShell commands to create public IP prefixes with the `--custom-ip-prefix-name` (CLI) and `-CustomIpPrefix` (PowerShell) parameters that point to an existing custom IP prefix.

|Tool|Command|
Use the following CLI and PowerShell commands to create public IP prefixes with
|CLI|[az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az_network_public_ip_prefix_create)|
|PowerShell|[New-AzPublicIpPrefix](/powershell/module/az.network/new-azpublicipprefix)|
+> [!NOTE]
+> A public IP prefix can be derived from a custom IP prefix in another subscription with the appropriate permissions using Azure PowerShell or Azure portal.
++
+An example derivation of a public IP prefix from a custom IP prefix using PowerShell is shown below:
+
+ ```azurepowershell-interactive
+# Select the subscription that contains the custom IP prefix
+Set-AzContext -Subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+$customprefix = Get-AzCustomIpPrefix -Name myBYOIPPrefix -ResourceGroupName myResourceGroup
+# Switch to the subscription where the derived public IP prefix will be created
+Set-AzContext -Subscription yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
+New-AzPublicIpPrefix -Name myPublicIpPrefix -ResourceGroupName myResourceGroup2 -Location eastus -PrefixLength 30 -CustomIpPrefix $customprefix
+```
+ Once created, the IPs in the child public IP prefix can be associated with resources like any other standard SKU static public IPs. To learn more about using IPs from a public IP prefix, including selection of a specific IP from the range, see [Create a static public IP address from a prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix). ## Migration of active prefixes from outside Microsoft
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Virtual Network NAT is scaled out from creation. There isn't a ramp up or scale-
* Virtual Network NAT is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. The NAT gateway will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as basic load balancer or basic public IPs, aren't compatible with Virtual Network NAT. Basic resources must be placed on a subnet not associated to a NAT gateway. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway.
-* To upgrade a basic load balancer too standard, see [Upgrade a public basic Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md)
+* To upgrade a basic load balancer to standard, see [Upgrade a public basic Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md)
-* To upgrade a basic public IP too standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
+* To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
* Virtual Network NAT is the recommended method for outbound connectivity. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md)
vpn-gateway About Vpn Profile Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-vpn-profile-download.md
Title: 'About Point-to-Site VPN client profiles'
+ Title: 'About Point-to-Site VPN client profiles for Azure AD authentication'
-description: Learn about P2S VPN client profile files.
-
+description: Learn about P2S VPN client profile files for Azure AD authentication.
- Previously updated : 03/20/2021 Last updated : 05/04/2022
-# Working with P2S VPN client profile files
+# Generate P2S Azure VPN client profile files - Azure AD authentication
-Client profile files contain information that is necessary to configure a VPN connection. This article helps you obtain and understand the information needed for a VPN client profile.
+After you install the Azure VPN Client, you configure the VPN client profile. Client profile files contain information that's necessary to configure a VPN connection. This article helps you obtain and understand the information needed to configure an Azure VPN Client profile.
-## Generate and download profile
+## <a name="generate"></a>Generate profile files
-You can generate client configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file.
+You can generate VPN client profile configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file.
### Portal

1. In the Azure portal, navigate to the virtual network gateway for the virtual network that you want to connect to.
1. On the virtual network gateway page, select **Point-to-site configuration**.
1. At the top of the Point-to-site configuration page, select **Download VPN client**. It takes a few minutes for the client configuration package to generate.
-1. Your browser indicates that a client configuration zip file is available. It is named the same name as your gateway. Unzip the file to view the folders.
+1. Your browser indicates that a client configuration zip file is available. It has the same name as your gateway. Unzip the file to view the folders.
### PowerShell
To generate using PowerShell, you can use the following example:
1. Copy the URL to your browser to download the zip file, then unzip the file to view the folders.
+## <a name="extract"></a>Extract the zip file
+
+Extract the zip file. The file contains the following folders:
+
+* **AzureVPN**: The AzureVPN folder contains the **Azurevpnconfig.xml** file.
+* **Generic**: The generic folder contains the public server certificate and the VpnSettings.xml file. The VpnSettings.xml file contains information needed to configure a generic client.
+
+## <a name="get"></a>Retrieve file information
+
+In the **AzureVPN** folder, navigate to the ***azurevpnconfig.xml*** file and open it with Notepad. Make a note of the text between the following tags. You may need this information later when configuring the Azure VPN Client.
+
+```
+<audience> </audience>
+<issuer> </issuer>
+<tennant> </tennant>
+<fqdn> </fqdn>
+<serversecret> </serversecret>
+```
+
+## <a name="details"></a>Profile details
+
+When you add a connection, use the information you collected in the previous step for the profile details page. The fields correspond to the following information:
-* The **OpenVPN folder** contains the *ovpn* profile that needs to be modified to include the key and the certificate. For more information, see [Configure OpenVPN clients for Azure VPN Gateway](vpn-gateway-howto-openvpn-clients.md#windows). If Azure AD authentication is selected on the VPN gateway, this folder is not present in the zip file. Instead, navigate to the AzureVPN folder and locate azurevpnconfig.xml.
+* **Audience:** Identifies the recipient resource the token is intended for.
+* **Issuer:** Identifies the Security Token Service (STS) that emitted the token, as well as the Azure AD tenant.
+* **Tenant:** Contains an immutable, unique identifier of the directory tenant that issued the token.
+* **FQDN:** The fully qualified domain name (FQDN) on the Azure VPN gateway.
+* **ServerSecret:** The VPN gateway preshared key.
## Next steps
vpn-gateway Ikev2 Openvpn From Sstp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ikev2-openvpn-from-sstp.md
Previously updated : 06/04/2021 Last updated : 05/04/2022
Point-to-site VPN can use one of the following protocols:
* IKEv2 VPN, a standards-based IPsec VPN solution. IKEv2 VPN can be used to connect from Mac devices (macOS versions 10.11 and above).

>[!NOTE]
>IKEv2 and OpenVPN for P2S are available for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) only. They are not available for the classic deployment model. Basic gateway SKU does not support IKEv2 or OpenVPN protocols. If you are using the basic SKU, you will have to delete and recreate a production SKU Virtual Network Gateway.
>
-## Migrating from SSTP to IKEv2 or OpenVPN
+## <a name="migrate"></a>Migrating from SSTP to IKEv2 or OpenVPN
There may be cases when you want to support more than 128 concurrent P2S connections to a VPN gateway but are using SSTP. In such a case, you need to move to the IKEv2 or OpenVPN protocol.
There may be cases when you want to support more than 128 concurrent P2S connect
This is the simplest option. SSTP and IKEv2 can coexist on the same gateway and give you a higher number of concurrent connections. You can simply enable IKEv2 on the existing gateway and redownload the client.
-Adding IKEv2 to an existing SSTP VPN gateway will not affect existing clients and you can configure them to use IKEv2 in small batches or just configure the new clients to use IKEv2. If a Windows client is configured for both SSTP and IKEv2, it will try to connect using IKEV2 first and if that fails, it will fall back to SSTP.
+Adding IKEv2 to an existing SSTP VPN gateway won't affect existing clients and you can configure them to use IKEv2 in small batches or just configure the new clients to use IKEv2. If a Windows client is configured for both SSTP and IKEv2, it will try to connect using IKEv2 first and if that fails, it will fall back to SSTP.
**IKEv2 uses non-standard UDP ports so you need to ensure that these ports are not blocked on the user's firewall. The ports in use are UDP 500 and 4500.**
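Most client environments allow outbound UDP by default. If the user's Windows Defender Firewall does filter outbound traffic, a rule like the following sketch, run in an elevated PowerShell session, can unblock IKEv2 (the rule name is illustrative):

```powershell
# Allow outbound IKEv2 traffic: UDP 500 (IKE) and UDP 4500 (NAT traversal).
New-NetFirewallRule -DisplayName "Allow IKEv2 VPN (UDP 500, 4500)" `
    -Direction Outbound -Protocol UDP -RemotePort 500, 4500 -Action Allow
```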
-To add IKEv2 to an existing gateway, simply go to the "point-to-site configuration" tab under the Virtual Network Gateway in portal, and select **IKEv2 and SSTP (SSL)** from the drop-down box.
-
-![Screenshot that shows the "Point-to-site configuration" page with the "Tunnel type" drop-down open, and "IKEv2 and SSTP(SSL)" selected.](./media/ikev2-openvpn-from-sstp/sstptoikev2.png "IKEv2")
+To add IKEv2 to an existing gateway, go to the "point-to-site configuration" tab under the Virtual Network Gateway in portal, and select **IKEv2 and SSTP (SSL)** from the drop-down box.
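If you prefer to script the change, a sketch using Az PowerShell (the resource group and gateway names are placeholders):

```powershell
# Enable both IKEv2 and SSTP on an existing gateway.
$gw = Get-AzVirtualNetworkGateway -ResourceGroupName "TestRG1" -Name "VNet1GW"
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -VpnClientProtocol "IkeV2", "SSTP"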
### Option 2 - Remove SSTP and enable OpenVPN on the Gateway
-Since SSTP and OpenVPN are both TLS-based protocol, they cannot coexist on the same gateway. If you decide to move away from SSTP to OpenVPN, you will have to disable SSTP and enable OpenVPN on the gateway. This operation will cause the existing clients to lose connectivity to the VPN gateway until the new profile has been configured on the client.
+Since SSTP and OpenVPN are both TLS-based protocols, they can't coexist on the same gateway. If you decide to move away from SSTP to OpenVPN, you'll have to disable SSTP and enable OpenVPN on the gateway. This operation will cause the existing clients to lose connectivity to the VPN gateway until the new profile has been configured on the client.
You can enable OpenVPN alongside IKEv2 if you desire. OpenVPN is TLS-based and uses the standard TCP 443 port. To switch to OpenVPN, go to the "point-to-site configuration" tab under the Virtual Network Gateway in the portal, and select **OpenVPN (SSL)** or **IKEv2 and OpenVPN (SSL)** from the drop-down box.
-![point-to-site](./media/ikev2-openvpn-from-sstp/sstptoopenvpn.png "OpenVPN")
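The same switch can also be scripted; a sketch with placeholder names:

```powershell
# Replace SSTP with OpenVPN; use "IkeV2", "OpenVPN" to enable both.
$gw = Get-AzVirtualNetworkGateway -ResourceGroupName "TestRG1" -Name "VNet1GW"
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -VpnClientProtocol "OpenVPN"
```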
-Once the gateway has been configured, existing clients will not be able to connect until you [deploy and configure the OpenVPN Clients](./vpn-gateway-howto-openvpn-clients.md).
+Once the gateway has been configured, existing clients won't be able to connect until you [deploy and configure the OpenVPN clients](./vpn-gateway-howto-openvpn-clients.md).
-If you are using Windows 10, you can also use the [Azure VPN Client for Windows](./openvpn-azure-ad-client.md#to-download-the-azure-vpn-client)
+If you're using Windows 10, you can also use the [Azure VPN Client for Windows](./openvpn-azure-ad-client.md#download).
+## <a name="faq"></a>Frequently asked questions
-## Frequently asked questions
### What are the client configuration requirements?

>[!NOTE]
The zip file also provides the values of some of the important settings on the A
### <a name="IKE/IPsec policies"></a>What IKE/IPsec policies are configured on VPN gateways for P2S?

**IKEv2**

| **Cipher** | **Integrity** | **PRF** | **DH Group** |
vpn-gateway Openvpn Azure Ad Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-client.md
Title: 'Configure VPN clients for P2S OpenVPN protocol connections: Azure AD authentication'
-description: Learn how to configure VPN clients to connect to a VNet using VPN Gateway Point-to-Site VPN, OpenVPN protocol connections, and Azure AD authentication.
+ Title: 'Configure Azure VPN Client for P2S OpenVPN protocol connections: Azure AD authentication: Windows'
+description: Learn how to configure the Azure VPN Client to connect to a VNet using VPN Gateway point-to-site VPN, OpenVPN protocol connections, and Azure AD authentication from a Windows computer.
- - Previously updated : 08/20/2021 Last updated : 05/05/2022
-# Configure VPN clients for P2S OpenVPN protocol connections - Azure AD authentication
+# Configure Azure VPN Client for P2S OpenVPN protocol connections - Azure AD authentication - Windows
-This article helps you configure a VPN client to connect to a virtual network using Point-to-Site VPN and Azure Active Directory authentication. Before you can connect and authenticate using Azure AD, you must first configure your Azure AD tenant. For more information, see [Configure an Azure AD tenant](openvpn-azure-ad-tenant.md). For more information about Point-to-Site, see [About Point-to-Site VPN](point-to-site-about.md).
+This article helps you configure the Azure VPN Client on a Windows computer to connect to a virtual network using a VPN Gateway point-to-site VPN and Azure Active Directory authentication. Before you can connect and authenticate using Azure AD, you must first configure your Azure AD tenant. For more information, see [Configure an Azure AD tenant](openvpn-azure-ad-tenant.md). For more information about point-to-site, see [About point-to-site VPN](point-to-site-about.md).
[!INCLUDE [OpenVPN note](../../includes/vpn-gateway-openvpn-auth-include.md)]
-## <a name="profile"></a>Working with client profiles
-
-For every computer that wants to connect to the VNet via the VPN client, you need to download the Azure VPN Client for the computer, and also configure a VPN client profile. If you want to configure multiple computers, you can create a client profile on one computer, export it, and then import it to other computers.
-
-### To download the Azure VPN client
+## <a name="workflow"></a>Workflow
+After your Azure VPN Gateway point-to-site configuration is complete, your next steps are as follows:
-### <a name="cert"></a>To create a certificate-based client profile
+1. Download and install the Azure VPN Client.
+1. Generate the VPN client profile configuration package.
+1. Import the client profile settings to the VPN client.
+1. Create a connection.
+1. Optional: export the profile settings from the client and import them to other client computers.
-When working with a certificate-based profile, make sure that the appropriate certificates are installed on the client computer. For more information about certificates, see [Install client certificates](point-to-site-how-to-vpn-client-install-azure-cert.md).
-![Screenshot of certificate authentication.](./media/openvpn-azure-ad-client/create/create-cert1.jpg)
+## <a name="download"></a>Download the Azure VPN Client
-### <a name="radius"></a>To create a RADIUS client profile
-
-![Screenshot of RADIUS authentication.](./media/openvpn-azure-ad-client/create/create-radius1.jpg)
-
-> [!NOTE]
-> The Server Secret can be exported in the P2S VPN client profile. Instructions on how to export a client profile can be found [here](about-vpn-profile-download.md).
->
-
-### <a name="export"></a>To export and distribute a client profile
-
-Once you have a working profile and need to distribute it to other users, you can export it using the following steps:
-
-1. Highlight the VPN client profile that you want to export, select the **...**, then select **Export**.
- ![Screenshot that shows the "Azure VPN Client" page, with the ellipsis selected and "Export" highlighted.](./media/openvpn-azure-ad-client/export/export1.jpg)
+## <a name="generate"></a>Generate the VPN client profile configuration package
-2. Select the location that you want to save this profile to, leave the file name as is, then select **Save** to save the xml file.
+To generate the VPN client profile configuration package, see [Working with P2S VPN client profile files](about-vpn-profile-download.md). After you generate the package, follow the steps to extract the profile configuration files.
- ![export](./media/openvpn-azure-ad-client/export/export2.jpg)
+## <a name="import"></a>Import the profile file
-### <a name="import"></a>To import a client profile
+For Azure AD authentication configurations, the **azurevpnconfig.xml** file is used. The file is located in the **AzureVPN** folder of the VPN client profile configuration package.
1. On the page, select **Import**.

   ![Screenshot that shows the "Add" button selected and the "Import" action highlighted in the lower left-side of the window.](./media/openvpn-azure-ad-client/import/import1.jpg)
-2. Browse to the profile xml file and select it. With the file selected, select **Open**.
+1. Browse to the profile xml file and select it. With the file selected, select **Open**.
![Screenshot that shows a profile x m l file selected.](./media/openvpn-azure-ad-client/import/import2.jpg)
-3. Specify the name of the profile and select **Save**.
+1. Specify the name of the profile and select **Save**.
- ![Screenshot that shows the "Connection Name" highlighted and the "Save" button selected.](./media/openvpn-azure-ad-client/import/import3.jpg)
+ ![Save the profile.](./media/openvpn-azure-ad-client/import/import3.jpg)
-4. Select **Connect** to connect to the VPN.
+1. Select **Connect** to connect to the VPN.
![Screenshot that shows the VPN and "Connect" button selected.](./media/openvpn-azure-ad-client/import/import4.jpg)
-5. Once connected, the icon will turn green and say **Connected**.
+1. Once connected, the icon will turn green and say **Connected**.
![import](./media/openvpn-azure-ad-client/import/import5.jpg)
-### <a name="delete"></a>To delete a client profile
-
-1. Select the ellipses next to the client profile that you want to delete. Then, select **Remove**.
-
- ![Screenshot that shows the ellipses and "Remove" option selected.](./media/openvpn-azure-ad-client/delete/delete1.jpg)
-
-2. Select **Remove** to delete.
-
- ![delete](./media/openvpn-azure-ad-client/delete/delete2.jpg)
## <a name="connection"></a>Create a connection

1. On the page, select **+**, then **+ Add**.

   ![Screenshot that shows the "Add" button selected.](./media/openvpn-azure-ad-client/create/create1.jpg)
-2. Fill out the connection information. If you are unsure of the values, contact your administrator. After filling out the values, select **Save**.
-
- ![Screenshot that shows the VPN connection properties highlighted and the "Save" button selected.](./media/openvpn-azure-ad-client/create/create2.jpg)
+1. Fill out the connection information. If you're unsure of the values, contact your administrator. After filling out the values, select **Save**.
-3. Select **Connect** to connect to the VPN.
+1. Select **Connect** to connect to the VPN.
- ![Screenshot that shows the "Connect" button selected.](./media/openvpn-azure-ad-client/create/create3.jpg)
+1. Select the proper credentials, then select **Continue**.
-4. Select the proper credentials, then select **Continue**.
-
- ![Screenshot that shows example credentials highlighted and the "Continue" button selected.](./media/openvpn-azure-ad-client/create/create4.jpg)
-
-5. Once successfully connected, the icon will turn green and say **Connected**.
-
- ![connection](./media/openvpn-azure-ad-client/create/create5.jpg)
+1. Once successfully connected, the icon will turn green and say **Connected**.
### <a name="autoconnect"></a>To connect automatically
These steps help you configure your connection to connect automatically with Alw
![Screenshot of the VPN home page with "VPN Settings" selected.](./media/openvpn-azure-ad-client/auto/auto1.jpg)
-2. Select **Yes** on the switch apps dialogue box.
+1. Select **Yes** on the switch apps dialogue box.
![Screenshot of the "Did you mean to switch apps?" dialog with the "Yes" button selected.](./media/openvpn-azure-ad-client/auto/auto2.jpg)
-3. Make sure the connection that you want to set is not already connected, then highlight the profile and check the **Connect automatically** check box.
+1. Make sure the connection that you want to set isn't already connected, then highlight the profile and check the **Connect automatically** check box.
![Screenshot of the "Settings" window, with the "Connect automatically" box checked.](./media/openvpn-azure-ad-client/auto/auto3.jpg)
-4. Select **Connect** to initiate the VPN connection.
+1. Select **Connect** to initiate the VPN connection.
![auto](./media/openvpn-azure-ad-client/auto/auto4.jpg)
+## <a name="export"></a>Export and distribute a client profile
+
+Once you have a working profile and need to distribute it to other users, you can export it using the following steps:
+
+1. Highlight the VPN client profile that you want to export, select the **...**, then select **Export**.
+
+ ![Screenshot that shows the "Azure VPN Client" page, with the ellipsis selected and "Export" highlighted.](./media/openvpn-azure-ad-client/export/export1.jpg)
+
+1. Select the location that you want to save this profile to, leave the file name as is, then select **Save** to save the xml file.
+
+ ![export](./media/openvpn-azure-ad-client/export/export2.jpg)
+
+## <a name="delete"></a>Delete a client profile
+
+1. Select the ellipses next to the client profile that you want to delete. Then, select **Remove**.
+
+ ![Screenshot that shows the ellipses and "Remove" option selected.](./media/openvpn-azure-ad-client/delete/delete1.jpg)
+
+1. Select **Remove** to delete.
+
+ ![delete](./media/openvpn-azure-ad-client/delete/delete2.jpg)
+## <a name="diagnose"></a>Diagnose connection issues

1. To diagnose connection issues, you can use the **Diagnose** tool. Select the **...** next to the VPN connection that you want to diagnose to reveal the menu. Then select **Diagnose**.

   ![Screenshot of the ellipsis and "Diagnose selected."](./media/openvpn-azure-ad-client/diagnose/diagnose1.jpg)
-2. On the **Connection Properties** page, select **Run Diagnosis**.
+1. On the **Connection Properties** page, select **Run Diagnosis**.
![Screenshot that shows the "Connection Properties" page with "Run Diagnosis" selected.](./media/openvpn-azure-ad-client/diagnose/diagnose2.jpg)
-3. Sign in with your credentials.
+1. Sign in with your credentials.
![Screenshot that shows the "Let's get you signed in" dialog with a "Work or school account" selected.](./media/openvpn-azure-ad-client/diagnose/diagnose3.jpg)
-4. View the diagnosis results.
+1. View the diagnosis results.
![diagnose](./media/openvpn-azure-ad-client/diagnose/diagnose4.jpg)
vpn-gateway Vpn Gateway Howto Openvpn Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-openvpn-clients.md
Title: 'How to configure OpenVPN clients for P2S VPN gateways'
description: Learn how to configure OpenVPN clients for Azure VPN Gateway. This article helps you configure Windows, Linux, iOS, and Mac clients.
Previously updated : 07/27/2021 Last updated : 05/05/2022

# Configure OpenVPN clients for Azure VPN Gateway
-This article helps you configure **OpenVPN &reg; Protocol** clients.
+This article helps you configure **OpenVPN &reg; Protocol** clients for Azure VPN Gateway point-to-site configurations that use OpenVPN.
+
+This article contains general instructions. For the following point-to-site authentication types, see the associated articles instead:
+
+* Azure AD authentication
+ * [Windows clients](openvpn-azure-ad-client.md)
+ * [macOS clients](openvpn-azure-ad-client-mac.md)
+
+* RADIUS authentication
+ * [All RADIUS clients](point-to-site-vpn-client-configuration-radius.md)
## Before you begin
-Verify that you have completed the steps to configure OpenVPN for your VPN gateway. For details, see [Configure OpenVPN for Azure VPN Gateway](vpn-gateway-howto-openvpn.md).
+Verify that you've completed the steps to configure OpenVPN for your VPN gateway. For details, see [Configure OpenVPN for Azure VPN Gateway](vpn-gateway-howto-openvpn.md).
## VPN client configuration files

You can generate and download the VPN client configuration files from the portal, or by using PowerShell. Either method returns the same zip file. Unzip the file to view the OpenVPN folder.
+When you open the zip file, if you don't see the OpenVPN folder, verify that your VPN gateway is configured to use the OpenVPN tunnel type. Additionally, if you're using Azure AD authentication, you may not have an OpenVPN folder. See the links at the top of this article for Azure AD instructions.
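For reference, the profile in the OpenVPN folder (**vpnconfig.ovpn**) contains placeholders that you replace before importing the profile into your client, as part of the configuration steps that follow. This fragment is a sketch of the certificate-authentication edit, with the placeholder names as they appear in the generated file:

```
# Fragment of vpnconfig.ovpn (illustrative). Paste your client certificate and
# private key, in PEM format, in place of the generated placeholders.
<cert>
$CLIENTCERTIFICATE
</cert>
<key>
$PRIVATEKEY
</key>
```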
+:::image type="content" source="./media/howto-openvpn-clients/download.png" alt-text="Screenshot of Download VPN client highlighted.":::

[!INCLUDE [configuration steps](../../includes/vpn-gateway-vwan-config-openvpn-clients.md)]

## Next steps
-If you want the VPN clients to be able to access resources in another VNet, then follow the instructions on the [VNet-to-VNet](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) article to set up a vnet-to-vnet connection. Be sure to enable BGP on the gateways and the connections, otherwise traffic will not flow.
+If you want the VPN clients to be able to access resources in another VNet, follow the instructions in the [VNet-to-VNet](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) article to set up a VNet-to-VNet connection. Be sure to enable BGP on the gateways and the connections, otherwise traffic won't flow.
**"OpenVPN" is a trademark of OpenVPN Inc.**