Updates from: 01/12/2023 02:13:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-overview.md
Previously updated : 10/14/2021 Last updated : 01/10/2023
A custom policy is represented as one or more XML-formatted files, which refer t
## Custom policy starter pack
-Azure AD B2C custom policy [starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#get-the-starter-pack) comes with several pre-built policies to get you going quickly. Each of these starter packs contains the smallest number of technical profiles and user journeys needed to achieve the scenarios described:
+Azure AD B2C custom policy [starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#get-the-starter-pack) comes with several pre-built policies to get you started quickly. Each of these starter packs contains the smallest number of technical profiles and user journeys needed to achieve the scenarios described:
- **LocalAccounts** - Enables the use of local accounts only. - **SocialAccounts** - Enables the use of social (or federated) accounts only. - **SocialAndLocalAccounts** - Enables the use of both local and social accounts. Most of our samples refer to this policy. - **SocialAndLocalAccountsWithMFA** - Enables social, local, and multi-factor authentication options.
-In the [Azure AD B2C samples GitHub repository](https://github.com/azure-ad-b2c/samples), you'll find samples for several enhanced Azure AD B2C custom CIAM user journeys. For example, local account policy enhancements, social account policy enhancements, MFA enhancements, user interface enhancements, generic enhancements, app migration, user migration, conditional access, web test, and CI/CD.
+In the [Azure AD B2C samples GitHub repository](https://github.com/azure-ad-b2c/samples), you'll find samples for several enhanced Azure AD B2C custom CIAM user journeys and scenarios. For example, local account policy enhancements, social account policy enhancements, MFA enhancements, user interface enhancements, generic enhancements, app migration, user migration, conditional access, web test, and CI/CD.
## Understanding the basics ### Claims
-A claim provides temporary storage of data during an Azure AD B2C policy execution. It can store information about the user, such as first name, last name, or any other claim obtained from the user or other systems (claims exchanges). The [claims schema](claimsschema.md) is the place where you declare your claims.
+A claim provides temporary storage of data during an Azure AD B2C policy execution. Claims are like variables in a programming language. A claim can store information about the user, such as first name, last name, or any other claim obtained from the user or other systems (claims exchanges). The [claims schema](claimsschema.md) is the place where you declare your claims.
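For illustration, a claim declaration in the claims schema might look like the following minimal sketch (the claim name, data type, and user input type here are examples rather than required values):

```xml
<BuildingBlocks>
  <ClaimsSchema>
    <!-- Declares a string claim that a self-asserted page can collect in a text box -->
    <ClaimType Id="city">
      <DisplayName>City</DisplayName>
      <DataType>string</DataType>
      <UserInputType>TextBox</UserInputType>
    </ClaimType>
  </ClaimsSchema>
</BuildingBlocks>
```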
When the policy runs, Azure AD B2C sends and receives claims to and from internal and external parties and then sends a subset of these claims to your relying party application as part of the token. Claims are used in these ways:
When the policy runs, Azure AD B2C sends and receives claims to and from interna
### Manipulating your claims
-The [claims transformations](claimstransformations.md) are predefined functions that can be used to convert a given claim into another one, evaluate a claim, or set a claim value. For example adding an item to a string collection, changing the case of a string, or evaluate a date and time claim. A claims transformation specifies a transform method.
+The [claims transformations](claimstransformations.md) are predefined functions that can be used to convert a given claim into another one, evaluate a claim, or set a claim value. For example, adding an item to a string collection, changing the case of a string, or evaluating a date and time claim. A claims transformation specifies a transform method, which is also predefined.
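As a sketch, a claims transformation that lower-cases an email claim could be defined like this (the transformation Id and the claim it references are illustrative; `ChangeCase` is one of the predefined methods):

```xml
<ClaimsTransformation Id="ChangeEmailCaseToLower" TransformationMethod="ChangeCase">
  <InputClaims>
    <!-- The claim whose value is read -->
    <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="inputClaim" />
  </InputClaims>
  <InputParameters>
    <!-- Accepts LOWER or UPPER -->
    <InputParameter Id="toCase" DataType="string" Value="LOWER" />
  </InputParameters>
  <OutputClaims>
    <!-- The claim that receives the transformed value -->
    <OutputClaim ClaimTypeReferenceId="email" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```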
### Customize and localize your UI
The following diagram illustrates how Azure AD B2C uses a validation technical p
## Inheritance model
-Each starter pack includes the following files:
+Each [starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack) includes the following files:
- A **Base** file that contains most of the definitions. To help with troubleshooting and long-term maintenance of your policies, try to minimize the number of changes you make to this file. - A **Localization** file that holds the localization strings. This policy file is derived from the Base file. Use this file to accommodate different languages to suit your customer needs.
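In each file, a **BasePolicy** element names the parent policy it inherits from. A minimal sketch, assuming the starter-pack policy names and a tenant called *yourtenant.onmicrosoft.com*:

```xml
<TrustFrameworkPolicy xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
                      TenantId="yourtenant.onmicrosoft.com"
                      PolicyId="B2C_1A_TrustFrameworkExtensions"
                      PublicPolicyUri="http://yourtenant.onmicrosoft.com/B2C_1A_TrustFrameworkExtensions">
  <!-- The extensions file inherits from the localization file,
       which in turn inherits from the base file -->
  <BasePolicy>
    <TenantId>yourtenant.onmicrosoft.com</TenantId>
    <PolicyId>B2C_1A_TrustFrameworkLocalization</PolicyId>
  </BasePolicy>
</TrustFrameworkPolicy>
```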
The following diagram shows the relationship between the policy files and the re
### Best practices
-Within an Azure AD B2C custom policy, you can integrate your own business logic to build the user experiences you require and extend functionality of the service. We have a set of best practices and recommendations to get started.
+Within an Azure AD B2C custom policy, you can integrate your own business logic to build the user experiences you require and extend functionality of the service. We have a set of best practices and recommendations to get started.
- Create your logic within the **extension policy**, or **relying party policy**. You can add new elements, which will override the base policy by referencing the same ID. This approach will allow you to scale out your project while making it easier to upgrade base policy later on if Microsoft releases new starter packs. - Within the **base policy**, we highly recommend avoiding making any changes. When necessary, make comments where the changes are made.
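For example, redefining an element in the extensions file with the same Id overrides its definition in the base file. The following hedged sketch uses the `login-NonInteractive` technical profile from the starter pack; the two placeholder values stand in for the application IDs you record when registering the Identity Experience Framework applications:

```xml
<!-- In TrustFrameworkExtensions.xml -->
<ClaimsProviders>
  <ClaimsProvider>
    <DisplayName>Local Account SignIn</DisplayName>
    <TechnicalProfiles>
      <!-- Same Id as the base file, so these settings override the base definition -->
      <TechnicalProfile Id="login-NonInteractive">
        <Metadata>
          <Item Key="client_id">ProxyIdentityExperienceFrameworkAppId</Item>
          <Item Key="IdTokenAudience">IdentityExperienceFrameworkAppId</Item>
        </Metadata>
      </TechnicalProfile>
    </TechnicalProfiles>
  </ClaimsProvider>
</ClaimsProviders>
```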
To get started with Azure AD B2C custom policies:
1. Add the necessary [policy keys](tutorial-create-user-flows.md?pivots=b2c-custom-policy#add-signing-and-encryption-keys-for-identity-experience-framework-applications) and [register the Identity Experience Framework applications](tutorial-create-user-flows.md?pivots=b2c-custom-policy#register-identity-experience-framework-applications). 1. [Get the Azure AD B2C policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#get-the-starter-pack) and upload to your tenant. 1. After you upload the starter pack, [test your sign-up or sign-in policy](tutorial-create-user-flows.md?pivots=b2c-custom-policy#test-the-custom-policy).
-1. We recommend you to download and install [Visual Studio Code](https://code.visualstudio.com/) (VS Code). Visual Studio Code is a lightweight but powerful source code editor, which runs on your desktop and is available for Windows, macOS, and Linux. With VS Code, you can quickly navigate through and edit your Azure AD B2C custom policy XML files by installing the [Azure AD B2C extension for VS Code](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c)
+1. We recommend that you download and install [Visual Studio Code](https://code.visualstudio.com/) (VS Code). Visual Studio Code is a lightweight but powerful source code editor, which runs on your desktop and is available for Windows, macOS, and Linux. With VS Code, you can quickly navigate through and edit your Azure AD B2C custom policy XML files by installing the [Azure AD B2C extension for VS Code](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c).
## Next steps
active-directory-b2c Enable Authentication Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-api.md
Previously updated : 10/26/2021 Last updated : 01/10/2023
# Enable authentication in your own web API by using Azure AD B2C
-To authorize access to a web API, serve only requests that include a valid Azure Active Directory B2C (Azure AD B2C)-issued access token. This article shows you how to enable Azure AD B2C authorization to your web API. After you complete the steps in this article, only users who obtain a valid access token will be authorized to call your web API endpoints.
+To authorize access to a web API, you can serve only requests that include a valid access token that's issued by Azure Active Directory B2C (Azure AD B2C). This article shows you how to enable Azure AD B2C authorization to your web API. After you complete the steps in this article, only users who obtain a valid access token will be authorized to call your web API endpoints.
## Prerequisites
The app does the following:
1. It passes the access token as a bearer token in the authorization header of the HTTP request by using this format: ```http
- Authorization: Bearer <token>
+ Authorization: Bearer <access token>
``` The web API does the following:
The web API does the following:
### App registration overview
-To enable your app to sign in with Azure AD B2C and call a web API, you must register two applications in the Azure AD B2C directory.
+To enable your app to sign in with Azure AD B2C and call a web API, you need to register two applications in the Azure AD B2C directory.
- The *web, mobile, or SPA application* registration enables your app to sign in with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, which uniquely identifies your application (for example, *App ID: 1*).
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
An email address that can be used by a [username sign-in account](sign-in-option
Manage the [identity providers](add-identity-provider.md) available to your user flows in your Azure AD B2C tenant. - [List identity providers available in the Azure AD B2C tenant](/graph/api/identityproviderbase-availableprovidertypes)-- [List identity providers configured in the Azure AD B2C tenant](/graph/api/iidentitycontainer-list-identityproviders)
+- [List identity providers configured in the Azure AD B2C tenant](/graph/api/identitycontainer-list-identityproviders)
- [Create an identity provider](/graph/api/identitycontainer-post-identityproviders) - [Get an identity provider](/graph/api/identityproviderbase-get) - [Update identity provider](/graph/api/identityproviderbase-update)
active-directory-b2c Password Complexity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/password-complexity.md
Previously updated : 09/20/2021 Last updated : 01/10/2023
Azure Active Directory B2C (Azure AD B2C) supports changing the complexity requi
## Password rule enforcement
-During sign-up or password reset, an end user must supply a password that meets the complexity rules. Password complexity rules are enforced per user flow. It is possible to have one user flow require a four-digit pin during sign-up while another user flow requires an eight character string during sign-up. For example, you may use a user flow with different password complexity for adults than for children.
+During sign-up or password reset, an end user must supply a password that meets the complexity rules. Password complexity rules are enforced per user flow. It's possible to have one user flow require a four-digit PIN during sign-up while another user flow requires an eight-character string during sign-up. For example, you may use a user flow with different password complexity for adults than for children.
Password complexity is never enforced during sign-in. Users are never prompted during sign-in to change their password because it doesn't meet the current complexity requirement.
-Password complexity can be configured in the following types of user flows:
+You can configure password complexity in the following types of user flows:
- Sign-up or Sign-in user flow - Password Reset user flow
-If you are using custom policies, you can ([configure password complexity in a custom policy](password-complexity.md)).
+If you're using custom policies, you can [configure password complexity in a custom policy](password-complexity.md).
## Configure password complexity 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**..
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **User flows**. 1. Select a user flow, and click **Properties**.
If you are using custom policies, you can ([configure password complexity in a c
| Complexity | Description | | | |
-| Simple | A password that is at least 8 to 64 characters. |
-| Strong | A password that is at least 8 to 64 characters. It requires 3 out of 4 of lowercase, uppercase, numbers, or symbols. |
+| Simple | A password that's *8* to *64* characters long. |
+| Strong | A password that's *8* to *64* characters long. It requires *3* out of *4* of lowercase, uppercase, numbers, or symbols. |
| Custom | This option provides the most control over password complexity rules. It allows configuring a custom length. It also allows accepting number-only passwords (PINs). | ## Custom options
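In a custom policy, length rules like these are expressed with predicate elements. A minimal sketch, assuming the 8-to-64 character range shown above (the predicate Id and help text are illustrative):

```xml
<Predicates>
  <!-- Enforces a password length between 8 and 64 characters -->
  <Predicate Id="LengthRange" Method="IsLengthRange" HelpText="The password must be between 8 and 64 characters.">
    <Parameters>
      <Parameter Id="Minimum">8</Parameter>
      <Parameter Id="Maximum">64</Parameter>
    </Parameters>
  </Predicate>
</Predicates>
```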
Save the policy file.
### Upload the files 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Identity Experience Framework**.
-1. On the Custom Policies page, click **Upload Policy**.
+1. On the Custom Policies page, select **Upload Policy**.
1. Select **Overwrite the policy if it exists**, and then search for and select the *TrustFrameworkExtensions.xml* file.
-1. Click **Upload**.
+1. Select **Upload**.
### Run the policy
-1. Open the sign-up or sign-in policy. For example, *B2C_1A_signup_signin*.
+1. Open the sign-up or sign-in policy, such as *B2C_1A_signup_signin*.
2. For **Application**, select the application that you previously registered. To see the token, the **Reply URL** should show `https://jwt.ms`.
-3. Click **Run now**.
-4. Select **Sign up now**, enter an email address, and enter a new password. Guidance is presented on password restrictions. Finish entering the user information, and then click **Create**. You should see the contents of the token that was returned.
+3. Select **Run now**.
+4. Select **Sign up now**, enter an email address, and enter a new password. Guidance is presented on password restrictions. Finish entering the user information, and then select **Create**. You should see the contents of the token that was returned.
## Next steps
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-profile-attributes.md
Previously updated : 10/11/2021 Last updated : 01/10/2023
# User profile attributes
-Your Azure Active Directory (Azure AD) B2C directory user profile comes with a built-in set of attributes, such as given name, surname, city, postal code, and phone number. You can extend the user profile with your own application data without requiring an external data store.
+Your Azure Active Directory B2C (Azure AD B2C) directory user profile comes with a set of built-in attributes, such as given name, surname, city, postal code, and phone number. You can extend the user profile with your own application data without requiring an external data store.
Most of the attributes that can be used with Azure AD B2C user profiles are also supported by Microsoft Graph. This article describes supported Azure AD B2C user profile attributes. It also notes those attributes that are not supported by Microsoft Graph, as well as Microsoft Graph attributes that should not be used with Azure AD B2C. > [!IMPORTANT]
-> You should not use built-in or extension attributes to store sensitive personal data, such as account credentials, government identification numbers, cardholder data, financial account data, healthcare information, or sensitive background information.
+> You shouldn't use built-in or extension attributes to store sensitive personal data, such as account credentials, government identification numbers, cardholder data, financial account data, healthcare information, or sensitive background information.
You can also integrate with external systems. For example, you can use Azure AD B2C for authentication, but delegate to an external customer relationship management (CRM) or customer loyalty database as the authoritative source of customer data. For more information, see the [remote profile](https://github.com/azure-ad-b2c/samples/tree/master/policies/remote-profile) solution.
The table below lists the [user resource type](/graph/api/resources/user) attrib
- Attribute name used by Azure AD B2C (followed by the Microsoft Graph name in parentheses, if different) - Attribute data type - Attribute description-- If the attribute is available in the Azure portal-- If the attribute can be used in a user flow-- If the attribute can be used in a custom policy [Azure AD technical profile](active-directory-technical-profile.md) and in which section (&lt;InputClaims&gt;, &lt;OutputClaims&gt;, or &lt;PersistedClaims&gt;)
+- Whether the attribute is available in the Azure portal
+- Whether the attribute can be used in a user flow
+- Whether the attribute can be used in a custom policy [Azure AD technical profile](active-directory-technical-profile.md) and in which section (&lt;InputClaims&gt;, &lt;OutputClaims&gt;, or &lt;PersistedClaims&gt;)
|Name |Type |Description|Azure portal|User flows|Custom policy| |||-||-|-|
The table below lists the [user resource type](/graph/api/resources/user) attrib
|userPrincipalName |String|The user principal name (UPN) of the user. The UPN is an Internet-style login name for the user based on the Internet standard RFC 822. The domain must be present in the tenant's collection of verified domains. This property is required when an account is created. Immutable.|No|No|Input, Persisted, Output| |usageLocation |String|Required for users that will be assigned licenses due to legal requirement to check for availability of services in countries/regions. Not nullable. A two letter country/region code (ISO standard 3166). Examples: "US", "JP", and "GB".|Yes|No|Persisted, Output| |userType |String|A string value that can be used to classify user types in your directory. Value must be Member. Read-only.|Read only|No|Persisted, Output|
-|userState (externalUserState)<sup>3</sup>|String|For Azure AD B2B account only, indicates whether the invitation is PendingAcceptance or Accepted.|No|No|Persisted, Output|
+|userState (externalUserState)<sup>3</sup>|String|For Azure AD B2B accounts only; indicates whether the invitation is PendingAcceptance or Accepted.|No|No|Persisted, Output|
|userStateChangedOn (externalUserStateChangeDateTime)<sup>2</sup>|DateTime|Shows the timestamp for the latest change to the UserState property.|No|No|Persisted, Output|
-<sup>1 </sup>Not supported by Microsoft Graph<br><sup>2 </sup>For more information, see [MFA phone number attribute](#mfa-phone-number-attribute)<br><sup>3 </sup>Should not be used with Azure AD B2C
+<sup>1 </sup>Not supported by Microsoft Graph<br><sup>2 </sup>For more information, see [MFA phone number attribute](#mfa-phone-number-attribute)<br><sup>3 </sup>Shouldn't be used with Azure AD B2C
## Required attributes
To create a user account in the Azure AD B2C directory, provide the following re
## Display name attribute
-The `displayName` is the name to display in Azure portal user management for the user, and in the access token Azure AD B2C returns to the application. This property is required.
+The `displayName` is the name to display in Azure portal user management for the user, and in the access token that Azure AD B2C returns to the application. This property is required.
## Identities attribute
In the Microsoft Graph API, both local and federated identities are stored in th
|issuer|string|Specifies the issuer of the identity. For local accounts (where **signInType** is not `federated`), this property is the local B2C tenant default domain name, for example `contoso.onmicrosoft.com`. For social identity (where **signInType** is `federated`) the value is the name of the issuer, for example `facebook.com`| |issuerAssignedId|string|Specifies the unique identifier assigned to the user by the issuer. The combination of **issuer** and **issuerAssignedId** must be unique within your tenant. For local account, when **signInType** is set to `emailAddress` or `userName`, it represents the sign-in name for the user.<br>When **signInType** is set to: <ul><li>`emailAddress` (or starts with `emailAddress` like `emailAddress1`) **issuerAssignedId** must be a valid email address</li><li>`userName` (or any other value), **issuerAssignedId** must be a valid [local part of an email address](https://tools.ietf.org/html/rfc3696#section-3)</li><li>`federated`, **issuerAssignedId** represents the federated account unique identifier</li></ul>|
-The following **Identities** attribute, with a local account identity with a sign-in name, an email address as sign-in, and with a social identity.
+The following JSON snippet shows the **Identities** attribute with a local account identity that uses a sign-in name, a local account identity that uses an email address for sign-in, and a social identity.
```json "identities": [
For federated identities, depending on the identity provider, the **issuerAssign
## Password profile property
-For a local identity, the **passwordProfile** attribute is required, and contains the user's password. The `forceChangePasswordNextSignIn` attribute indicates whether a user must reset the password at the next sign-in. To handle a forced password reset, [set up forced password reset flow](force-password-reset.md).
+For a local identity, the **passwordProfile** attribute is required, and it contains the user's password. The `forceChangePasswordNextSignIn` attribute indicates whether a user must reset the password at the next sign-in. To handle a forced password reset, use the instructions in [Set up forced password reset flow](force-password-reset.md).
For a federated (social) identity, the **passwordProfile** attribute is not required.
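As a sketch, the **passwordProfile** fragment of a Microsoft Graph user object might look like the following (the password value is a placeholder):

```json
"passwordProfile": {
  "password": "<replace-with-a-strong-password>",
  "forceChangePasswordNextSignIn": false
}
```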
In Azure AD B2C [custom policies](custom-policy-overview.md), the phone number i
Every customer-facing application has unique requirements for the information to be collected. Your Azure AD B2C tenant comes with a built-in set of information stored in properties, such as Given Name, Surname, and Postal Code. With Azure AD B2C, you can extend the set of properties stored in each customer account. For more information, see [Add user attributes and customize user input in Azure Active Directory B2C](configure-user-input.md)
-Extension attributes [extend the schema](/graph/extensibility-overview#schema-extensions) of the user objects in the directory. The extension attributes can only be registered on an application object, even though they might contain data for a user. The extension attribute is attached to the application called `b2c-extensions-app`. Do not modify this application, as it's used by Azure AD B2C for storing user data. You can find this application under Azure Active Directory App registrations.
+Extension attributes [extend the schema](/graph/extensibility-overview#schema-extensions) of the user objects in the directory. The extension attributes can only be registered on an application object, even though they might contain data for a user. The extension attribute is attached to the application called `b2c-extensions-app`. Don't modify this application, as it's used by Azure AD B2C for storing user data. You can find this application under Azure Active Directory App registrations. [Learn more about the Azure AD B2C `b2c-extensions-app`](extensions-app.md).
> [!NOTE]
-> - Up to 100 extension attributes can be written to any user account.
+> - You can write up to 100 extension attributes to any user account.
> - If the b2c-extensions-app application is deleted, those extension attributes are removed from all users along with any data they contain. > - If an extension attribute is deleted by the application, it's removed from all user accounts and the values are deleted.
-Extension attributes in the Graph API are named by using the convention `extension_ApplicationClientID_AttributeName`, where the `ApplicationClientID` is the **Application (client) ID** of the `b2c-extensions-app` application (found in **App registrations** > **All Applications** in the Azure portal). Note that the **Application (client) ID** as it's represented in the extension attribute name includes no hyphens. For example:
+Extension attributes in the Graph API are named by using the convention `extension_ApplicationClientID_AttributeName`, where:
+
+- The `ApplicationClientID` is the **Application (client) ID** of the `b2c-extensions-app` application. [Learn how to find the extensions app](extensions-app.md#verifying-that-the-extensions-app-is-present).
+- The `AttributeName` is the name of the extension attribute.
+
+Note that the **Application (client) ID** as it's represented in the extension attribute name includes no hyphens. For example:
```json
-"extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyNumber": "212342"
+ "extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyNumber": "212342"
``` The following data types are supported when defining an attribute in a schema extension:
active-directory License Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/license-users-groups.md
There are several license plans available for the Azure AD service, including:
For specific information about each license plan and the associated licensing details, see [What license do I need?](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). To sign up for Azure AD premium license plans, see [here](./active-directory-get-started-premium.md).
-Not all Microsoft services are available in all locations. Before a license can be assigned to a group, you must specify the **Usage location** for all members. You can set this value in the **Azure Active Directory &gt; Users &gt; Profile &gt; Settings** area in Azure AD. When assigning licenses to a group or bulk updates such as disabling the synchronization status for the organization, any user whose usage location isn't specified inherits the location of the Azure AD organization.
+Not all Microsoft services are available in all locations. Before a license can be assigned to a group, you must specify the **Usage location** for all members. You can set this value in the **Azure Active Directory &gt; Users &gt;** select a user **&gt; Properties &gt; Settings** area in Azure AD. When assigning licenses to a group or bulk updates such as disabling the synchronization status for the organization, any user whose usage location isn't specified inherits the location of the Azure AD organization.
## View license plans and plan details
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
You can restrict default permissions for member users in the following ways:
| **Create security groups** | Setting this option to **No** prevents users from creating security groups. Global administrators and user administrators can still create security groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). | | **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). | | **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It doesn't restrict access to Azure AD data using PowerShell, Microsoft Graph API, or other clients such as Visual Studio. <br>It doesn't restrict access as long as a user is assigned a custom role (or any role). </p><p></p><p>**When should I use this switch?** <br>Use this option to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Don't use this switch as a security measure. Instead, create a Conditional Access policy that targets Microsoft Azure Management to block non-administrator access to [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management). </p><p></p><p> **How do I grant only specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management will target access to all Azure management. |
-| **Restrict non-admin users from creating tenants** | Users can create tenants in the Azure AD and Entra administration portal under Manage tenant. The creation of a tenant is recorded in the Audit log as category DirectoryManagement and activity Create Company. Anyone who creates a tenant will become the Global Administrator of that tenant. The newly created tenant does not inherit any settings or configurations. </p><p></p><p>**What does this switch do?** <br> Setting this option to **Yes** restricts creation of Azure AD tenants to the Global Administrator or tenant creator roles. Setting this option to **No** allows non-admin users to create Azure AD tenants. Tenant create will continue to be recorded in the Audit log. </p><p></p><p>**How do I grant only a specific non-administrator users the ability to create new tenants?** <br> Set this option to No, then assign them the tenant creator role.|
+| **Restrict non-admin users from creating tenants** | Users can create tenants in the Azure AD and Entra administration portal under Manage tenant. The creation of a tenant is recorded in the Audit log as category DirectoryManagement and activity Create Company. Anyone who creates a tenant will become the Global Administrator of that tenant. The newly created tenant does not inherit any settings or configurations. </p><p></p><p>**What does this switch do?** <br> Setting this option to **Yes** restricts creation of Azure AD tenants to the Global Administrator or tenant creator roles. Setting this option to **No** allows non-admin users to create Azure AD tenants. Tenant creation will continue to be recorded in the Audit log. </p><p></p><p>**How do I grant only specific non-administrator users the ability to create new tenants?** <br> Set this option to Yes, then assign them the tenant creator role.|
| **Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag doesn't prevent reading user information in other Microsoft services like Exchange Online.</p><p>This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`. | ## Restrict guest users' default permissions
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For listing your application in the Azure AD app gallery, please read the detail
### ADAL End of Support Announcement -- **Type:** N/A **Service category:** Other **Product capability:** Developer Experience
-As part of our ongoing initiative to improve the developer experience, service reliability, and security of customer applications, we will end support for the Azure Active Directory Authentication Library (ADAL). The final deadline to migrate your applications to Microsoft Authentication Library (MSAL) has been extended to **June 1, 2023**.
+As part of our ongoing initiative to improve the developer experience, service reliability, and security of customer applications, we will end support for the Azure Active Directory Authentication Library (ADAL). The final deadline to migrate your applications to Microsoft Authentication Library (MSAL) has been extended to **June 30, 2023**.
+
+### Why are we doing this?
-### Why are we doing this?
As we consolidate and evolve the Microsoft Identity platform, we are also investing in making significant improvements to the developer experience and service features that make it possible to build secure, robust, and resilient applications. To make these features available to our customers, we needed to update the architecture of our software development kits. As a result of this change, we've decided that the path forward requires us to sunset ADAL so that we can focus on developer experience investments with MSAL.
-### What happens?
-We recognize that changing libraries is not an easy task, and cannot be accomplished quickly. We are committed to helping customers plan their migrations to MSAL as well as execute them with minimal disruption.
+### What happens?
+
+We recognize that changing libraries is not an easy task, and cannot be accomplished quickly. We are committed to helping customers plan their migrations to MSAL as well as execute them with minimal disruption.
+ - In June 2020 we [announced the 2-year end of support timeline for ADAL](https://devblogs.microsoft.com/microsoft365dev/end-of-support-timelines-for-azure-ad-authentication-library-adal-and-azure-ad-graph/). - In December 2022 we decided to extend the ADAL end of support to June 2023. - Through the next six months (January 2023 to June 2023) we will continue informing customers about the upcoming end of support along with providing guidance on migration. - In June 2023 we will officially sunset ADAL, removing library documentation and archiving all GitHub repositories related to the project.
-### How to find out which applications in my tenant are using ADAL?
+### How to find out which applications in my tenant are using ADAL?
Refer to our post on [Microsoft Q&A](/answers/questions/360928/information-how-to-find-apps-using-adal-in-your-te.html) for details on identifying ADAL apps with the help of [Azure Workbooks](/azure/azure-monitor/visualize/workbooks-overview).
-### If I'm using ADAL, what can I expect after the deadline?
+### If I'm using ADAL, what can I expect after the deadline?
+ - There will be no new releases (security or otherwise) to the library after June 2023. -- We will not be accepting any incident reports or support requests for ADAL. ADAL to MSAL Migration support would continue. -- The underpinning services will continue working and applications that depend on ADAL should continue working; however, applications will be at increased security and reliability risk due to not having the latest updates, service configuration, and enhancements made available through the Microsoft Identity platform.
+- We will not be accepting any incident reports or support requests for ADAL. ADAL to MSAL migration support will continue.
+- The underpinning services will continue working and applications that depend on ADAL should continue working; however, applications and the resources they access will be at increased security and reliability risk due to not having the latest updates, service configuration, and enhancements made available through the Microsoft Identity platform.
### What features can I only access with MSAL?
active-directory Protect Against Consent Phishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/protect-against-consent-phishing.md
Administrators should be in control of application use by providing the right in
- Block [consent phishing emails with Microsoft Defender for Office 365](/microsoft-365/security/office-365-security/set-up-anti-phishing-policies#impersonation-settings-in-anti-phishing-policies-in-microsoft-defender-for-office-365) by protecting against phishing campaigns where an attacker is impersonating a known user in the organization. - Configure Microsoft Defender for Cloud Apps policies to help manage abnormal application activity in the organization. For example, [activity policies](/cloud-app-security/user-activity-policies), [anomaly detection](/cloud-app-security/anomaly-detection-policy), and [OAuth app policies](/cloud-app-security/app-permission-policy). - Investigate and hunt for consent phishing attacks by following the guidance on [advanced hunting with Microsoft 365 Defender](/microsoft-365/security/defender/advanced-hunting-overview).-- Allow access to trusted applications that meet certain criteria and that protect against those applications that don't:
+- Allow access to trusted applications that meet certain criteria and protect against those applications that don't:
- [Configure user consent settings](./configure-user-consent.md?tabs=azure-portal) to allow users to only consent to applications that meet certain criteria, such as applications developed by your organization or from verified publishers and only for low risk permissions you select. - Use applications that have been publisher verified. [Publisher verification](../develop/publisher-verification-overview.md) helps administrators and users understand the authenticity of application developers through a Microsoft supported vetting process. Even if an application does have a verified publisher, it is still important to review the consent prompt to understand and evaluate the request. For example, reviewing the permissions being requested to ensure they align with the scenario the app is requesting them to enable, additional app and publisher details on the consent prompt, etc. - Create proactive [application governance](/microsoft-365/compliance/app-governance-manage-app-governance) policies to monitor third-party application behavior on the Microsoft 365 platform to address common suspicious application behaviors.
Administrators should be in control of application use by providing the right in
- [Application consent grant investigation](/security/compass/incident-response-playbook-app-consent) - [Managing access to applications](./what-is-access-management.md) - [Restrict user consent operations in Azure AD](../../security/fundamentals/steps-secure-identity.md#restrict-user-consent-operations)
+- [Compromised and malicious applications investigation](/security/compass/incident-response-playbook-compromised-malicious-app)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
- [Grant consent on behalf of a single user by using PowerShell](grant-consent-single-user.md) - [Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-aad-password-less-vpn.md) - [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md) - [Deploy F5 BIG-IP Virtual Edition VM in Azure](f5-bigip-deployment-guide.md) - [End-user experiences for applications](end-user-experiences.md) - [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta-to-azure-active-directory.md)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
There are two types of managed identities:
- A service principal of a special type is created in Azure AD for the identity. The service principal is tied to the lifecycle of that Azure resource. When the Azure resource is deleted, Azure automatically deletes the service principal for you. - By design, only that Azure resource can use this identity to request tokens from Azure AD. - You authorize the managed identity to have access to one or more services.
+ - The name of the system-assigned service principal is always the same as the name of the Azure resource it's created for. For a deployment slot, the name of its system-assigned identity is `<app-name>/slots/<slot-name>`.
- **User-assigned**. You may also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](how-to-manage-ua-identity-portal.md) and assign it to one or more Azure Resources. When you enable a user-assigned managed identity: - A service principal of a special type is created in Azure AD for the identity. The service principal is managed separately from the resources that use it.
active-directory Exterro Legal Grc Software Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/exterro-legal-grc-software-platform-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Exterro Legal GRC Software Platform
+description: Learn how to configure single sign-on between Azure Active Directory and Exterro Legal GRC Software Platform.
++++++++ Last updated : 01/04/2023++++
+# Azure Active Directory SSO integration with Exterro Legal GRC Software Platform
+
+In this article, you'll learn how to integrate Exterro Legal GRC Software Platform with Azure Active Directory (Azure AD). The Exterro Platform unifies all of Exterro's E-Discovery and Information Governance solutions, giving you the ability to easily add new Exterro applications as your business needs expand. When you integrate Exterro Legal GRC Software Platform with Azure AD, you can:
+
+* Control in Azure AD who has access to Exterro Legal GRC Software Platform.
+* Enable your users to be automatically signed-in to Exterro Legal GRC Software Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Exterro Legal GRC Software Platform in a test environment. Exterro Legal GRC Software Platform supports both **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Exterro Legal GRC Software Platform, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Exterro Legal GRC Software Platform single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Exterro Legal GRC Software Platform application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Exterro Legal GRC Software Platform from the Azure AD gallery
+
+Add Exterro Legal GRC Software Platform from the Azure AD application gallery to configure single sign-on with Exterro Legal GRC Software Platform. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Exterro Legal GRC Software Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<tenant_id>.exterro.net/exterrosso`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<tenant_id>.exterro.net/exterrosso/saml/SSO`
+
+1. If you want to configure **SP** initiated SSO, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using one of the following patterns:
+
+ | **Sign on URL** |
+ |-|
+ | `https://<tenant_id>.exterro.net/exterrosso/saml/` |
+ | `https://<tenant_id>.<domain>` |
+
+ > [!Note]
+ > These values aren't real. Update these values with the actual Identifier, Reply URL, and Sign on URL. Contact [Exterro Legal GRC Software Platform Client support team](mailto:support@exterro.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Exterro Legal GRC Software Platform** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Exterro Legal GRC Software Platform SSO
+
+To configure single sign-on on the **Exterro Legal GRC Software Platform** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Exterro Legal GRC Software Platform support team](mailto:support@exterro.com). The support team configures this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Exterro Legal GRC Software Platform test user
+
+In this section, you create a user called Britta Simon in Exterro Legal GRC Software Platform. Work with the [Exterro Legal GRC Software Platform support team](mailto:support@exterro.com) to add the users in Exterro Legal GRC Software Platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects to the Exterro Legal GRC Software Platform Sign-on URL, where you can initiate the login flow.
+
+* Go to Exterro Legal GRC Software Platform Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Exterro Legal GRC Software Platform for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Exterro Legal GRC Software Platform tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Exterro Legal GRC Software Platform for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Exterro Legal GRC Software Platform you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Lusha Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lusha-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Lusha
+description: Learn how to configure single sign-on between Azure Active Directory and Lusha.
++++++++ Last updated : 01/04/2023++++
+# Azure Active Directory SSO integration with Lusha
+
+In this article, you'll learn how to integrate Lusha with Azure Active Directory (Azure AD). Lusha is a sales intelligence solution that delivers instant and accurate contact and company data to help leading sales, marketing, and recruitment teams speed up sales with less work. When you integrate Lusha with Azure AD, you can:
+
+* Control in Azure AD who has access to Lusha.
+* Enable your users to be automatically signed-in to Lusha with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Lusha in a test environment. Lusha supports both **SP** and **IDP** initiated single sign-on and also supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Lusha, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Lusha single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Lusha application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Lusha from the Azure AD gallery
+
+Add Lusha from the Azure AD application gallery to configure single sign-on with Lusha. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Lusha** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. If you want to configure **SP** initiated SSO, then perform the following step:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://auth.lusha.com/sso-login`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure Lusha SSO
+
+To configure single sign-on on the **Lusha** side, you need to send the **App Federation Metadata Url** to the [Lusha support team](mailto:support@lusha.com). The support team configures this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Lusha test user
+
+In this section, a user called B.Simon is created in Lusha. Lusha supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Lusha, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects to the Lusha Sign-on URL, where you can initiate the login flow.
+
+* Go to Lusha Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Lusha for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Lusha tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Lusha for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Lusha you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Mint Tms Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mint-tms-tutorial.md
+
+ Title: Azure Active Directory SSO integration with MINT TMS
+description: Learn how to configure single sign-on between Azure Active Directory and MINT TMS.
+Last updated : 01/04/2023
+# Azure Active Directory SSO integration with MINT TMS
+
+In this article, you'll learn how to integrate MINT TMS with Azure Active Directory (Azure AD). MINT TMS is a Training, Resource, and Qualification Management System used to plan, optimize, and measure training, career progress, and the training records of employees. When you integrate MINT TMS with Azure AD, you can:
+
+* Control in Azure AD who has access to MINT TMS.
+* Enable your users to be automatically signed-in to MINT TMS with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for MINT TMS in a test environment. MINT TMS supports **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with MINT TMS, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* MINT TMS single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the MINT TMS application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add MINT TMS from the Azure AD gallery
+
+Add MINT TMS from the Azure AD application gallery to configure single sign-on with MINT TMS. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
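+
+As an optional scripted alternative (a sketch, not part of the official steps), you can instantiate a gallery application with the Microsoft Graph `applicationTemplates` API through `az rest`; `<template-id>` is a placeholder you look up first, and the display-name filter is an assumption:
+
+```azurecli-interactive
+# Look up the gallery template ID (URL-encoded OData filter; display name is an assumption).
+az rest --method get \
+  --url "https://graph.microsoft.com/v1.0/applicationTemplates?\$filter=displayName%20eq%20'MINT%20TMS'"
+
+# Instantiate the gallery app in your tenant; replace <template-id> with the ID returned above.
+az rest --method post \
+  --url "https://graph.microsoft.com/v1.0/applicationTemplates/<template-id>/instantiate" \
+  --body '{"displayName": "MINT TMS"}'
+```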
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **MINT TMS** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `<environment-name>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<environment-name>.mint-online.com/`
+
+    > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [MINT TMS Client support team](mailto:support@media-interactive.de) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
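+
+    If you want to inspect the downloaded metadata from a command line, a sketch like the following extracts the base64-encoded signing certificate. It assumes `xmllint` is installed and the file is named `FederationMetadata.xml`:
+
+    ```console
+    # Extract the base64-encoded signing certificate from the downloaded metadata (sketch).
+    xmllint --xpath "//*[local-name()='X509Certificate']/text()" FederationMetadata.xml > signing-cert.b64
+    ```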
+
+1. On the **Set up MINT TMS** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure MINT TMS SSO
+
+To configure single sign-on on the **MINT TMS** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [MINT TMS support team](mailto:support@media-interactive.de). The support team uses them to configure the SAML SSO connection properly on both sides.
+
+### Create MINT TMS test user
+
+In this section, you create a user called Britta Simon at MINT TMS. Work with the [MINT TMS support team](mailto:support@media-interactive.de) to add the users in the MINT TMS platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the MINT TMS instance for which you set up SSO.
+
+* You can use Microsoft My Apps. When you click the MINT TMS tile in My Apps, you should be automatically signed in to the MINT TMS instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure MINT TMS, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Mysdworxcom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mysdworxcom-tutorial.md
+
+ Title: Azure Active Directory SSO integration with my.sdworx.com
+description: Learn how to configure single sign-on between Azure Active Directory and my.sdworx.com.
+Last updated : 01/04/2023
+# Azure Active Directory SSO integration with my.sdworx.com
+
+In this article, you'll learn how to integrate my.sdworx.com with Azure Active Directory (Azure AD). my.sdworx.com is an SD Worx portal. When you integrate my.sdworx.com with Azure AD, you can:
+
+* Control in Azure AD who has access to my.sdworx.com.
+* Enable your users to be automatically signed-in to my.sdworx.com with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for my.sdworx.com in a test environment. my.sdworx.com supports **IDP** initiated single sign-on.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with my.sdworx.com, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* my.sdworx.com single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the my.sdworx.com application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add my.sdworx.com from the Azure AD gallery
+
+Add my.sdworx.com from the Azure AD application gallery to configure single sign-on with my.sdworx.com. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
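+
+If you prefer to script the test user creation and assignment, a sketch like the following uses the Azure CLI and Microsoft Graph. The UPN, password, and service principal object ID are placeholders, and the all-zeros `appRoleId` assumes the app's default access role:
+
+```azurecli-interactive
+# Create the test user (placeholder UPN and password).
+USER_ID=$(az ad user create \
+  --display-name "B.Simon" \
+  --user-principal-name "b.simon@contoso.com" \
+  --password "<secure-password>" \
+  --query id -o tsv)
+
+# Assign the user to the enterprise application through Microsoft Graph.
+# <sp-object-id> is the service principal object ID of the gallery app (placeholder).
+az rest --method post \
+  --url "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>/appRoleAssignedTo" \
+  --body "{\"principalId\": \"$USER_ID\", \"resourceId\": \"<sp-object-id>\", \"appRoleId\": \"00000000-0000-0000-0000-000000000000\"}"
+```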
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **my.sdworx.com** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the **Copy** button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure my.sdworx.com SSO
+
+To configure single sign-on on the **my.sdworx.com** side, you need to send the **App Federation Metadata Url** to the [my.sdworx.com support team](mailto:support@sdworx.com). The support team uses it to configure the SAML SSO connection properly on both sides.
+
+### Create my.sdworx.com test user
+
+In this section, you create a user called Britta Simon at my.sdworx.com. Work with the [my.sdworx.com support team](mailto:support@sdworx.com) to add the users in the my.sdworx.com platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the my.sdworx.com instance for which you set up SSO.
+
+* You can use Microsoft My Apps. When you click the my.sdworx.com tile in My Apps, you should be automatically signed in to the my.sdworx.com instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure my.sdworx.com, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Pinpoint Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pinpoint-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Pinpoint (SAML)
+description: Learn how to configure single sign-on between Azure Active Directory and Pinpoint (SAML).
+Last updated : 01/04/2023
+# Azure Active Directory SSO integration with Pinpoint (SAML)
+
+In this article, you'll learn how to integrate Pinpoint (SAML) with Azure Active Directory (Azure AD). DDI's Pinpoint platform makes it easy to design, deliver, and track blended learning journeys for leaders. Pinpoint is seamlessly integrated into DDI's award-winning leadership development solutions. When you integrate Pinpoint (SAML) with Azure AD, you can:
+
+* Control in Azure AD who has access to Pinpoint (SAML).
+* Enable your users to be automatically signed-in to Pinpoint (SAML) with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Pinpoint (SAML) in a test environment. Pinpoint (SAML) supports only **SP** initiated single sign-on.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Pinpoint (SAML), you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Pinpoint (SAML) single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Pinpoint (SAML) application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Pinpoint (SAML) from the Azure AD gallery
+
+Add Pinpoint (SAML) from the Azure AD application gallery to configure single sign-on with Pinpoint (SAML). For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Pinpoint (SAML)** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://login.ddiworld.com`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://login.ddiworld.com/SAML/sp/profile/post/acs`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://pinpoint.ddiworld.com/<CustomerName>`
+
+    > [!NOTE]
+ > This value is not real. Update this value with the actual Sign on URL. Contact [Pinpoint (SAML) Client support team](mailto:ssosupport@ddiworld.com) to get the value. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Pinpoint (SAML)** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Pinpoint (SAML) SSO
+
+To configure single sign-on on the **Pinpoint (SAML)** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Pinpoint (SAML) support team](mailto:ssosupport@ddiworld.com). The support team uses them to configure the SAML SSO connection properly on both sides.
+
+### Create Pinpoint (SAML) test user
+
+In this section, you create a user called Britta Simon at Pinpoint (SAML). Work with the [Pinpoint (SAML) support team](mailto:ssosupport@ddiworld.com) to add the users in the Pinpoint (SAML) platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects you to the Pinpoint (SAML) Sign-on URL, where you can initiate the login flow.
+
+* Go to the Pinpoint (SAML) Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Pinpoint (SAML) tile in My Apps, you're redirected to the Pinpoint (SAML) Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Pinpoint (SAML), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-rbac.md
description: Learn how to use Azure Active Directory group membership to restrict access to cluster resources using Kubernetes role-based access control (Kubernetes RBAC) in Azure Kubernetes Service (AKS) Previously updated : 12/07/2022 Last updated : 01/10/2023 # Control access to cluster resources using Kubernetes role-based access control and Azure Active Directory identities in Azure Kubernetes Service
-Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AD) for user authentication. In this configuration, you sign in to an AKS cluster using an Azure AD authentication token. Once authenticated, you can use the built-in Kubernetes role-based access control (Kubernetes RBAC) to manage access to namespaces and cluster resources based on a user's identity or group membership.
+Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (Azure AD) for user authentication. In this configuration, you sign in to an AKS cluster using an Azure AD authentication token. Once authenticated, you can use the built-in Kubernetes role-based access control (Kubernetes RBAC) to manage access to namespaces and cluster resources based on a user's identity or group membership.
-This article shows you how to control access using Kubernetes RBAC in an AKS cluster based on Azure AD group membership. Example groups and users are created in Azure AD, then Roles and RoleBindings are created in the AKS cluster to grant the appropriate permissions to create and view resources.
+This article shows you how to:
-## Before you begin
-
-This article assumes that you have an existing AKS cluster enabled with Azure AD integration. If you need an AKS cluster, see [Integrate Azure Active Directory with AKS][azure-ad-aks-cli].
-
-Kubernetes RBAC is enabled by default during AKS cluster creation. If Kubernetes RBAC wasn't enabled when you originally deployed your cluster, you'll need to delete and recreate your cluster.
+* Control access using Kubernetes RBAC in an AKS cluster based on Azure AD group membership.
+* Create example groups and users in Azure AD.
+* Create Roles and RoleBindings in an AKS cluster to grant the appropriate permissions to create and view resources.
-Consider the following basic requirements before continuing:
+## Before you begin
-- The Azure CLI version 2.0.61 or later is installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].-- If using Terraform, install [Terraform][terraform-on-azure] version 2.99.0 or later.
+* This article assumes that you have an existing AKS cluster enabled with Azure AD integration. If you need an AKS cluster, see [Integrate Azure AD with AKS][azure-ad-aks-cli].
+* Kubernetes RBAC is enabled by default during AKS cluster creation. If Kubernetes RBAC wasn't enabled when you originally deployed your cluster, you'll need to delete and recreate your cluster.
+* Make sure that Azure CLI version 2.0.61 or later is installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* If using Terraform, install [Terraform][terraform-on-azure] version 2.99.0 or later.
-To verify if Kubernetes RBAC is enabled, you can check from Azure portal or Azure CLI.
+Use the Azure portal or Azure CLI to verify if Kubernetes RBAC is enabled.
#### [Azure portal](#tab/portal)
-From your browser, sign in to the [Azure portal](https://portal.azure.com).
+Verify Kubernetes RBAC is enabled using the Azure portal:
-Navigate to Kubernetes services, and from the left-hand pane select **Cluster configuration**. On the page, under the section **Authentication and Authorization**, verify the option **Local accounts with Kubernetes RBAC** is shown.
+* From your browser, sign in to the [Azure portal](https://portal.azure.com).
+* Navigate to Kubernetes services, and from the left-hand pane select **Cluster configuration**.
+* Under the **Authentication and Authorization** section, check to see if the **Local accounts with Kubernetes RBAC** or the **Azure AD authentication with Kubernetes RBAC** option is shown.
:::image type="content" source="./media/azure-ad-rbac/rbac-portal.png" alt-text="Example of Authentication and Authorization page in Azure portal." lightbox="./media/azure-ad-rbac/rbac-portal.png"::: #### [Azure CLI](#tab/azure-cli)
-To verify RBAC is enabled, you can use the `az aks show` command.
+Verify Kubernetes RBAC is enabled using Azure CLI, with the `az aks show` command:
-```azuecli
-az aks show --resource-group myResourceGroup --name myAKSCluster`
+```azurecli
+az aks show --resource-group myResourceGroup --name myAKSCluster
```
-The output will show that the value for `enableRbac` is `true`.
+If it's enabled, the output will show the value for `enableRbac` is `true`.
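+
+To check only that flag, you can query it directly; this one-liner is a convenience sketch, not a required step:
+
+```azurecli
+# Return only the RBAC flag; prints "true" when Kubernetes RBAC is enabled.
+az aks show --resource-group myResourceGroup --name myAKSCluster --query enableRbac -o tsv
+```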
## Create demo groups in Azure AD
-In this article, let's create two user roles that can be used to show how Kubernetes RBAC and Azure AD control access to cluster resources. The following two example roles are used:
+In this article, we'll create two user roles to show how Kubernetes RBAC and Azure AD control access to cluster resources. The following two example roles are used:
* **Application developer**
- * A user named *aksdev* that is part of the *appdev* group.
+ * A user named *aksdev* that's part of the *appdev* group.
* **Site reliability engineer**
- * A user named *akssre* that is part of the *opssre* group.
+ * A user named *akssre* that's part of the *opssre* group.
In production environments, you can use existing users and groups within an Azure AD tenant.
-First, get the resource ID of your AKS cluster using the [az aks show][az-aks-show] command. Assign the resource ID to a variable named *AKS_ID* so that it can be referenced in additional commands.
+1. First, get the resource ID of your AKS cluster using the [`az aks show`][az-aks-show] command. Then, assign the resource ID to a variable named *AKS_ID* so it can be referenced in other commands.
-```azurecli-interactive
-AKS_ID=$(az aks show \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --query id -o tsv)
-```
+ ```azurecli-interactive
+ AKS_ID=$(az aks show \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --query id -o tsv)
+ ```
-Create the first example group in Azure AD for the application developers using the [az ad group create][az-ad-group-create] command. The following example creates a group named *appdev*:
+2. Create the first example group in Azure AD for the application developers using the [`az ad group create`][az-ad-group-create] command. The following example creates a group named *appdev*:
-```azurecli-interactive
-APPDEV_ID=$(az ad group create --display-name appdev --mail-nickname appdev --query Id -o tsv)
-```
+ ```azurecli-interactive
+ APPDEV_ID=$(az ad group create --display-name appdev --mail-nickname appdev --query Id -o tsv)
+ ```
-Now, create an Azure role assignment for the *appdev* group using the [az role assignment create][az-role-assignment-create] command. This assignment lets any member of the group use `kubectl` to interact with an AKS cluster by granting them the *Azure Kubernetes Service Cluster User Role*.
+3. Create an Azure role assignment for the *appdev* group using the [`az role assignment create`][az-role-assignment-create] command. This assignment lets any member of the group use `kubectl` to interact with an AKS cluster by granting them the *Azure Kubernetes Service Cluster User Role*.
-```azurecli-interactive
-az role assignment create \
- --assignee $APPDEV_ID \
- --role "Azure Kubernetes Service Cluster User Role" \
- --scope $AKS_ID
-```
+ ```azurecli-interactive
+ az role assignment create \
+ --assignee $APPDEV_ID \
+ --role "Azure Kubernetes Service Cluster User Role" \
+ --scope $AKS_ID
+ ```
> [!TIP]
-> If you receive an error such as `Principal 35bfec9328bd4d8d9b54dea6dac57b82 does not exist in the directory a5443dcd-cd0e-494d-a387-3039b419f0d5.`, wait a few seconds for the Azure AD group object ID to propagate through the directory then try the `az role assignment create` command again.
+> If you receive an error such as `Principal 35bfec9328bd4d8d9b54dea6dac57b82 does not exist in the directory a5443dcd-cd0e-494d-a387-3039b419f0d5.`, wait a few seconds for the Azure AD group object ID to propagate through the directory, then try the `az role assignment create` command again.
-Create a second example group, this one for SREs named *opssre*:
+4. Create a second example group for SREs named *opssre*.
-```azurecli-interactive
-OPSSRE_ID=$(az ad group create --display-name opssre --mail-nickname opssre --query objectId -o tsv)
-```
+ ```azurecli-interactive
+ OPSSRE_ID=$(az ad group create --display-name opssre --mail-nickname opssre --query objectId -o tsv)
+ ```
-Again, create an Azure role assignment to grant members of the group the *Azure Kubernetes Service Cluster User Role*:
+5. Create an Azure role assignment to grant members of the group the *Azure Kubernetes Service Cluster User Role*.
-```azurecli-interactive
-az role assignment create \
- --assignee $OPSSRE_ID \
- --role "Azure Kubernetes Service Cluster User Role" \
- --scope $AKS_ID
-```
+ ```azurecli-interactive
+ az role assignment create \
+ --assignee $OPSSRE_ID \
+ --role "Azure Kubernetes Service Cluster User Role" \
+ --scope $AKS_ID
+ ```
## Create demo users in Azure AD
-With two example groups created in Azure AD for our application developers and SREs, now lets create two example users. To test the Kubernetes RBAC integration at the end of the article, you sign in to the AKS cluster with these accounts.
+Now that we have two example groups created in Azure AD for our application developers and SREs, we'll create two example users. To test the Kubernetes RBAC integration at the end of the article, you'll sign in to the AKS cluster with these accounts.
+
+### Set the user principal name and password for application developers
-Set the user principal name (UPN) and password for the application developers. The following command prompts you for the UPN and sets it to *AAD_DEV_UPN* for use in a later command (remember that the commands in this article are entered into a BASH shell). The UPN must include the verified domain name of your tenant, for example `aksdev@contoso.com`.
+Set the user principal name (UPN) and password for the application developers. The UPN must include the verified domain name of your tenant, for example `aksdev@contoso.com`.
+
+The following command prompts you for the UPN and sets it to *AAD_DEV_UPN* so it can be used in a later command:
```azurecli-interactive echo "Please enter the UPN for application developers: " && read AAD_DEV_UPN ```
-The following command prompts you for the password and sets it to *AAD_DEV_PW* for use in a later command.
+The following command prompts you for the password and sets it to *AAD_DEV_PW* for use in a later command:
```azurecli-interactive echo "Please enter the secure password for application developers: " && read AAD_DEV_PW ```
-Create the first user account in Azure AD using the [az ad user create][az-ad-user-create] command.
+### Create the user accounts
-The following example creates a user with the display name *AKS Dev* and the UPN and secure password using the values in *AAD_DEV_UPN* and *AAD_DEV_PW*:
+1. Create the first user account in Azure AD using the [`az ad user create`][az-ad-user-create] command. The following example creates a user with the display name *AKS Dev* and the UPN and secure password using the values in *AAD_DEV_UPN* and *AAD_DEV_PW*:
```azurecli-interactive AKSDEV_ID=$(az ad user create \
AKSDEV_ID=$(az ad user create \
--query objectId -o tsv) ```
-Now add the user to the *appdev* group created in the previous section using the [az ad group member add][az-ad-group-member-add] command:
+2. Add the user to the *appdev* group created in the previous section using the [`az ad group member add`][az-ad-group-member-add] command.
```azurecli-interactive az ad group member add --group appdev --member-id $AKSDEV_ID ```
-Set the UPN and password for SREs. The following command prompts you for the UPN and sets it to *AAD_SRE_UPN* for use in a later command (remember that the commands in this article are entered into a BASH shell). The UPN must include the verified domain name of your tenant, for example `akssre@contoso.com`.
+3. Set the UPN and password for SREs. The UPN must include the verified domain name of your tenant, for example `akssre@contoso.com`. The following command prompts you for the UPN and sets it to *AAD_SRE_UPN* for use in a later command:
```azurecli-interactive echo "Please enter the UPN for SREs: " && read AAD_SRE_UPN ```
-The following command prompts you for the password and sets it to *AAD_SRE_PW* for use in a later command.
+4. The following command prompts you for the password and sets it to *AAD_SRE_PW* for use in a later command:
```azurecli-interactive echo "Please enter the secure password for SREs: " && read AAD_SRE_PW ```
-Create a second user account. The following example creates a user with the display name *AKS SRE* and the UPN and secure password using the values in *AAD_SRE_UPN* and *AAD_SRE_PW*:
+5. Create a second user account. The following example creates a user with the display name *AKS SRE* and the UPN and secure password using the values in *AAD_SRE_UPN* and *AAD_SRE_PW*:
```azurecli-interactive # Create a user for the SRE role
AKSSRE_ID=$(az ad user create \
az ad group member add --group opssre --member-id $AKSSRE_ID ```
-## Create the AKS cluster resources for app devs
+## Create AKS cluster resources for app devs
-The Azure AD groups and users are now created. Azure role assignments were created for the group members to connect to an AKS cluster as a regular user. Now, let's configure the AKS cluster to allow these different groups access to specific resources.
+We have our Azure AD groups, users, and Azure role assignments created. Now, we'll configure the AKS cluster to allow these different groups access to specific resources.
-First, get the cluster admin credentials using the [az aks get-credentials][az-aks-get-credentials] command. In one of the following sections, you get the regular *user* cluster credentials to see the Azure AD authentication flow in action.
+1. Get the cluster admin credentials using the [`az aks get-credentials`][az-aks-get-credentials] command. In one of the following sections, you get the regular *user* cluster credentials to see the Azure AD authentication flow in action.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --admin ```
-Create a namespace in the AKS cluster using the [kubectl create namespace][kubectl-create] command. The following example creates a namespace name *dev*:
+2. Create a namespace in the AKS cluster using the [`kubectl create namespace`][kubectl-create] command. The following example creates a namespace named *dev*:
```console kubectl create namespace dev ```
-In Kubernetes, *Roles* define the permissions to grant, and *RoleBindings* apply them to desired users or groups. These assignments can be applied to a given namespace, or across the entire cluster. For more information, see [Using Kubernetes RBAC authorization][rbac-authorization].
-
-First, create a Role for the *dev* namespace. This role grants full permissions to the namespace. In production environments, you can specify more granular permissions for different users or groups.
+> [!NOTE]
+> In Kubernetes, *Roles* define the permissions to grant, and *RoleBindings* apply them to desired users or groups. These assignments can be applied to a given namespace, or across the entire cluster. For more information, see [Using Kubernetes RBAC authorization][rbac-authorization].
-Create a file named `role-dev-namespace.yaml` and paste the following YAML manifest:
+3. Create a Role for the *dev* namespace, which grants full permissions to the namespace. In production environments, you can specify more granular permissions for different users or groups. Create a file named `role-dev-namespace.yaml` and paste the following YAML manifest:
```yaml kind: Role
rules:
verbs: ["*"] ```
-Create the Role using the [kubectl apply][kubectl-apply] command and specify the filename of your YAML manifest:
+4. Create the Role using the [`kubectl apply`][kubectl-apply] command and specify the filename of your YAML manifest.
```console kubectl apply -f role-dev-namespace.yaml ```
-Next, get the resource ID for the *appdev* group using the [az ad group show][az-ad-group-show] command. This group is set as the subject of a RoleBinding in the next step.
+5. Get the resource ID for the *appdev* group using the [`az ad group show`][az-ad-group-show] command. This group is set as the subject of a RoleBinding in the next step.
```azurecli-interactive az ad group show --group appdev --query id -o tsv ```
-Now, create a RoleBinding for the *appdev* group to use the previously created Role for namespace access. Create a file named `rolebinding-dev-namespace.yaml` and paste the following YAML manifest. On the last line, replace *groupObjectId* with the group object ID output from the previous command:
+6. Create a RoleBinding for the *appdev* group to use the previously created Role for namespace access. Create a file named `rolebinding-dev-namespace.yaml` and paste the following YAML manifest. On the last line, replace *groupObjectId* with the group object ID output from the previous command.
```yaml kind: RoleBinding
subjects:
> [!TIP] > If you want to create the RoleBinding for a single user, specify *kind: User* and replace *groupObjectId* with the user principal name (UPN) in the above sample.
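+A minimal imperative sketch of the same single-user binding, assuming the Role in `role-dev-namespace.yaml` is named *dev-user-full-access* (adjust the Role name and UPN to match your environment):
+
+```console
+# Imperative sketch: bind a single user (by UPN) to a Role in the dev namespace.
+kubectl create rolebinding dev-user-binding \
+  --role=dev-user-full-access \
+  --user=aksdev@contoso.com \
+  --namespace dev
+```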
-Create the RoleBinding using the [kubectl apply][kubectl-apply] command and specify the filename of your YAML manifest:
+7. Create the RoleBinding using the [`kubectl apply`][kubectl-apply] command and specify the filename of your YAML manifest:
```console kubectl apply -f rolebinding-dev-namespace.yaml
kubectl apply -f rolebinding-dev-namespace.yaml
## Create the AKS cluster resources for SREs
-Now, repeat the previous steps to create a namespace, Role, and RoleBinding for the SREs.
+Now, we'll repeat the previous steps to create a namespace, Role, and RoleBinding for the SREs.
-First, create a namespace for *sre* using the [kubectl create namespace][kubectl-create] command:
+1. Create a namespace for *sre* using the [`kubectl create namespace`][kubectl-create] command.
```console kubectl create namespace sre ```
-Create a file named `role-sre-namespace.yaml` and paste the following YAML manifest:
+2. Create a file named `role-sre-namespace.yaml` and paste the following YAML manifest:
```yaml kind: Role
rules:
verbs: ["*"] ```
-Create the Role using the [kubectl apply][kubectl-apply] command and specify the filename of your YAML manifest:
+3. Create the Role using the [`kubectl apply`][kubectl-apply] command and specify the filename of your YAML manifest.
```console kubectl apply -f role-sre-namespace.yaml ```
-Get the resource ID for the *opssre* group using the [az ad group show][az-ad-group-show] command:
+4. Get the resource ID for the *opssre* group using the [`az ad group show`][az-ad-group-show] command.
```azurecli-interactive az ad group show --group opssre --query id -o tsv ```
-Create a RoleBinding for the *opssre* group to use the previously created Role for namespace access. Create a file named `rolebinding-sre-namespace.yaml` and paste the following YAML manifest. On the last line, replace *groupObjectId* with the group object ID output from the previous command:
+5. Create a RoleBinding for the *opssre* group to use the previously created Role for namespace access. Create a file named `rolebinding-sre-namespace.yaml` and paste the following YAML manifest. On the last line, replace *groupObjectId* with the group object ID output from the previous command.
```yaml kind: RoleBinding
subjects:
name: groupObjectId ```
-Create the RoleBinding using the [kubectl apply][kubectl-apply] command and specify the filename of your YAML manifest:
+6. Create the RoleBinding using the [`kubectl apply`][kubectl-apply] command and specify the filename of your YAML manifest.
```console kubectl apply -f rolebinding-sre-namespace.yaml
kubectl apply -f rolebinding-sre-namespace.yaml
## Interact with cluster resources using Azure AD identities
-Now, let's test the expected permissions work when you create and manage resources in an AKS cluster. In these examples, you schedule and view pods in the user's assigned namespace. Then, you try to schedule and view pods outside of the assigned namespace.
+Now, we'll test that the expected permissions work when you create and manage resources in an AKS cluster. In these examples, we'll schedule and view pods in the user's assigned namespace, and try to schedule and view pods outside of the assigned namespace.
-First, reset the *kubeconfig* context using the [az aks get-credentials][az-aks-get-credentials] command. In a previous section, you set the context using the cluster admin credentials. The admin user bypasses Azure AD sign-in prompts. Without the `--admin` parameter, the user context is applied that requires all requests to be authenticated using Azure AD.
+1. Reset the *kubeconfig* context using the [`az aks get-credentials`][az-aks-get-credentials] command. In a previous section, you set the context using the cluster admin credentials. The admin user bypasses Azure AD sign-in prompts. Without the `--admin` parameter, the user context is applied that requires all requests to be authenticated using Azure AD.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --overwrite-existing ```
-Schedule a basic NGINX pod using the [kubectl run][kubectl-run] command in the *dev* namespace:
+2. Schedule a basic NGINX pod using the [`kubectl run`][kubectl-run] command in the *dev* namespace.
```console kubectl run nginx-dev --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace dev ```
-As the sign-in prompt, enter the credentials for your own `appdev@contoso.com` account created at the start of the article. Once you are successfully signed in, the account token is cached for future `kubectl` commands. The NGINX is successfully schedule, as shown in the following example output:
+3. At the sign-in prompt, enter the credentials for your own `appdev@contoso.com` account created at the start of the article. Once you're successfully signed in, the account token is cached for future `kubectl` commands. The NGINX pod is successfully scheduled, as shown in the following example output:
```console $ kubectl run nginx-dev --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace dev
To sign in, use a web browser to open the page https://microsoft.com/devicelogin
pod/nginx-dev created ```
-Now use the [kubectl get pods][kubectl-get] command to view pods in the *dev* namespace.
+4. Use the [`kubectl get pods`][kubectl-get] command to view pods in the *dev* namespace.
```console kubectl get pods --namespace dev ```
-As shown in the following example output, the NGINX pod is successfully *Running*:
+5. Ensure the status of the NGINX pod is *Running*. The output will look similar to the following:
```console $ kubectl get pods --namespace dev
nginx-dev 1/1 Running 0 4m
### Create and view cluster resources outside of the assigned namespace
-Now try to view pods outside of the *dev* namespace. Use the [kubectl get pods][kubectl-get] command again, this time to see `--all-namespaces` as follows:
+Try to view pods outside of the *dev* namespace. Use the [`kubectl get pods`][kubectl-get] command again, this time with the `--all-namespaces` flag.
```console kubectl get pods --all-namespaces ```
-The user's group membership does not have a Kubernetes Role that allows this action, as shown in the following example output:
+The user's group membership doesn't have a Kubernetes Role that allows this action, as shown in the following example output:
```console
-$ kubectl get pods --all-namespaces
- Error from server (Forbidden): pods is forbidden: User "aksdev@contoso.com" cannot list resource "pods" in API group "" at the cluster scope ```
-In the same way, try to schedule a pod in different namespace, such as the *sre* namespace. The user's group membership does not align with a Kubernetes Role and RoleBinding to grant these permissions, as shown in the following example output:
+In the same way, try to schedule a pod in a different namespace, such as the *sre* namespace. The user's group membership doesn't align with a Kubernetes Role and RoleBinding to grant these permissions, as shown in the following example output:
```console $ kubectl run nginx-dev --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace sre
Error from server (Forbidden): pods is forbidden: User "aksdev@contoso.com" cann
To confirm that our Azure AD group membership and Kubernetes RBAC work correctly between different users and groups, try the previous commands when signed in as the *opssre* user.
-Reset the *kubeconfig* context using the [az aks get-credentials][az-aks-get-credentials] command that clears the previously cached authentication token for the *aksdev* user:
+1. Reset the *kubeconfig* context using the [`az aks get-credentials`][az-aks-get-credentials] command that clears the previously cached authentication token for the *aksdev* user.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --overwrite-existing ```
-Try to schedule and view pods in the assigned *sre* namespace. When prompted, sign in with your own `opssre@contoso.com` credentials created at the start of the article:
+2. Try to schedule and view pods in the assigned *sre* namespace. When prompted, sign in with your own `opssre@contoso.com` credentials created at the start of the article.
```console kubectl run nginx-sre --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace sre
As shown in the following example output, you can successfully create and view t
```console $ kubectl run nginx-sre --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace sre
-To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code BM4RHP3FD to authenticate.
+To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code BM4RHP3FD to authenticate.
pod/nginx-sre created
NAME READY STATUS RESTARTS AGE
nginx-sre 1/1 Running 0 ```
-Now, try to view or schedule pods outside of assigned SRE namespace:
+3. Try to view or schedule pods outside of the assigned SRE namespace.
```console kubectl get pods --all-namespaces kubectl run nginx-sre --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace dev ```
-These `kubectl` commands fail, as shown in the following example output. The user's group membership and Kubernetes Role and RoleBindings don't grant permissions to create or manager resources in other namespaces:
+These `kubectl` commands fail, as shown in the following example output. The user's group membership and Kubernetes Role and RoleBindings don't grant permissions to create or manage resources in other namespaces.
```console $ kubectl get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "akssre@contoso.com" cann
## Clean up resources
-In this article, you created resources in the AKS cluster and users and groups in Azure AD. To clean up all these resources, run the following commands:
+In this article, you created resources in the AKS cluster and users and groups in Azure AD. To clean up all of the resources, run the following commands:
```azurecli-interactive
-# Get the admin kubeconfig context to delete the necessary cluster resources
+# Get the admin kubeconfig context to delete the necessary cluster resources.
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --admin
-# Delete the dev and sre namespaces. This also deletes the pods, Roles, and RoleBindings
+# Delete the dev and sre namespaces. This also deletes the pods, Roles, and RoleBindings.
+ kubectl delete namespace dev kubectl delete namespace sre
-# Delete the Azure AD user accounts for aksdev and akssre
+# Delete the Azure AD user accounts for aksdev and akssre.
+ az ad user delete --upn-or-object-id $AKSDEV_ID az ad user delete --upn-or-object-id $AKSSRE_ID # Delete the Azure AD groups for appdev and opssre. This also deletes the Azure role assignments.+ az ad group delete --group appdev az ad group delete --group opssre ``` ## Next steps
-For more information about how to secure Kubernetes clusters, see [Access and identity options for AKS)][rbac-authorization].
+* For more information about how to secure Kubernetes clusters, see [Access and identity options for AKS][rbac-authorization].
-For best practices on identity and resource control, see [Best practices for authentication and authorization in AKS][operator-best-practices-identity].
+* For best practices on identity and resource control, see [Best practices for authentication and authorization in AKS][operator-best-practices-identity].
<!-- LINKS - external --> [kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
aks Tutorial Kubernetes Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md
Title: Tutorial - Use a workload identity with an application on Azure Kubernete
description: In this Azure Kubernetes Service (AKS) tutorial, you deploy an Azure Kubernetes Service cluster and configure an application to use a workload identity. Previously updated : 12/02/2022 Last updated : 01/11/2023 # Tutorial: Use a workload identity with an application on Azure Kubernetes Service (AKS)
kind: Pod
metadata: name: quick-start namespace: ${SERVICE_ACCOUNT_NAMESPACE}
+ labels:
+ azure.workload.identity/use: "true"
spec: serviceAccountName: ${SERVICE_ACCOUNT_NAME} containers:
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workl
description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity (preview). Previously updated : 01/06/2023 Last updated : 01/11/2023 # Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster
az identity federated-credential create --name myfederatedIdentity --identity-na
> [!NOTE] > It takes a few seconds for the federated identity credential to be propagated after being initially added. If a token request is made immediately after adding the federated identity credential, it might lead to failure for a couple of minutes as the cache is populated in the directory with old data. To avoid this issue, you can add a slight delay after adding the federated identity credential.
+## Deploy your application
+
+> [!IMPORTANT]
+> Ensure that your application pods using workload identity include the label `azure.workload.identity/use: "true"` in the pod spec; otherwise, the pods will fail after they're restarted.
+
+```azurecli-interactive
+kubectl apply -f <your application>
+```
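+
+To confirm which running pods carry the label, a quick check such as the following can help (a sketch; scope the namespace as needed):
+
+```console
+# List running pods that carry the workload identity label.
+kubectl get pods --selector "azure.workload.identity/use=true" --all-namespaces
+```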
+ ## Disable workload identity To disable the Azure AD workload identity on the AKS cluster where it's been enabled and configured, you can run the following command:
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
Previously updated : 09/13/2022 Last updated : 01/06/2023
When adding virtual machines running Windows to the VNet, allow outbound connect
## Control plane IP addresses
-The following IP addresses are divided by **Azure Environment**. When allowing inbound requests, IP addresses marked with **Global** must be permitted, along with the **Region**-specific IP address. In some cases, two IP addresses are listed. Permit both IP addresses.
+The following IP addresses are divided by **Azure Environment** and **Region**. In some cases, two IP addresses are listed. Permit both IP addresses.
> [!IMPORTANT] > Control plane IP addresses should be configured for network access rules only when needed in certain networking scenarios. We recommend using the **ApiManagement** [service tag](../virtual-network/service-tags-overview.md) instead of control plane IP addresses to prevent downtime when infrastructure improvements necessitate IP address changes. | **Azure Environment**| **Region**| **IP address**| |--|-||
-| Azure Public| South Central US (Global)| 104.214.19.224|
-| Azure Public| North Central US (Global)| 52.162.110.80|
| Azure Public| Australia Central| 20.37.52.67| | Azure Public| Australia Central 2| 20.39.99.81| | Azure Public| Australia East| 20.40.125.155|
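+
+In line with the recommendation above, the following sketch adds an NSG rule that uses the **ApiManagement** service tag instead of literal control plane IP addresses; the resource group and NSG names are placeholders:
+
+```azurecli-interactive
+# Sketch: allow inbound management traffic with the ApiManagement service tag.
+# Port 3443 is the management endpoint; resource names are placeholders.
+az network nsg rule create \
+  --resource-group myResourceGroup \
+  --nsg-name myNsg \
+  --name AllowApiManagementControlPlane \
+  --priority 100 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --source-address-prefixes ApiManagement \
+  --destination-address-prefixes VirtualNetwork \
+  --destination-port-ranges 3443
+```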
applied-ai-services Form Recognizer Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-image-tags.md
Release notes for `v2.1`:
| Container | Tags | Retrieve image | ||:|| | **Layout**| &bullet; `latest` </br> &bullet; `2.1`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout`|
-| **Business Card** | &bullet; `latest` </br> &bullet; `2.1` |`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt` |
+| **Business Card** | &bullet; `latest` </br> &bullet; `2.1` |`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard` |
| **ID Document** | &bullet; `latest` </br> &bullet; `2.1`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document`| | **Receipt**| &bullet; `latest` </br> &bullet; `2.1`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt` | | **Invoice**| &bullet; `latest` </br> &bullet; `2.1`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
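+
+After pulling an image, you can start it with the standard Cognitive Services container parameters. The following is a sketch for the Layout container; `{ENDPOINT_URI}` and `{API_KEY}` are placeholders for your resource's endpoint and key, and the memory and CPU values are illustrative:
+
+```console
+# Sketch: run the Layout container locally with placeholder billing values.
+docker run --rm -it -p 5000:5000 --memory 8g --cpus 2 \
+  mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout \
+  Eula=accept \
+  Billing={ENDPOINT_URI} \
+  ApiKey={API_KEY}
+```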
automation Region Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md
This article provides the supported mappings in order to successfully enable and
For more information, see [Log Analytics workspace and Automation account](../../azure-monitor/insights/solutions.md#log-analytics-workspace-and-automation-account).
-## Supported mappings for version 1
+## Supported mappings for Log Analytics and Azure Automation
> [!NOTE] > As shown in following table, only one mapping can exist between Log Analytics and Azure Automation.
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
The following table lists the supported operating systems for update assessments
All operating systems are assumed to be x64. x86 is not supported for any operating system. > [!NOTE]
-> - Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-version-1).
+> - Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-log-analytics-and-azure-automation).
> - Update Management does not support CIS hardened images. # [Windows operating system](#tab/os-win)
All operating systems are assumed to be x64. x86 is not supported for any operat
# [Linux operating system](#tab/os-linux) > [!NOTE]
-> Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-version-1).
+> Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-log-analytics-and-azure-automation).
|Operating system |Notes | |||
By default, Windows VMs that are deployed from Azure Marketplace are set to rece
- The Update Management feature depends on the system Hybrid Runbook Worker role, and you should confirm its [system requirements](../automation-linux-hrw-install.md#prerequisites). Because Update Management uses Automation runbooks to initiate assessment and update of your machines, review the [version of Python required](../automation-linux-hrw-install.md#supported-runbook-types) for your supported Linux distro. > [!NOTE]
-> Update assessment of Linux machines is supported in certain regions only. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-version-1).
+> Update assessment of Linux machines is supported in certain regions only. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-log-analytics-and-azure-automation).
For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, to monitor the machines use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) instead of Azure Monitor for VMs.
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
Automation account and State Configuration availability in Brazil South East. Fo
**Type:** New feature
-Azure Automation region mapping updated to support Update Management feature in South Central US region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings-for-version-1) for updates to the documentation to reflect this change.
+Azure Automation region mapping updated to support Update Management feature in South Central US region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings-for-log-analytics-and-azure-automation) for updates to the documentation to reflect this change.
## September 2020
The New-OnPremiseHybridWorker runbook has been updated to support Az modules. Fo
**Type:** New feature
-Azure Automation region mapping updated to support Update Management feature in China East 2 region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings-for-version-1) for updates to the documentation to reflect this change.
+Azure Automation region mapping updated to support Update Management feature in China East 2 region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings-for-log-analytics-and-azure-automation) for updates to the documentation to reflect this change.
## May 2020
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
description: Significant updates to Azure Automation updated each month.
Previously updated : 11/02/2021 Last updated : 01/11/2022
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
+## January 2023
+
+### Public Preview of Automation extension for Visual Studio Code
+
 Azure Automation now provides an [extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=azure-automation.vscode-azureautomation&ssr=false#overview) that allows you to create and manage runbooks. For more information, see [Key features and limitations](automation-runbook-authoring.md) and [Runbook management operations](how-to/runbook-authoring-extension-for-vscode.md).
++ ## November 2022 ### General Availability: Azure Automation User Hybrid Runbook Worker Extension
Azure Automation now supports runbooks in latest Runtime versions - PowerShell 7
### Guidance for Disaster Recovery of Azure Automation account
-Set up disaster recovery for your Automation accounts to handle a region-wide or zone-wide failure. [Learn more](https://learn.microsoft.com/azure/automation/automation-disaster-recovery).
+Set up disaster recovery for your Automation accounts to handle a region-wide or zone-wide failure. [Learn more](automation-disaster-recovery.md).
+ ## September 2022 ### Availability zones support for Azure Automation
-Azure Automation now supports [Azure availability zones](../reliability/availability-zones-overview.md#availability-zones) to provide improved resiliency and reliability by providing high availability to the service, runbooks, and other Automation assets. [Learn more](https://learn.microsoft.com/azure/automation/automation-availability-zones).
+Azure Automation now supports [Azure availability zones](../reliability/availability-zones-overview.md#availability-zones) to provide improved resiliency and reliability by providing high availability to the service, runbooks, and other Automation assets. [Learn more](automation-availability-zones.md).
## July 2022
Start/Stop VMs during off-hours (v1) will deprecate on May 21, 2022. Customers s
**Type:** New feature
-Region mapping has been updated to support Update Management and Change Tracking in Norway East, UAE North, North Central US, Brazil South, and Korea Central. For more information, see [Supported mappings](./how-to/region-mappings.md#supported-mappings-for-version-1).
+Region mapping has been updated to support Update Management and Change Tracking in Norway East, UAE North, North Central US, Brazil South, and Korea Central. For more information, see [Supported mappings](./how-to/region-mappings.md#supported-mappings-for-log-analytics-and-azure-automation).
### Support for system-assigned Managed Identities
azure-arc Ssh Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md
To use this functionality, ensure the following:
- [Ensure the Arc-enabled server has the "sshd" service enabled](/windows-server/administration/openssh/openssh_install_firstuse). - Ensure you have the Virtual Machine Local User Login role assigned (role ID: 602da2ba-a5c2-41da-b01d-5360126ab525)
+Authenticating with Azure AD credentials has additional requirements:
+ - `aadsshlogin` and `aadsshlogin-selinux` (as appropriate) must be installed on the Arc-enabled server. These packages are installed with the AADSSHLoginForLinux VM extension.
+ - Configure role assignments for the VM. Two Azure roles are used to authorize VM login:
+ - **Virtual Machine Administrator Login**: Users who have this role assigned can log in to an Azure virtual machine with administrator privileges.
+ - **Virtual Machine User Login**: Users who have this role assigned can log in to an Azure virtual machine with regular user privileges.
+
+ An Azure user who has the Owner or Contributor role assigned for a VM doesn't automatically have privileges to Azure AD login to the VM over SSH. There's an intentional (and audited) separation between the set of people who control virtual machines and the set of people who can access virtual machines.
+
+ > [!NOTE]
+ > The Virtual Machine Administrator Login and Virtual Machine User Login roles use `dataActions` and can be assigned at the management group, subscription, resource group, or resource scope. We recommend that you assign the roles at the management group, subscription, or resource level and not at the individual VM level. This practice avoids the risk of reaching the [Azure role assignments limit](../../role-based-access-control/troubleshooting.md#limits) per subscription.
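For illustration, a minimal sketch of such a role assignment with Azure PowerShell, using placeholder names and the resource group scope recommended in the note:

```powershell
# Grant a user regular (non-administrator) SSH login rights at resource
# group scope rather than on the individual VM.
New-AzRoleAssignment -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Virtual Machine User Login' `
    -ResourceGroupName 'myResourceGroup'
```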
+ ### Availability SSH access to Arc-enabled servers is currently supported in the following regions: - eastus2euap, eastus, eastus2, westus2, southeastasia, westeurope, northeurope, westcentralus, southcentralus, uksouth, australiaeast, francecentral, japaneast, eastasia, koreacentral, westus3, westus, centralus, northcentralus.
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
7. Replace the `app.route()` method call with the following code: ```python
- @app.route(route="hello", http_auth_level=func.AuthLevel.ANONYMOUS)
+ @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
``` This code enables your HTTP function endpoint to be called in Azure without having to provide an [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys). Local execution doesn't require authorization keys.
In this section, you use Visual Studio Code to create a local Azure Functions pr
Your function code should now look like the following example: ```python
+ app = func.FunctionApp()
@app.function_name(name="HttpTrigger1")
- @app.route(route="hello", http_auth_level=func.AuthLevel.ANONYMOUS)
+ @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def test_function(req: func.HttpRequest) -> func.HttpResponse: logging.info('Python HTTP trigger function processed a request.')
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
- Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac). - [Data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint). - [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.-- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text file.
- - The log file must be stored on the local drive of the machine on which Azure Monitor Agent is running.
- - Each entry in the log file must be delineated with an end of line.
- - The log file must not allow circular logging, log rotation where the file is overwritten with new entries, or renaming where a file is moved and a new file with the same name is opened.
+- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text file.
+
+ Text file requirements (a brief writer example follows the list):
+ - Store on the local drive of the machine on which Azure Monitor Agent is running.
+ - Delineate with an end of line.
+ - Use ASCII or UTF-8 encoding. Other formats such as UTF-16 aren't supported.
+ - Do not allow circular logging, log rotation where the file is overwritten with new entries, or renaming where a file is moved and a new file with the same name is opened.
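Purely as an illustration of these requirements (the path and entry format are assumptions), a writer might append newline-terminated UTF-8 entries to a single local file:

```powershell
# Append one newline-terminated, UTF-8 entry to a fixed local file; the file
# is only ever appended to, never rotated, overwritten, or renamed.
$entry = '{0} INFO Sample application event' -f (Get-Date -Format 'yyyy-MM-ddTHH:mm:ssK')
Add-Content -Path 'C:\logs\app.log' -Value $entry -Encoding UTF8
```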
## Create a custom table
azure-monitor Alerts Common Schema Test Action Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-test-action-definitions.md
Title: Alert schema definitions in Azure Monitor for Test Action Group
-description: Understanding the common alert schema definitions for Azure Monitor for Test Action group
+description: Understand the common alert schema definitions for Azure Monitor for the Test Action group.
Last updated 01/14/2022 ms.revewer: jagummersall
-# Common alert schema definitions for Test Action Group (Preview)
+# Common alert schema definitions for Test Action Group (preview)
-This article describes the [common alert schema definitions](./alerts-common-schema.md) for Azure Monitor, including those for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks.
+This article describes the [common alert schema definitions](./alerts-common-schema.md) for Azure Monitor. It includes schema definitions for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks.
Any alert instance describes the resource that was affected and the cause of the alert. These instances are described in the common schema in the following sections:
-* **Essentials**: A set of standardized fields, common across all alert types, which describe what resource the alert is on, along with additional common alert metadata (for example, severity or description).
-* **Alert context**: A set of fields that describes the cause of the alert, with fields that vary based on the alert type. For example, a metric alert includes fields like the metric name and metric value in the alert context, whereas an activity log alert has information about the event that generated the alert.
+* **Essentials**: A set of standardized fields common across all alert types that describes what resource the alert is on, along with more common alert metadata like severity or description.
+* **Alert context**: A set of fields that describes the cause of the alert, with fields that vary based on the alert type. For example, a metric alert includes fields like the metric name and metric value in the alert context. An activity log alert has information about the event that generated the alert.
**Sample alert payload** ```json
Any alert instance describes the resource that was affected and the cause of the
| Field | Description| |:|:|
-| alertId | The unique resource ID identifying the alert instance. |
-| alertRule | The name of the alert rule that generated the alert instance. |
-| Severity | The severity of the alert. Possible values: Sev0, Sev1, Sev2, Sev3, or Sev4. |
-| signalType | Identifies the signal on which the alert rule was defined. Possible values: Metric, Log, or Activity Log. |
-| monitorCondition | When an alert fires, the alert's monitor condition is set to **Fired**. When the underlying condition that caused the alert to fire clears, the monitor condition is set to **Resolved**. |
-| monitoringService | The monitoring service or solution that generated the alert. The fields for the alert context are dictated by the monitoring service. |
-| alertTargetIds | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. |
-| configurationItems | The list of affected resources of an alert. The configuration items can be different from the alert targets in some cases, e.g. in metric-for-log or log alerts defined on a Log Analytics workspace, where the configuration items are the actual resources sending the telemetry, and not the workspace. This field is used by ITSM systems to correlate alerts to resources in a CMDB. |
-| originAlertId | The ID of the alert instance, as generated by the monitoring service generating it. |
-| firedDateTime | The date and time when the alert instance was fired in Coordinated Universal Time (UTC). |
-| resolvedDateTime | The date and time when the monitor condition for the alert instance is set to **Resolved** in UTC. Currently only applicable for metric alerts.|
-| description | The description, as defined in the alert rule. |
-|essentialsVersion| The version number for the essentials section.|
-|alertContextVersion | The version number for the `alertContext` section. |
+| `alertId` | The unique resource ID that identifies the alert instance. |
+| `alertRule` | The name of the alert rule that generated the alert instance. |
+| `Severity` | The severity of the alert. Possible values: `Sev0`, `Sev1`, `Sev2`, `Sev3`, or `Sev4`. |
+| `signalType` | Identifies the signal on which the alert rule was defined. Possible values: `Metric`, `Log`, or `Activity Log`. |
+| `monitorCondition` | When an alert fires, the alert's monitor condition is set to `Fired`. When the underlying condition that caused the alert to fire clears, the monitor condition is set to `Resolved`. |
+| `monitoringService` | The monitoring service or solution that generated the alert. The fields for the alert context are dictated by the monitoring service. |
+| `alertTargetIds` | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. |
+| `configurationItems` | The list of affected resources of an alert. The configuration items can be different from the alert targets in some cases, for example, in metric-for-log or log alerts defined on a Log Analytics workspace. The configuration items are the actual resources that send the telemetry and not the workspace. This field is used by IT service management systems to correlate alerts to resources in a configuration management database. |
+| `originAlertId` | The ID of the alert instance, as generated by the monitoring service generating it. |
+| `firedDateTime` | The date and time when the alert instance was fired in Coordinated Universal Time (UTC). |
+| `resolvedDateTime` | The date and time when the monitor condition for the alert instance is set to `Resolved` in UTC. Currently, only applicable for metric alerts.|
+| `description` | The description, as defined in the alert rule. |
+|`essentialsVersion`| The version number for the `essentials` section.|
+|`alertContextVersion` | The version number for the `alertContext` section. |
**Sample values** ```json
Any alert instance describes the resource that was affected and the cause of the
### Metric alerts - Static threshold
-#### `monitoringService` = `Platform`
+#### monitoringService = Platform
**Sample values** ```json
Any alert instance describes the resource that was affected and the cause of the
### Metric alerts - Dynamic threshold
-#### `monitoringService` = `Platform`
+#### monitoringService = Platform
**Sample values** ```json
Any alert instance describes the resource that was affected and the cause of the
### Log alerts > [!NOTE]
-> For log alerts that have a custom email subject and/or JSON payload defined, enabling the common schema reverts email subject and/or payload schema to the one described as follows. This means that if you want to have a custom JSON payload defined, the webhook cannot use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert. Search results aren't embedded in the log alerts payload if they cause the alert size to cross this threshold. You can determine this by checking the flag `IncludedSearchResults`. When the search results aren't included, you should use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get).
+> For log alerts that have a custom email subject and/or JSON payload defined, enabling the common schema reverts email subject and/or payload schema to the one described as follows. This means that if you want to have a custom JSON payload defined, the webhook can't use the common alert schema.
-#### `monitoringService` = `Log Alerts V1 – Metric`
+Alerts with the common schema enabled have an upper size limit of 256 KB per alert. Search results aren't embedded in the log alerts payload if they cause the alert size to cross this threshold. To determine size, check the flag `IncludedSearchResults`. When the search results aren't included, use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get).
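As a hedged sketch of following those links from PowerShell (the `$payload` variable and field path assume a deserialized common-schema log alert, for example one captured from a webhook):

```powershell
# Deserialize a common-schema alert payload saved from a webhook.
$payload = Get-Content -Path '.\alert.json' -Raw | ConvertFrom-Json

# Follow the filtered-results link to fetch the query rows from the Log Analytics API.
$link = $payload.data.alertContext.LinkToFilteredSearchResultsAPI
(Invoke-AzRestMethod -Uri $link -Method GET).Content
```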
+
+#### monitoringService = Log Alerts V1 – Metric
**Sample values** ```json
Any alert instance describes the resource that was affected and the cause of the
} ```
-#### `monitoringService` = `Log Alerts V1 - Numresults`
+#### monitoringService = Log Alerts V1 - Numresults
**Sample values** ```json
Any alert instance describes the resource that was affected and the cause of the
} ```
-#### `monitoringService` = `Log Alerts V2`
+#### monitoringService = Log Alerts V2
> [!NOTE]
-> Log alerts rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when using this version. You should use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. You can also use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links the generate a custom payload.
+> Log alerts rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when you use this version. Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts.
+
+You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
**Sample values** ```json
Any alert instance describes the resource that was affected and the cause of the
### Activity log alerts
-#### `monitoringService` = `Activity Log - Administrative`
+#### monitoringService = Activity Log - Administrative
**Sample values** ```json
Any alert instance describes the resource that was affected and the cause of the
} ```
-#### `monitoringService` = `ServiceHealth`
+#### monitoringService = ServiceHealth
**Sample values** ```json
Any alert instance describes the resource that was affected and the cause of the
} ```
-#### `monitoringService` = `Resource Health`
+#### monitoringService = Resource Health
**Sample values** ```json
Any alert instance describes the resource that was affected and the cause of the
} ```
-#### `monitoringService` = `Budget`
+#### monitoringService = Budget
**Sample values** ```json
Any alert instance describes the resource that was affected and the cause of the
} ```
-#### `monitoringService` = `Smart Alert`
+#### monitoringService = Smart Alert
**Sample values** ```json
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
You can also create an activity log alert on future events similar to an activit
The current alert rule wizard is different from the earlier experience: - Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1,000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action:
- - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, which gives you context for why the alert fired and how to fix the issue.
+ - We recommend using [Dimensions](alerts-types.md#narrow-the-target-by-using-dimensions). Dimensions provide the column value that fired the alert, which gives you context for why the alert fired and how to fix the issue.
- When you need to investigate in the logs, use the link in the alert to the search results in logs. - If you need the raw search results or for any other advanced customizations, use Azure Logic Apps. - The new alert rule wizard doesn't support customization of the JSON payload.
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
In the past, users used the [legacy Log Analytics Alert API](./api-alerts.md) to
- Single template for creation of alert rules (previously needed three separate templates). - Single API for all Azure resources log alerting. - Support for stateful (preview) and 1-minute log alerts.-- [PowerShell cmdlets](./alerts-manage-alerts-previous-version.md#manage-log-alerts-using-powershell) and [Azure CLI](./alerts-log.md#manage-log-alerts-using-cli) support for switched rules.
+- [PowerShell cmdlets](./alerts-manage-alerts-previous-version.md#manage-log-alerts-by-using-powershell) and [Azure CLI](./alerts-log.md#manage-log-alerts-using-cli) support for switched rules.
- Alignment of severities with all other alert types and newer rules. - Ability to create [cross workspace log alert](../logs/cross-workspace-query.md) that span several external resources like Log Analytics workspaces or Application Insights resources for switched rules. - Users can specify dimensions to split the alerts for switched rules.
In the past, users used the [legacy Log Analytics Alert API](./api-alerts.md) to
## Impact -- All switched rules must be created/edited with the current API. See [sample use via Azure Resource Template](alerts-log-create-templates.md) and [sample use via PowerShell](./alerts-manage-alerts-previous-version.md#manage-log-alerts-using-powershell).
+- All switched rules must be created/edited with the current API. See [sample use via Azure Resource Template](alerts-log-create-templates.md) and [sample use via PowerShell](./alerts-manage-alerts-previous-version.md#manage-log-alerts-by-using-powershell).
- As rules become Azure Resource Manager tracked resources in the current API and must be unique, the rule's resource ID will change to this structure: `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Display names of the alert rule will remain unchanged. ## Process
If the Log Analytics workspace wasn't switched, the response is:
- Learn about the [Azure Monitor - Log Alerts](./alerts-unified-log.md). - Learn how to [manage your log alerts using the API](alerts-log-create-templates.md).-- Learn how to [manage log alerts using PowerShell](./alerts-manage-alerts-previous-version.md#manage-log-alerts-using-powershell).
+- Learn how to [manage log alerts using PowerShell](./alerts-manage-alerts-previous-version.md#manage-log-alerts-by-using-powershell).
- Learn more about the [Azure Alerts experience](./alerts-overview.md).
azure-monitor Alerts Manage Alert Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-instances.md
The **alerts details** page provides details about the selected alert.
- To see all closed alerts, select the **History** tab. :::image type="content" source="media/alerts-managing-alert-instances/alerts-details-page.png" alt-text="Screenshot of the alerts details page in the Azure portal.":::+
+## Manage your alerts programmatically
+
+You can query your alert instances to create custom views outside of the Azure portal, or to analyze your alerts to identify patterns and trends.
+We recommend that you use [Azure Resource Graph](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade) with the `AlertsManagementResources` schema to manage alerts across multiple subscriptions. For a sample query, see [Azure Resource Graph sample queries for Azure Monitor](../resource-graph-samples.md).
+
+You can use Azure Resource Graph:
+ - with [Azure PowerShell](/powershell/module/az.monitor/)
+ - with the [Azure CLI](/cli/azure/monitor?view=azure-cli-latest&preserve-view=true)
+ - in the Azure portal
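For example, a minimal sketch with the Az.ResourceGraph PowerShell module; the projected columns are illustrative:

```powershell
# Requires the Az.ResourceGraph module: Install-Module -Name Az.ResourceGraph
# List fired alerts across subscriptions from the AlertsManagementResources schema.
Search-AzGraph -Query @'
alertsmanagementresources
| where type =~ "microsoft.alertsmanagement/alerts"
| project name, severity = properties.essentials.severity,
          condition = properties.essentials.monitorCondition
'@
```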
+
+You can also use the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) for lower scale querying or to update fired alerts.
+ ## Next steps - [Learn about Azure Monitor alerts](./alerts-overview.md) - [Create a new alert rule](alerts-log.md)-- [Manage your alerts programmatically](alerts-overview.md#manage-your-alerts-programmatically)+
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
Title: View and manage log alert rules created in previous versions| Microsoft Docs
-description: Use the Azure Monitor portal to manage log alert rules created in earlier versions
+description: Use the Azure Monitor portal to manage log alert rules created in earlier versions.
Last updated 2/23/2022
# Manage alert rules created in previous versions
-> [!NOTE]
-> This article describes the process of managing alert rules created in the previous UI or using API version `2018-04-16` or earlier. Alert rules created in the latest UI are viewed and managed in the new UI, as described in [Create, view, and manage log alerts using Azure Monitor](alerts-log.md).
+This article describes the process of managing alert rules created in the previous UI or by using API version `2018-04-16` or earlier. Alert rules created in the latest UI are viewed and managed in the new UI, as described in [Create, view, and manage log alerts by using Azure Monitor](alerts-log.md).
-1. In the [portal](https://portal.azure.com/), select the relevant resource.
+1. In the [Azure portal](https://portal.azure.com/), select the resource you want.
1. Under **Monitoring**, select **Alerts**.
-1. From the top command bar, select **Alert rules**.
+1. On the top bar, select **Alert rules**.
1. Select the alert rule that you want to edit. 1. In the **Condition** section, select the condition.
-1. The **Configure signal logic** pane opens, with historical data for the query appearing as a graph. You can change the time period of the chart to display data from the last six hours to last week.
- If your query results contain summarized data or specific columns without time column, the chart shows a single value.
+1. The **Configure signal logic** pane opens with historical data for the query displayed as a graph. You can change the **Time range** of the chart to display data from the last six hours to the last week.
+ If your query results contain summarized data or specific columns without the time column, the chart shows a single value.
- :::image type="content" source="media/alerts-log/alerts-edit-alerts-rule.png" alt-text="Edit alerts rule.":::
+ :::image type="content" source="media/alerts-log/alerts-edit-alerts-rule.png" alt-text="Screenshot that shows the Configure signal logic pane.":::
-1. Edit the alert rule conditions using these sections:
- - **Search Query**. In this section, you can modify your query.
- - **Alert logic**. Log Alerts can be based on two types of [**Measures**](./alerts-unified-log.md#measure):
- 1. **Number of results** - Count of records returned by the query.
- 1. **Metric measurement** - *Aggregate value* calculated using summarize grouped by the expressions chosen and the [bin()](/azure/data-explorer/kusto/query/binfunction) selection. For example:
+1. Edit the alert rule conditions by using these sections:
+ - **Search query**: In this section, you can modify your query.
+ - **Alert logic**: Log alerts can be based on two types of [measures](./alerts-unified-log.md#measure):
+ 1. **Number of results**: Count of records returned by the query.
+ 1. **Metric measurement**: **Aggregate value** is calculated by using `summarize` grouped by the expressions chosen and the [bin()](/azure/data-explorer/kusto/query/binfunction) selection. For example:
```Kusto
// Reported errors
union Event, Syslog // Event table stores Windows event records, Syslog stores Linux records
| where EventLevelName == "Error" // EventLevelName is used in Event (Windows) records
or SeverityLevel == "err" // SeverityLevel is used in Syslog (Linux) records
| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
```
- For metric measurements alert logic, you can specify how to [split the alerts by dimensions](./alerts-unified-log.md#split-by-alert-dimensions) using the **Aggregate on** option. The row grouping expression must be unique and sorted.
- > [!NOTE]
- > Since the [bin()](/azure/data-explorer/kusto/query/binfunction) can result in uneven time intervals, the alert service will automatically convert the [bin()](/azure/data-explorer/kusto/query/binfunction) function to a [binat()](/azure/data-explorer/kusto/query/binatfunction) function with appropriate time at runtime, to ensure results with a fixed point.
+ For metric measurements alert logic, you can specify how to [split the alerts by dimensions](./alerts-unified-log.md#split-by-alert-dimensions) by using the **Aggregate on** option. The row grouping expression must be unique and sorted.
+
+ The [bin()](/azure/data-explorer/kusto/query/binfunction) function can result in uneven time intervals, so the alert service automatically converts the [bin()](/azure/data-explorer/kusto/query/binfunction) function to a [binat()](/azure/data-explorer/kusto/query/binatfunction) function with appropriate time at runtime to ensure results with a fixed point.
+
> [!NOTE]
- > Split by alert dimensions is only available for the current scheduledQueryRules API. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch. [Learn more about switching](./alerts-log-api-switch.md). Resource centric alerting at scale is only supported in the API version `2021-08-01` and above.
-
- :::image type="content" source="media/alerts-log/aggregate-on.png" alt-text="Aggregate on.":::
-
- - **Period**. Choose the time range over which to assess the specified condition, using [**Period**](./alerts-unified-log.md#query-time-range) option.
-
-1. When you are finished editing the conditions, select **Done**.
-1. Using the preview data, set the [**Operator**, **Threshold Value**](./alerts-unified-log.md#threshold-and-operator), and [**Frequency**](./alerts-unified-log.md#frequency).
-1. Set the [number of violations to trigger an alert](./alerts-unified-log.md#number-of-violations-to-trigger-alert) by using **Total or Consecutive Breaches**.
-1. Select **Done**.
-1. You can edit the rule **Description**, and **Severity**. These details are used in all alert actions. Additionally, you can choose to not activate the alert rule on creation by selecting **Enable rule upon creation**.
-1. Use the [**Suppress Alerts**](./alerts-unified-log.md#state-and-resolving-alerts) option if you want to suppress rule actions for a specified time after an alert is fired. The rule will still run and create alerts but actions won't be triggered to prevent noise. Mute actions value must be greater than the frequency of alert to be effective.
+ > The **Split by alert dimensions** option is only available for the current scheduledQueryRules API. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you'll need to switch. [Learn more about switching](./alerts-log-api-switch.md). Resource-centric alerting at scale is only supported in the API version `2021-08-01` and later.
+
+ :::image type="content" source="media/alerts-log/aggregate-on.png" alt-text="Screenshot that shows Aggregate on.":::
+
+ - **Period**: Choose the time range over which to assess the specified condition by using the [Period](./alerts-unified-log.md#query-time-range) option.
+
+1. When you're finished editing the conditions, select **Done**.
+1. Use the preview data to set the [Operator, Threshold value](./alerts-unified-log.md#threshold-and-operator), and [Frequency](./alerts-unified-log.md#frequency).
+1. Set the [number of violations to trigger an alert](./alerts-unified-log.md#number-of-violations-to-trigger-alert) by using **Total** or **Consecutive breaches**.
+1. Select **Done**.
+1. You can edit the rule **Description** and **Severity**. These details are used in all alert actions. You can also choose to not activate the alert rule on creation by selecting **Enable rule upon creation**.
+1. Use the [Suppress Alerts](./alerts-unified-log.md#state-and-resolving-alerts) option if you want to suppress rule actions for a specified time after an alert is fired. The rule will still run and create alerts, but actions won't be triggered to prevent noise. The **Mute actions** value must be greater than the frequency of the alert to be effective.
1. To make alerts stateful, select **Automatically resolve alerts (preview)**.
- ![Suppress Alerts for Log Alerts](media/alerts-log/AlertsPreviewSuppress.png)
-1. Specify if the alert rule should trigger one or more [**Action Groups**](./action-groups.md#webhook) when alert condition is met.
+ ![Screenshot that shows the Alert Details pane.](media/alerts-log/AlertsPreviewSuppress.png)
+1. Specify if the alert rule should trigger one or more [action groups](./action-groups.md#webhook) when the alert condition is met.
> [!NOTE]
- > Refer to the [Azure subscription service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) for limits on the actions that can be performed.
+ > For limits on the actions that can be performed, see [Azure subscription service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md).
1. (Optional) Customize actions in log alert rules:
- - **Custom Email Subject**: Overrides the *e-mail subject* of email actions. You can't modify the body of the mail and this field **isn't for email addresses**.
- - **Include custom Json payload**: Overrides the webhook JSON used by Action Groups assuming the action group contains a webhook action. Learn more about [webhook action for Log Alerts](./alerts-log-webhook.md).
- ![Action Overrides for Log Alerts](media/alerts-log/AlertsPreviewOverrideLog.png)
-1. When you have finished editing all of the alert rule options, select **Save**.
+ - **Custom email subject**: Overrides the *email subject* of email actions. You can't modify the body of the email, and this field *isn't for email addresses*.
+ - **Include custom Json payload for webhook**: Overrides the webhook JSON used by action groups, assuming that the action group contains a webhook action. Learn more about [webhook actions for log alerts](./alerts-log-webhook.md).
+ ![Screenshot that shows Action overrides for log alerts.](media/alerts-log/AlertsPreviewOverrideLog.png)
+1. After you've finished editing all the alert rule options, select **Save**.
-## Manage log alerts using PowerShell
+## Manage log alerts by using PowerShell
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-Use the PowerShell cmdlets listed below to manage rules with the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules).
--- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) : PowerShell cmdlet to create a new log alert rule.-- [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) : PowerShell cmdlet to update an existing log alert rule.-- [New-AzScheduledQueryRuleSource](/powershell/module/az.monitor/new-azscheduledqueryrulesource) : PowerShell cmdlet to create or update object specifying source parameters for a log alert. Used as input by [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlet.-- [New-AzScheduledQueryRuleSchedule](/powershell/module/az.monitor/new-azscheduledqueryruleschedule): PowerShell cmdlet to create or update object specifying schedule parameters for a log alert. Used as input by [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlet.-- [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) : PowerShell cmdlet to create or update object specifying action parameters for a log alert. Used as input by [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlet.-- [New-AzScheduledQueryRuleAznsActionGroup](/powershell/module/az.monitor/new-azscheduledqueryruleaznsactiongroup) : PowerShell cmdlet to create or update object specifying action groups parameters for a log alert. Used as input by [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.-- [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition) : PowerShell cmdlet to create or update object specifying trigger condition parameters for log alert. Used as input by [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.-- [New-AzScheduledQueryRuleLogMetricTrigger](/powershell/module/az.monitor/new-azscheduledqueryrulelogmetrictrigger) : PowerShell cmdlet to create or update object specifying metric trigger condition parameters for a 'metric measurement' log alert. Used as input by [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition) cmdlet.-- [Get-AzScheduledQueryRule](/powershell/module/az.monitor/get-azscheduledqueryrule) : PowerShell cmdlet to list existing log alert rules or a specific log alert rule-- [Update-AzScheduledQueryRule](/powershell/module/az.monitor/update-azscheduledqueryrule) : PowerShell cmdlet to enable or disable log alert rule-- [Remove-AzScheduledQueryRule](/powershell/module/az.monitor/remove-azscheduledqueryrule): PowerShell cmdlet to delete an existing log alert rule
+Use the following PowerShell cmdlets to manage rules with the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules):
+
+- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule): PowerShell cmdlet to create a new log alert rule.
+- [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule): PowerShell cmdlet to update an existing log alert rule.
+- [New-AzScheduledQueryRuleSource](/powershell/module/az.monitor/new-azscheduledqueryrulesource): PowerShell cmdlet to create or update the object that specifies source parameters for a log alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.
+- [New-AzScheduledQueryRuleSchedule](/powershell/module/az.monitor/new-azscheduledqueryruleschedule): PowerShell cmdlet to create or update the object that specifies schedule parameters for a log alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.
+- [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction): PowerShell cmdlet to create or update the object that specifies action parameters for a log alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.
+- [New-AzScheduledQueryRuleAznsActionGroup](/powershell/module/az.monitor/new-azscheduledqueryruleaznsactiongroup): PowerShell cmdlet to create or update the object that specifies action group parameters for a log alert. Used as input by the [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.
+- [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition): PowerShell cmdlet to create or update the object that specifies trigger condition parameters for a log alert. Used as input by the [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.
+- [New-AzScheduledQueryRuleLogMetricTrigger](/powershell/module/az.monitor/new-azscheduledqueryrulelogmetrictrigger): PowerShell cmdlet to create or update the object that specifies metric trigger condition parameters for a metric measurement log alert. Used as input by the [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition) cmdlet.
+- [Get-AzScheduledQueryRule](/powershell/module/az.monitor/get-azscheduledqueryrule): PowerShell cmdlet to list existing log alert rules or a specific log alert rule.
+- [Update-AzScheduledQueryRule](/powershell/module/az.monitor/update-azscheduledqueryrule): PowerShell cmdlet to enable or disable a log alert rule.
+- [Remove-AzScheduledQueryRule](/powershell/module/az.monitor/remove-azscheduledqueryrule): PowerShell cmdlet to delete an existing log alert rule.
+ > [!NOTE]
-> ScheduledQueryRules PowerShell cmdlets can only manage rules created in [this version of the Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). Log alert rules created using legacy [Log Analytics Alert API](./api-alerts.md) can only be managed using PowerShell only after [switching to Scheduled Query Rules API](../alerts/alerts-log-api-switch.md).
+> The `ScheduledQueryRules` PowerShell cmdlets can only manage rules created in [this version of the Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). Log alert rules created by using the legacy [Log Analytics Alert API](./api-alerts.md) can only be managed by using PowerShell after you [switch to the Scheduled Query Rules API](../alerts/alerts-log-api-switch.md).
+
+Example steps for creating a log alert rule by using PowerShell:
-Here are example steps for creating a log alert rule using PowerShell:
```powershell $source = New-AzScheduledQueryRuleSource -Query 'Heartbeat | summarize AggregatedValue = count() by bin(TimeGenerated, 5m), _ResourceId' -DataSourceId "/subscriptions/a123d7efg-123c-1234-5678-a12bc3defgh4/resourceGroups/contosoRG/providers/microsoft.OperationalInsights/workspaces/servicews" $schedule = New-AzScheduledQueryRuleSchedule -FrequencyInMinutes 15 -TimeWindowInMinutes 30
$aznsActionGroup = New-AzScheduledQueryRuleAznsActionGroup -ActionGroup "/subscr
$alertingAction = New-AzScheduledQueryRuleAlertingAction -AznsAction $aznsActionGroup -Severity "3" -Trigger $triggerCondition New-AzScheduledQueryRule -ResourceGroupName "contosoRG" -Location "Region Name for your Application Insights App or Log Analytics Workspace" -Action $alertingAction -Enabled $true -Description "Alert description" -Schedule $schedule -Source $source -Name "Alert Name" ```
-Here are example steps for creating a log alert rule using the PowerShell with cross-resource queries:
+
+Example steps for creating a log alert rule by using PowerShell with cross-resource queries:
+ ```powershell $authorized = @ ("/subscriptions/a123d7efg-123c-1234-5678-a12bc3defgh4/resourceGroups/contosoRG/providers/microsoft.OperationalInsights/workspaces/servicewsCrossExample", "/subscriptions/a123d7efg-123c-1234-5678-a12bc3defgh4/resourceGroups/contosoRG/providers/microsoft.insights/components/serviceAppInsights") $source = New-AzScheduledQueryRuleSource -Query 'Heartbeat | summarize AggregatedValue = count() by bin(TimeGenerated, 5m), _ResourceId' -DataSourceId "/subscriptions/a123d7efg-123c-1234-5678-a12bc3defgh4/resourceGroups/contosoRG/providers/microsoft.OperationalInsights/workspaces/servicews" -AuthorizedResource $authorized
$aznsActionGroup = New-AzScheduledQueryRuleAznsActionGroup -ActionGroup "/subscr
$alertingAction = New-AzScheduledQueryRuleAlertingAction -AznsAction $aznsActionGroup -Severity "3" -Trigger $triggerCondition New-AzScheduledQueryRule -ResourceGroupName "contosoRG" -Location "Region Name for your Application Insights App or Log Analytics Workspace" -Action $alertingAction -Enabled $true -Description "Alert description" -Schedule $schedule -Source $source -Name "Alert Name" ```
-You can also create the log alert using a [template and parameters](./alerts-log-create-templates.md) files using PowerShell:
+
+You can also create the log alert by using [template and parameters files](./alerts-log-create-templates.md) with PowerShell:
+ ```powershell Connect-AzAccount Select-AzSubscription -SubscriptionName <yourSubscriptionName>
New-AzResourceGroupDeployment -Name AlertDeployment -ResourceGroupName ResourceG
## Next steps * Learn about [log alerts](./alerts-unified-log.md).
-* Create log alerts using [Azure Resource Manager Templates](./alerts-log-create-templates.md).
+* Create log alerts by using [Azure Resource Manager templates](./alerts-log-create-templates.md).
* Understand [webhook actions for log alerts](./alerts-log-webhook.md). * Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Alerts Smart Detections Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-smart-detections-migration.md
Title: Upgrade Azure Monitor Application Insights smart detection to alerts (Preview) | Microsoft Docs
-description: Learn about the steps required to upgrade your Azure Monitor Application Insights smart detection to alert rules
+ Title: Upgrade Azure Monitor Application Insights smart detection to alerts (preview) | Microsoft Docs
+description: Learn about the steps required to upgrade your Azure Monitor Application Insights smart detection to alert rules.
Last updated 2/23/2022
-# Migrate Azure Monitor Application Insights smart detection to alerts (Preview)
+# Migrate Azure Monitor Application Insights smart detection to alerts (preview)
-This article describes the process of migrating Application Insights smart detection to alerts. The migration creates alert rules for the different smart detection modules. You can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, providing you with multiple methods of actions or notifications on new detections.
+This article describes the process of migrating Application Insights smart detection to alerts. The migration creates alert rules for the different smart detection modules. You can manage and configure these rules like any other Azure Monitor alert rules. You can also configure action groups for these rules to get multiple methods of actions or notifications on new detections.
## Benefits of migration to alerts With the migration, smart detection now allows you to take advantage of the full capabilities of Azure Monitor alerts, including: -- **Rich Notification options for all detectors** - [Action groups](../alerts/action-groups.md) allow you to configure multiple types of notifications and actions that are triggered when an alert is fired. You can configure notification by email, SMS, voice call or push notifications, and actions such as calling a secure webhook, Logic App, automation runbook, and more. Action groups further management at scale by allowing you to configure actions once and use them across multiple alert rules.-- **At-scale management** of smart detection alerts using the Azure Monitor alerts experience and API.-- **Rule based suppression of notifications** - [Action Rules](../alerts/alerts-action-rules.md) help you define or suppress actions at any Azure Resource Manager scope (Azure subscription, resource group, or target resource). They have various filters that help you narrow down the specific subset of alert instances that you want to act on.
+- **Rich notification options for all detectors**: Use [action groups](../alerts/action-groups.md) to configure multiple types of notifications and actions that are triggered when an alert is fired. You can configure notification by email, SMS, voice call, or push notifications. You can configure actions like calling a secure webhook, logic app, and automation runbook. Action groups also enable management at scale by letting you configure actions once and use them across multiple alert rules.
+- **At-scale management**: Smart detection alerts use the Azure Monitor alerts experience and API.
+- **Rule-based suppression of notifications**: Use [action rules](../alerts/alerts-action-rules.md) to define or suppress actions at any Azure Resource Manager scope such as Azure subscription, resource group, or target resource. Filters help you narrow down the specific subset of alert instances that you want to act on.
## Migrated smart detection capabilities
-A new set of alert rules is created when migrating an Application Insights resource. One rule is created for each of the migrated smart detection capabilities. The following table maps the pre-migration smart detection capabilities to post-migration alert rules:
+A new set of alert rules is created when you migrate an Application Insights resource. One rule is created for each of the migrated smart detection capabilities. The following table maps the pre-migration smart detection capabilities to post-migration alert rules.
| Smart detection rule name <sup>(1)</sup> | Alert rule name <sup>(2)</sup> | | - | |
A new set of alert rules is created when migrating an Application Insights resou
| Degradation in trace severity ratio (preview) | Trace severity degradation - *\<Application Insights resource name\>*| | Abnormal rise in exception volume (preview) | Exception anomalies - *\<Application Insights resource name\>*| | Potential memory leak detected (preview) | Potential memory leak - *\<Application Insights resource name\>*|
-| Slow page load time | *discontinued* <sup>(3)</sup> |
-| Slow server response time | *discontinued* <sup>(3)</sup> |
-| Long dependency duration | *discontinued* <sup>(3)</sup> |
-| Potential security issue detected (preview) | *discontinued* <sup>(3)</sup> |
-| Abnormal rise in daily data volume (preview) | *discontinued* <sup>(3)</sup> |
+| Slow page load time | No longer supported <sup>(3)</sup> |
+| Slow server response time | No longer supported <sup>(3)</sup> |
+| Long dependency duration | No longer supported <sup>(3)</sup> |
+| Potential security issue detected (preview) | No longer supported <sup>(3)</sup> |
+| Abnormal rise in daily data volume (preview) | No longer supported <sup>(3)</sup> |
-<sup>(1)</sup> Name of rule as appears in smart detection Settings pane
-<sup>(2)</sup> Name of new alert rule after migration
-<sup>(3)</sup> These smart detection capabilities aren't converted to alerts, because of low usage and reassessment of detection effectiveness. These detectors will no longer be supported for this resource once its migration is completed.
+<sup>(1)</sup> The name of the rule as it appears in the smart detection **Settings** pane.<br>
+<sup>(2)</sup> The name of the new alert rule after migration.<br>
+<sup>(3)</sup> These smart detection capabilities aren't converted to alerts because of low usage and reassessment of detection effectiveness. These detectors will no longer be supported for this resource after its migration is finished.
> [!NOTE]
- > The **Failure Anomalies** smart detector is already created as an alert rule and therefore does not require migration, it is not covered in this document.
-
+ > The **Failure Anomalies** smart detector is already created as an alert rule and doesn't require migration. It isn't discussed in this article.
+ The migration doesn't change the algorithmic design and behavior of smart detection. The same detection performance is expected before and after the change. You need to apply the migration to each Application Insights resource separately. For resources that aren't explicitly migrated, smart detection will continue to work as before.
You need to apply the migration to each Application Insights resource separately
As part of migration, each new alert rule is automatically configured with an action group. The migration can assign a default action group for each rule. The default action group is configured according to the rule notification before the migration: -- If the **smart detection rule had the default email or no notifications configured**, then the new alert rule is configured with an action group named "Application Insights Smart Detection".
- - If the migration tool finds an existing action group with that name, it links the new alert rule to that action group.
- - Otherwise, it creates a new action group with that name. The new group in configured for "Email Azure Resource Manager Role" actions and sends notification to your Azure Resource Manager Monitoring Contributor and Monitoring Reader users.
+- If the smart detection rule had the default email or no notifications configured, the new alert rule is configured with an action group named Application Insights Smart Detection.
+ - If the migration tool finds an existing action group with that name, it links the new alert rule to that action group.
+ - Otherwise, it creates a new action group with that name. The new group is configured for Email Azure Resource Manager Role actions and sends notification to your Azure Resource Manager Monitoring Contributor and Monitoring Reader users.
-- If the **default email notification was changed** before migration, then an action group called "Application Insights Smart Detection \<n\>" is created, with an email action sending notifications to the previously configured email addresses.
+- If the default email notification was changed before migration, an action group called Application Insights Smart Detection \<n\> is created, with an email action sending notifications to the previously configured email addresses.
Instead of using the default action group, you can select an existing action group to be configured for all the new alert rules.
-## Executing smart detection migration process
+## Execute the smart detection migration process
-### Migrate your smart detection using the Azure portal
+Use the Azure portal, the Azure CLI, or Azure Resource Manager templates (ARM templates) to perform the migration.
-To migrate smart detection in your resource, take the following steps:
+### Migrate your smart detection by using the Azure portal
-1. Select **Smart detection** under the **Investigate** heading in your Application Insights resource left-side menu.
+To migrate smart detection in your resource:
-2. Click on the banner reading **"Migrate smart detection to alerts (Preview)**. The migration dialog is opened.
+1. Select **Smart detection** under the **Investigate** heading in your Application Insights resource.
- ![Smart detection feed banner](media/alerts-smart-detections-migration/smart-detection-feed-banner.png)
-
-3. Check the option "Migrate all Application Insights resources in this subscription", or leave it unchecked if you want to migrate only the current resource you are in.
- > [!NOTE]
- > Checking this option will impact all **existing** Application Insights resources (that were not migrated yet). As long as the migration to alerts is in preview, new Application Insights resources will still be created with non-alerts smart detection.
+1. Select the banner reading **Migrate smart detection to alerts (Preview)**. The migration dialog appears.
+ ![Screenshot that shows the Smart Detection feed banner.](media/alerts-smart-detections-migration/smart-detection-feed-banner.png)
+1. Select the **Migrate all Application Insights resources in this subscription** option. Or you can leave the option cleared if you want to migrate only the current resource you're in.
+ > [!NOTE]
+ > Selecting this option affects all existing Application Insights resources that weren't migrated yet. As long as the migration to alerts is in preview, new Application Insights resources will still be created with non-alerts smart detection.
-4. Select an action group to be configured for the new alert rules. You can choose between using the default action group (as explained above) or using one of your existing action groups.
+1. Select an action group to be configured for the new alert rules. You can use the default action group as explained or use one of your existing action groups.
-5. Select **Migrate** to start the migration process.
+1. Select **Migrate** to start the migration process.
- ![Smart detection migration dialog](media/alerts-smart-detections-migration/smart-detection-migration-dialog.png)
+ ![Screenshot that shows the Smart Detection migration dialog.](media/alerts-smart-detections-migration/smart-detection-migration-dialog.png)
-After the migration, new alert rules are created for your Application Insight resource, as explained above.
+After the migration, new alert rules are created for your Application Insights resource, as explained earlier.
-### Migrate your smart detection using Azure CLI
+### Migrate your smart detection by using the Azure CLI
-You can start the smart detection migration using the following Azure CLI command. The command triggers the pre-configured migration process as described previously.
+Start the smart detection migration by using the following Azure CLI command. The command triggers the preconfigured migration process as previously described.
```azurecli
az rest --method POST --uri /subscriptions/{subscriptionId}/providers/Microsoft.AlertsManagement/migrateFromSmartDetection?api-version=2021-01-01-preview --body @body.txt
```
-For migrating a single Application Insights resource, body.txt should include:
+To migrate a single Application Insights resource, *body.txt* should include:
```json
{
    "scope": [
        "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.insights/components/{resourceName}"
    ],
    "actionGroupCreationPolicy" : "{Auto/Custom}",
    "customActionGroupName" : "{actionGroupName}"
}
```
-For migrating all the Application Insights resources in a subscription, body.txt should include:
+
+To migrate all the Application Insights resources in a subscription, *body.txt* should include:
```json
{
    "scope": [
        "/subscriptions/{subscriptionId}"
    ],
    "actionGroupCreationPolicy" : "{Auto/Custom}",
    "customActionGroupName" : "{actionGroupName}"
}
```
-**ActionGroupCreationPolicy** selects the policy for migrating the email settings in the smart detection rules into action groups. Allowed values are:
-- **'Auto'**, which uses the default action groups as described in this document-- **'Custom'**, which creates all alert rules with the action group specified in **'customActionGroupName'**.-- *\<blank\>* - If **ActionGroupCreationPolicy** isn't specified, the 'Auto' policy is used.
+The `ActionGroupCreationPolicy` parameter selects the policy for migrating the email settings in the smart detection rules into action groups. Allowed values are:
+
+- **Auto**: Uses the default action groups as described in this document.
+- **Custom**: Creates all alert rules with the action group specified in `customActionGroupName`.
+- **\<blank\>**: If `ActionGroupCreationPolicy` isn't specified, the `Auto` policy is used.
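For example, a *body.txt* that migrates a single resource and attaches an existing action group might look like the following sketch. The subscription ID, resource group, resource, and action group names are placeholders.

```json
{
    "scope": [
        "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/microsoft.insights/components/contoso-app"
    ],
    "actionGroupCreationPolicy": "Custom",
    "customActionGroupName": "MyActionGroup"
}
```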
-### Migrate your smart detection using Azure Resource Manager templates
+### Migrate your smart detection by using ARM templates
-You can trigger the smart detection migration to alerts for a specific Application Insights resource, using Azure Resource Manager templates. Using this method you would need to:
+You can trigger the smart detection migration to alerts for a specific Application Insights resource by using ARM templates. To use this method, you need to:
-- Create a smart detection alert rule for each for the supported detectors-- Modify the Application Insight properties to indicate that the migration was completed
+- Create a smart detection alert rule for each of the supported detectors.
+- Modify the Application Insights properties to indicate that the migration was completed.
-This method allows you to control which alert rules to create, define your own alert rule name and description, and select any action group you desire for each rule.
+With this method, you can control which alert rules to create, define your own alert rule name and description, and select any action group you desire for each rule.
-The following templates should be used for this purpose (edit as needed to provide your Subscription ID, and Application Insights Resource Name)
+Use the following templates for this purpose. Edit them as needed to provide your subscription ID and Application Insights resource name.
```json
{
    ...
}
```
-## Viewing your alerts after the migration
+## View your alerts after the migration
+
+After migration, you can view your smart detection alerts by selecting the **Alerts** entry in your Application Insights resource. For **Signal type**, select **Smart Detector** to filter and present only smart detection alerts. You can select an alert to see its detection details.
-Following the migration process, you can view your smart detection alerts by selecting the Alerts entry in your Application Insights resource left-side menu. Select **Signal Type** to be **Smart Detector** to filter and present only the smart detection alerts. You can select an alert to see its detection details.
+![Screenshot that shows smart detection alerts.](media/alerts-smart-detections-migration/smart-detector-alerts.png)
-![Smart detection alerts](media/alerts-smart-detections-migration/smart-detector-alerts.png)
+You can also still see the available detections in the **Smart Detection** feed of your Application Insights resource.
-You can also still see the available detections in the smart detection feed of your Application Insights resource.
+![Screenshot that shows the Smart Detection feed.](media/alerts-smart-detections-migration/smart-detection-feed.png)
-![Smart detection feed](media/alerts-smart-detections-migration/smart-detection-feed.png)
+## Manage smart detection alert rules settings after migration
-## Managing smart detection alert rules settings after the migration
+Use the Azure portal or ARM templates to manage smart detection alert rules settings after migration.
-### Managing alert rules settings using the Azure portal
+### Manage alert rules settings by using the Azure portal
-After the migration is complete, you access the new smart detection alert rules in a similar way to other alert rules defined for the resource:
+After the migration is finished, you access the new smart detection alert rules in a similar way to other alert rules defined for the resource.
-1. Select **Alerts** under the **Monitoring** heading in your Application Insights resource left-side menu.
+1. Select **Alerts** under the **Monitoring** heading in your Application Insights resource.
- ![Alerts menu](media/alerts-smart-detections-migration/application-insights-alerts.png)
+ ![Screenshot that shows the Alerts menu.](media/alerts-smart-detections-migration/application-insights-alerts.png)
-2. Select **Manage Alert Rules**
+1. Select **Manage alert rules**.
- ![Manage alert rules](media/alerts-smart-detections-migration/manage-alert-rules.png)
+ ![Screenshot that shows Manage alert rules.](media/alerts-smart-detections-migration/manage-alert-rules.png)
-3. Select **Signal Type** to be **Smart Detector** to filter and present the smart detection alert rules.
+1. For **Signal type**, select **Smart Detector** to filter and present the smart detection alert rules.
- ![Smart Detector rules](media/alerts-smart-detections-migration/smart-detector-rules.png)
+ ![Screenshot that shows smart detection rules.](media/alerts-smart-detections-migration/smart-detector-rules.png)
-### Enabling / disabling smart detection alert rules
+### Enable or disable smart detection alert rules
-Smart detection alert rules can be enabled or disabled through the portal UI or programmatically, just like any other alert rule.
+Smart detection alert rules can be enabled or disabled through the portal UI or programmatically, like any other alert rule.
-If a specific smart detection rule was disabled before the migration, the new alert rule will be disabled as well.
+If a specific smart detection rule was disabled before the migration, the new alert rule will also be disabled.
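Programmatically, disabling a rule comes down to updating its `state` property. As a hedged sketch only, and assuming the smart detector alert rules REST API accepts a PATCH request, the body could look like this:

```json
{
  "properties": {
    "state": "Disabled"
  }
}
```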
-### Configuring action group for your alert rules
+### Configure action groups for your alert rules
-You can create and manage action groups for the new smart detection alert rules just like for any other Azure Monitor alert rule.
+You can create and manage action groups for the new smart detection alert rules like for any other Azure Monitor alert rule.
-### Managing alert rule settings using Azure Resource Manager templates
+### Manage alert rule settings by using ARM templates
-After completing the migration, you can use Azure Resource Manager templates to configure settings for smart detection alert rule settings.
+After the migration is finished, you can use ARM templates to configure settings for smart detection alert rules.
> [!NOTE]
-> After completion of migration, smart detection settings must be configured using smart detection alert rule templates, and can no longer be configured using the [Application Insights Resource Manager template](./proactive-arm-config.md#smart-detection-rule-configuration).
-
-This Azure Resource Manager template example demonstrates configuring an **Response Latency Degradation** alert rule in an **Enabled** state with a severity of 2.
-* Smart detection is a global service, therefore rule location is created in the **global** location.
-* "id" property should change according to the specific detector configured. The value must be one of:
-
- - **FailureAnomaliesDetector**
- - **RequestPerformanceDegradationDetector**
- - **DependencyPerformanceDegradationDetector**
- - **ExceptionVolumeChangedDetector**
- - **TraceSeverityDetector**
- - **MemoryLeakDetector**
+> After migration is finished, smart detection settings must be configured by using smart detection alert rule templates. They can no longer be configured by using the [Application Insights Resource Manager template](./proactive-arm-config.md#smart-detection-rule-configuration).
+
+This ARM template example demonstrates how to configure a `Response Latency Degradation` alert rule in an `Enabled` state with a severity of `2`.
+* Smart detection is a global service, so rule location is created in the `global` location.
+* The `id` property should change according to the specific detector configured. The value must be one of:
+
+ - `FailureAnomaliesDetector`
+ - `RequestPerformanceDegradationDetector`
+ - `DependencyPerformanceDegradationDetector`
+ - `ExceptionVolumeChangedDetector`
+ - `TraceSeverityDetector`
+ - `MemoryLeakDetector`
```json
{
    ...
}
```
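As a rough, hedged sketch of what such a template can contain, the following defines a single smart detection alert rule resource. The API version, `frequency` value, severity format, and all names and IDs are illustrative assumptions; the step that marks the Application Insights resource as migrated isn't shown.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.AlertsManagement/smartDetectorAlertRules",
      "apiVersion": "2021-04-01",
      "name": "Response Latency Degradation - contoso-app",
      "location": "global",
      "properties": {
        "description": "Notifies you of an unusual increase in response latency.",
        "state": "Enabled",
        "severity": "Sev2",
        "frequency": "P1D",
        "detector": { "id": "RequestPerformanceDegradationDetector" },
        "scope": [
          "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/microsoft.insights/components/contoso-app"
        ],
        "actionGroups": {
          "groupIds": [
            "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/microsoft.insights/actionGroups/application-insights-smart-detection"
          ]
        }
      }
    }
  ]
}
```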
-## Next Steps
+## Next steps
- [Learn more about alerts in Azure](./alerts-overview.md)
- [Learn more about smart detection in Application Insights](./proactive-diagnostics.md)
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
# Types of Azure Monitor alerts
-This article describes the kinds of Azure Monitor alerts you can create, and helps you understand when to use each type of alert.
+This article describes the kinds of Azure Monitor alerts you can create. It helps you understand when to use each type of alert.
-There are four types of alerts:
+The types of alerts are:
- [Metric alerts](#metric-alerts)
- [Log alerts](#log-alerts)
- [Activity log alerts](#activity-log-alerts)
- [Smart detection alerts](#smart-detection-alerts)
- [Prometheus alerts](#prometheus-alerts-preview) (preview)
-## Choosing the right alert type
+## Choose the right alert type
-This table can help you decide when to use what type of alert. For more detailed information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+The information in this table can help you decide when to use each type of alert. For more information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-|Alert Type |When to Use |Pricing Information|
+|Alert type |When to use |Pricing information|
||||
-|Metric alert|Metric data is stored in the system already pre-computed. Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. We recommend using metric alerts if the data you want to monitor is available in metric data.|Each metric alert rule is charged based on the number of time-series that are monitored. |
-|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for log alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
-|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource. Service Health alerts and Resource Health alerts can let you know when there is an issue with one of your services or resources.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
-|Prometheus alerts (preview)| Prometheus alerts are primarily used for alerting on performance and health of Kubernetes clusters (including AKS). The alert rules are based on PromQL, which is an open source query language. | There is no charge for Prometheus alerts during the preview period. |
+|Metric alert|Metric data is stored in the system already pre-computed. Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Use metric alerts if the data you want to monitor is available in metric data.|Each metric alert rule is charged based on the number of time series that are monitored. |
+|Log alert|You can use log alerts to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of Kusto Query Language (KQL) for data manipulation by using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log alerts configured for [at-scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
+|Activity log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource like a restart, a shutdown, or the creation or deletion of a resource. Service Health alerts and Resource Health alerts let you know when there's an issue with one of your services or resources.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
+|Prometheus alerts (preview)| Prometheus alerts are primarily used for alerting on performance and health of Kubernetes clusters, including Azure Kubernetes Service. The alert rules are based on PromQL, which is an open-source query language. | There's no charge for Prometheus alerts during the preview period. |
+
## Metric alerts

A metric alert rule monitors a resource by evaluating conditions on the resource metrics at regular intervals. If the conditions are met, an alert is fired. A metric time-series is a series of metric values captured over a period of time.
-You can create rules using these metrics:
+You can create rules by using these metrics:
- [Platform metrics](alerts-metric-near-real-time.md#metrics-and-dimensions-supported)
- [Custom metrics](../essentials/metrics-custom-overview.md)
- [Application Insights custom metrics](../app/api-custom-events-metrics.md)
Metric alert rules include these features:

- You can use multiple conditions on an alert rule for a single resource.
-- You can add granularity by [monitoring multiple metric dimensions](#narrow-the-target-using-dimensions). 
-- You can use [Dynamic thresholds](#dynamic-thresholds) driven by machine learning.
+- You can add granularity by [monitoring multiple metric dimensions](#narrow-the-target-by-using-dimensions).
+- You can use [dynamic thresholds](#dynamic-thresholds) driven by machine learning.
- You can configure if metric alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). Metric alerts are stateful by default.

The target of the metric alert rule can be:
-- A single resource, such as a VM. See [this article](alerts-metric-near-real-time.md) for supported resource types.
+- A single resource, such as a virtual machine (VM). For supported resource types, see [Supported resources for metric alerts in Azure Monitor](alerts-metric-near-real-time.md).
+- [Multiple resources](#monitor-multiple-resources) of the same type in the same Azure region, such as a resource group.

### Multiple conditions
-When you create an alert rule for a single resource, you can apply multiple conditions. For example, you could create an alert rule to monitor an Azure virtual machine and alert when both "Percentage CPU is higher than 90%" and "Queue length is over 300 items". When an alert rule has multiple conditions, the alert fires when all the conditions in the alert rule are true and is resolved when at least one of the conditions is no longer true for three consecutive checks.
-### Narrow the target using Dimensions
+When you create an alert rule for a single resource, you can apply multiple conditions. For example, you could create an alert rule to monitor an Azure VM and alert when both "Percentage CPU is higher than 90%" and "Queue length is over 300 items." When an alert rule has multiple conditions, the alert fires when all the conditions in the alert rule are true. It's resolved when at least one of the conditions is no longer true for three consecutive checks.
+
+### Narrow the target by using dimensions
+
+Dimensions are name-value pairs that contain more data about the metric value. When you use dimensions, you can filter metrics and monitor specific time-series instead of monitoring the aggregate of all the dimensional values.
-Dimensions are name-value pairs that contain more data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
-For example, the Transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). You can choose to have an alert fired when there's a high number of transactions in any API name (which is the aggregated data), or you can use dimensions to further break it down to alert only when the number of transactions is high for specific API names.
-If you use more than one dimension, the metric alert rule can monitor multiple dimension values from different dimensions of a metric.
-The alert rule separately monitors all the dimensions value combinations.
-See [this article](alerts-metric-multiple-time-series-single-rule.md) for detailed instructions on using dimensions in metric alert rules.
+For example, the Transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction like GetBlob, DeleteBlob, and PutPage. You can choose to have an alert fired when there's a high number of transactions in any API name, which is the aggregated data. Or you can use dimensions to further break it down to alert only when the number of transactions is high for specific API names.
-### Create resource-centric alerts using splitting by dimensions
+If you use more than one dimension, the metric alert rule can monitor multiple dimension values from different dimensions of a metric. The alert rule separately monitors all the dimension value combinations.
-To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations. Splitting on Azure resource ID column makes the specified resource into the alert target.
+For instructions on using dimensions in metric alert rules, see [Monitor multiple time series in a single metric alert rule](alerts-metric-multiple-time-series-single-rule.md).
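As a hedged illustration of the storage example, a single static criterion in a metric alert rule (`Microsoft.Insights/metricAlerts`) that filters the Transactions metric by API name could be sketched like this. The namespace, threshold, and dimension values are placeholders.

```json
{
  "criterionType": "StaticThresholdCriterion",
  "name": "HighTransactionsPerApi",
  "metricNamespace": "Microsoft.Storage/storageAccounts",
  "metricName": "Transactions",
  "dimensions": [
    { "name": "ApiName", "operator": "Include", "values": [ "GetBlob", "PutPage" ] }
  ],
  "operator": "GreaterThan",
  "threshold": 1000,
  "timeAggregation": "Total"
}
```

With this criterion, the rule monitors the GetBlob and PutPage time series separately instead of the aggregate across all API names.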
-You may also decide not to split when you want a condition applied to multiple resources in the scope. For example, if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
+### Create resource-centric alerts by using splitting by dimensions
+
+To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. When you use splitting by dimensions, you can create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations. Splitting on an Azure resource ID column makes the specified resource into the alert target.
+
+You might also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
### Monitor multiple resources
The platform metrics for these services in the following Azure clouds are supported:
| Service | Global Azure | Government | China |
|:--|:-|:--|:--|
-| Virtual machines* | Yes |Yes | Yes |
-| SQL server databases | Yes | Yes | Yes |
-| SQL server elastic pools | Yes | Yes | Yes |
+| Virtual machines | Yes |Yes | Yes |
+| SQL Server databases | Yes | Yes | Yes |
+| SQL Server elastic pools | Yes | Yes | Yes |
| NetApp files capacity pools | Yes | Yes | Yes |
| NetApp files volumes | Yes | Yes | Yes |
-| Key vaults | Yes | Yes | Yes |
+| Azure Key Vault | Yes | Yes | Yes |
| Azure Cache for Redis | Yes | Yes | Yes |
| Azure Stack Edge devices | Yes | Yes | Yes |
| Recovery Services vaults | Yes | No | No |
-| Azure Database for PostgreSQL - Flexible Servers | Yes | Yes | Yes |
+| Azure Database for PostgreSQL - Flexible Server | Yes | Yes | Yes |
> [!NOTE]
- > Multi-resource metric alerts are not supported for the following scenarios:
- > - Alerting on virtual machines' guest metrics
- > - Alerting on virtual machines' network metrics (Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, Outbound Flows Maximum Creation Rate).
+ > Multi-resource metric alerts aren't supported for:
+ > - Alerting on VM guest metrics.
+ > - Alerting on VM network metrics (Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, and Outbound Flows Maximum Creation Rate).
-You can specify the scope of monitoring with a single metric alert rule in one of three ways. For example, with virtual machines you can specify the scope as:
+You can specify the scope of monitoring with a single metric alert rule in one of three ways. For example, with VMs you can specify the scope as:
-- a list of virtual machines (in one Azure region) within a subscription-- all virtual machines (in one Azure region) in one or more resource groups in a subscription-- all virtual machines (in one Azure region) in a subscription
+- A list of VMs in one Azure region within a subscription.
+- All VMs in one Azure region in one or more resource groups in a subscription.
+- All VMs in one Azure region in a subscription.
### Dynamic thresholds
-Dynamic thresholds use advanced machine learning (ML) to:
-- Learn the historical behavior of metrics-- Identify patterns and adapt to metric changes over time, such as hourly, daily or weekly patterns. -- Recognize anomalies that indicate possible service issues-- Calculate the most appropriate threshold for the metric
+Dynamic thresholds use advanced machine learning to:
+- Learn the historical behavior of metrics.
+- Identify patterns and adapt to metric changes over time, such as hourly, daily, or weekly patterns.
+- Recognize anomalies that indicate possible service issues.
+- Calculate the most appropriate threshold for the metric.
-Machine Learning continuously uses new data to learn more and make the threshold more accurate. Because the system adapts to the metrics' behavior over time, and alerts based on deviations from its pattern, you don't have to know the "right" threshold for each metric.
+Machine learning continuously uses new data to learn more and make the threshold more accurate. Because the system adapts to the metrics' behavior over time, and alerts based on deviations from its pattern, you don't have to know the "right" threshold for each metric.
Dynamic thresholds help you: - Create scalable alerts for hundreds of metric series with one alert rule. If you have fewer alert rules, you spend less time creating and managing alert rules.-- Create rules without having to know what threshold to configure-- Configure up metric alerts using high-level concepts without extensive domain knowledge about the metric-- Prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern
+- Create rules without having to know what threshold to configure.
+- Configure metric alerts by using high-level concepts without extensive domain knowledge about the metric.
+- Prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern.
- Handle noisy metrics (such as machine CPU or memory) and metrics with low dispersion (such as availability and error rate).
-See [this article](alerts-dynamic-thresholds.md) for detailed instructions on using dynamic thresholds in metric alert rules.
+For instructions on using dynamic thresholds in metric alert rules, see [Dynamic thresholds in metric alerts](alerts-dynamic-thresholds.md).
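For orientation, a dynamic threshold replaces the fixed `threshold` value in a metric alert criterion with sensitivity and failing-period settings. A hedged sketch of such a criterion, with illustrative values:

```json
{
  "criterionType": "DynamicThresholdCriterion",
  "name": "CpuAnomaly",
  "metricName": "Percentage CPU",
  "operator": "GreaterOrLessThan",
  "alertSensitivity": "Medium",
  "failingPeriods": {
    "numberOfEvaluationPeriods": 4,
    "minFailingPeriodsToAlert": 3
  },
  "timeAggregation": "Average"
}
```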
## Log alerts
A log alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. If the conditions are met, an alert is fired.
The target of the log alert rule can be: - A single resource, such as a VM. - A single container of resources, like a resource group or subscription.-- Multiple resources using [cross-resource query](../logs/cross-workspace-query.md).
+- Multiple resources that use a [cross-resource query](../logs/cross-workspace-query.md).
Log alerts can measure two different things, which can be used for different monitoring scenarios:
-- Table rows: The number of rows returned can be used to work with events such as Windows event logs, syslog, application exceptions.
-- Calculation of a numeric column: Calculations based on any numeric column can be used to include any number of resources. For example, CPU percentage.
+- **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions.
+- **Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage.
-You can configure if log alerts are [stateful or stateless](alerts-overview.md#alerts-and-state) (currently in preview).
+You can configure if log alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). This feature is currently in preview.
> [!NOTE]
-> Log alerts work best when you are trying to detect specific data in the logs, as opposed to when you are trying to detect a **lack** of data in the logs. Since logs are semi-structured data, they are inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you are trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).
+> Log alerts work best when you're trying to detect specific data in the logs, as opposed to when you're trying to detect a lack of data in the logs. Because logs are semi-structured data, they're inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you're trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs by using [metric alerts for logs](alerts-metric-logs.md).
### Dimensions in log alert rules
-You can use dimensions when creating log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually notifications are sent for each instance.
+You can use dimensions when you create log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually. Notifications are sent for each instance.
### Splitting by dimensions in log alert rules
-To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations using numerical or string columns. Splitting on the Azure resource ID column makes the specified resource into the alert target.
-You may also decide not to split when you want a condition applied to multiple resources in the scope. For example, if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
+To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. When you use splitting by dimensions, you can create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations by using numerical or string columns. Splitting on the Azure resource ID column makes the specified resource into the alert target.
+
+You might also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
-### Using the API
+### Use the API
-Manage new rules in your workspaces using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API.
+Manage new rules in your workspaces by using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API.
> [!NOTE]
-> Log alerts for Log Analytics used to be managed using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md).
+> Log alerts for Log Analytics used to be managed by using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md).
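As a hedged sketch of a rule body for that API, the following monitors VM heartbeats in a workspace and splits alerts by the Computer dimension. The query, thresholds, scope, and action group are placeholders, and some optional properties are omitted.

```json
{
  "location": "eastus",
  "properties": {
    "displayName": "Low VM heartbeat",
    "severity": 2,
    "enabled": true,
    "scopes": [
      "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/Microsoft.OperationalInsights/workspaces/contoso-workspace"
    ],
    "evaluationFrequency": "PT5M",
    "windowSize": "PT15M",
    "criteria": {
      "allOf": [
        {
          "query": "Heartbeat | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 5m)",
          "timeAggregation": "Average",
          "metricMeasureColumn": "AggregatedValue",
          "dimensions": [
            { "name": "Computer", "operator": "Include", "values": [ "*" ] }
          ],
          "operator": "LessThan",
          "threshold": 1,
          "failingPeriods": { "numberOfEvaluationPeriods": 1, "minFailingPeriodsToAlert": 1 }
        }
      ]
    },
    "actions": {
      "actionGroups": [
        "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/Microsoft.Insights/actionGroups/contoso-ag"
      ]
    }
  }
}
```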
+ ## Log alerts on your Azure bill
-Log Alerts are listed under resource provider microsoft.insights/scheduledqueryrules with:
-- Log Alerts on Application Insights shown with exact resource name along with resource group and alert properties.-- Log Alerts on Log Analytics shown with exact resource name along with resource group and alert properties; when created using scheduledQueryRules API.-- Log alerts created from [legacy Log Analytics API](./api-alerts.md) aren't tracked [Azure Resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have this resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log Alerts on legacy API are shown with above hidden resource name along with resource group and alert properties.
+Log alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with:
+- Log alerts on Application Insights shown with the exact resource name along with resource group and alert properties.
+- Log alerts on Log Analytics are shown with the exact resource name along with resource group and alert properties when they're created by using the scheduledQueryRules API.
+- Log alerts created from the [legacy Log Analytics API](./api-alerts.md) aren't tracked as [Azure resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have the resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log alerts on the legacy API are shown with the preceding hidden resource name along with resource group and alert properties.
> [!Note]
-> Unsupported resource characters such as <, >, %, &, \, ?, / are replaced with _ in the hidden resource names and this will also reflect in the billing information.
+> Unsupported resource characters like <, >, %, &, \, ?, and / are replaced with an underscore (_) in the hidden resource names. This character change is also reflected in the billing information.
+ ## Activity log alerts
-An activity log alert monitors a resource by checking the activity logs for a new activity log event that matches the defined conditions.
+An activity log alert monitors a resource by checking the activity logs for a new activity log event that matches the defined conditions.
-You may want to use activity log alerts for these types of scenarios:
-- When a specific operation occurs on resources in a specific resource group or subscription. For example, you may want to be notified when:
- - Any virtual machine in a production resource group is deleted.
- - Any new roles are assigned to a user in your subscription.
-- A service health event occurs. Service health events include notifications of incidents and maintenance events that apply to resources in your subscription.
+You might want to use activity log alerts for these types of scenarios:
+- When a specific operation occurs on resources in a specific resource group or subscription. For example, you might want to be notified when:
+ - A VM in a production resource group is deleted.
+ - New roles are assigned to a user in your subscription.
+- A Service Health event occurs. Service Health events include notifications of incidents and maintenance events that apply to resources in your subscription.
You can create an activity log alert on:
- Any of the activity log [event categories](../essentials/activity-log-schema.md), other than on alert events. 
-- Any activity log event in top-level property in the JSON object.
+- Any activity log event in a top-level property in the JSON object.
-Activity log alert rules are Azure resources, so they can be created by using an Azure Resource Manager template. They also can be created, updated, or deleted in the Azure portal.
+Activity log alert rules are Azure resources, so they can be created by using an Azure Resource Manager template. They also can be created, updated, or deleted in the Azure portal.
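For example, a hedged sketch of an activity log alert rule resource that fires when a VM in the subscription is deleted might look like this. The API version, names, and action group ID are illustrative assumptions.

```json
{
  "type": "Microsoft.Insights/activityLogAlerts",
  "apiVersion": "2020-10-01",
  "name": "vm-delete-alert",
  "location": "Global",
  "properties": {
    "enabled": true,
    "scopes": [ "/subscriptions/00000000-0000-0000-0000-000000000000" ],
    "condition": {
      "allOf": [
        { "field": "category", "equals": "Administrative" },
        { "field": "operationName", "equals": "Microsoft.Compute/virtualMachines/delete" }
      ]
    },
    "actions": {
      "actionGroups": [
        { "actionGroupId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/Microsoft.Insights/actionGroups/contoso-ag" }
      ]
    }
  }
}
```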
An activity log alert only monitors events in the subscription in which the alert is created.

### Service Health alerts
-Service Health alerts are a type of activity alert. [Service Health](../../service-health/overview.md) lets you know about outages, planned maintenance activities, and other health advisories because the authenticated Service Health experience knows which services and resources you currently use.
+Service Health alerts are a type of activity alert. [Service Health](../../service-health/overview.md) lets you know about outages, planned maintenance activities, and other health advisories because the authenticated Service Health experience knows which services and resources you currently use.
-The best way to use Service Health is to set up Service Health alerts to notify you using your preferred communication channels when service issues, planned maintenance, or other changes may affect the Azure services and regions you use.
+The best way to use Service Health is to set up Service Health alerts to notify you by using your preferred communication channels when service issues, planned maintenance, or other changes might affect the Azure services and regions you use.
### Resource Health alerts
-Resource Health alerts are a type of activity alert. [Resource Health overview](../../service-health/resource-health-overview.md) helps you diagnose and get support for service problems that affect your Azure resources. It reports on the current and past health of your resources. Resource Health relies on signals from different Azure services to assess whether a resource is healthy. If a resource is unhealthy, Resource Health analyzes additional information to determine the source of the problem. It also reports on actions that Microsoft is taking to fix the problem and identifies things that you can do to address it.
+Resource Health alerts are a type of activity alert. The [Resource Health overview](../../service-health/resource-health-overview.md) helps you diagnose and get support for service problems that affect your Azure resources. It reports on the current and past health of your resources.
+
+Resource Health relies on signals from different Azure services to assess whether a resource is healthy. If a resource is unhealthy, Resource Health analyzes more information to determine the source of the problem. It also reports on actions that Microsoft is taking to fix the problem and identifies actions you can take to address it.
-## Smart Detection alerts
+## Smart detection alerts
-After setting up Application Insights for your project, when your app generates a certain minimum amount of data, Smart Detection takes 24 hours to learn the normal behavior of your app. Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others; and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies. Smart Detection monitors the data received from your app, and in particular the failure rates. Application Insights automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests.
+After you set up Application Insights for your project and your app generates a certain amount of data, smart detection takes 24 hours to learn the normal behavior of your app. Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others, and the overall failure rate might go up as load increases.
-As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If there's an abnormal rise in failure rate compared to previous performance, an analysis is triggered. To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature needs no set-up nor configuration, as it uses machine learning algorithms to predict the normal failure rate.
+Smart detection uses machine learning to find these anomalies. Smart detection monitors the data received from your app, and in particular the failure rates. Application Insights automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests.
-While metric alerts tell you there might be a problem, Smart Detection starts the diagnostic work for you, performing much of the analysis you would otherwise have to do yourself. You get the results neatly packaged, helping you to get quickly to the root of the problem.
+As data comes into Application Insights from your web app, smart detection compares the current behavior with the patterns seen over the past few days. If there's an abnormal rise in failure rate compared to previous performance, an analysis is triggered.
+
+To help you triage and diagnose a problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature doesn't need setup or configuration because it uses machine learning algorithms to predict the normal failure rate.
+
+Although metric alerts tell you there might be a problem, smart detection starts the diagnostic work for you. It performs much of the analysis you would otherwise have to do yourself. You get the results neatly packaged, which helps you to quickly get to the root of the problem.
Smart detection works for web apps hosted in the cloud or on your own servers that generate application requests or dependency data.

## Prometheus alerts (preview)
-Prometheus alerts are based on metric values stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). They fire when the results of a PromQL query resolves to true. Prometheus alerts are displayed and managed like other alert types when they fire, but they are configured with a Prometheus rule group. See [Rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md) for details.
+Prometheus alerts are based on metric values stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). They fire when the result of a PromQL query resolves to true. Prometheus alerts are displayed and managed like other alert types when they fire, but they're configured with a Prometheus rule group. For more information, see [Rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
## Next steps
- Get an [overview of alerts](alerts-overview.md).
- [Create an alert rule](alerts-log.md).
-- Learn more about [Smart Detection](proactive-failure-diagnostics.md).-
+- Learn more about [smart detection](proactive-failure-diagnostics.md).
azure-monitor Itsmc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md
Last updated 04/28/2022
-# IT Service Management (ITSM) Integration
+# IT Service Management integration
:::image type="icon" source="media/itsmc-overview/itsmc-symbol.png":::

This article describes how you can integrate Azure Monitor with supported IT Service Management (ITSM) products.
-Azure services like Azure Log Analytics and Azure Monitor provide tools to detect, analyze, and troubleshoot problems with your Azure and non-Azure resources. But the work items related to an issue typically reside in an ITSM product or service.
+Azure services like Log Analytics and Azure Monitor provide tools to detect, analyze, and troubleshoot problems with your Azure and non-Azure resources. But the work items related to an issue typically reside in an ITSM product or service.
-Azure Monitor provides a bi-directional connection between Azure and ITSM tools to help you resolve issues faster. You can create work items in your ITSM tool, based on your Azure alerts (Metric Alerts, Activity Log Alerts, and Log Analytics alerts).
+Azure Monitor provides a bidirectional connection between Azure and ITSM tools to help you resolve issues faster. You can create work items in your ITSM tool based on your Azure metric alerts, activity log alerts, and Log Analytics alerts.
Azure Monitor supports connections with the following ITSM tools: -- ServiceNow ITSM or ITOM-- BMC
+- ServiceNow ITSM or IT Operations Management (ITOM)
+- BMC
-For information about legal terms and the privacy policy, see [Microsoft Privacy Statement](https://go.microsoft.com/fwLink/?LinkID=522330&clcid=0x9).
-## ITSM Integration Workflow
-Depending on your integration, start connecting to your ITSM with these steps:
+For information about legal terms and the privacy policy, see the [Microsoft privacy statement](https://go.microsoft.com/fwLink/?LinkID=522330&clcid=0x9).
-- For Service Now ITOM events or BMC Helix use the Secure webhook action:
+## ITSM integration workflow
+Depending on your integration, start connecting to your ITSM tool with these steps:
- 1. [Register your app with Azure AD](./itsm-connector-secure-webhook-connections-azure-configuration.md#register-with-azure-active-directory).
- 1. [Define a Service principal](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-service-principal).
- 1. [Create a Secure Webhook action group](./itsm-connector-secure-webhook-connections-azure-configuration.md#create-a-secure-webhook-action-group).
+- For ServiceNow ITOM events or BMC Helix, use the secure webhook action:
+
+ 1. [Register your app with Azure Active Directory](./itsm-connector-secure-webhook-connections-azure-configuration.md#register-with-azure-active-directory).
+ 1. [Define a service principal](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-service-principal).
+ 1. [Create a secure webhook action group](./itsm-connector-secure-webhook-connections-azure-configuration.md#create-a-secure-webhook-action-group).
   1. Configure your partner environment. Secure Export supports connections with the following ITSM tools:
      - [ServiceNow ITOM](./itsmc-secure-webhook-connections-servicenow.md)
      - [BMC Helix](./itsmc-secure-webhook-connections-bmc.md)

- For ServiceNow ITSM, use the ITSM action:
- 1. Connect to your ITSM. See [the ServiceNow connection instructions](./itsmc-connections-servicenow.md).
- 1. (Optional) Set up the IP Ranges. In order to list the ITSM IP addresses to allow ITSM connections from partner ITSM tools, we recommend listing the whole public IP range of Azure region where their LogAnalytics workspace belongs. [See details here](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/EUS2/WUS2/US South Central the customer can list ActionGroup network tag only.
- 1. [Configure your Azure ITSM Solution and create the ITSM connection](./itsmc-definition.md#install-it-service-management-connector).
- 1. [Configure Action Group to leverage ITSM connector](./itsmc-definition.md#define-a-template).
+ 1. Connect to your ITSM. For more information, see the [ServiceNow connection instructions](./itsmc-connections-servicenow.md).
+ 1. (Optional) Set up the IP ranges. To list the ITSM IP addresses to allow ITSM connections from partner ITSM tools, list the whole public IP range of an Azure region where the Log Analytics workspace belongs. For more information, see the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/EUS2/WUS2/US South Central, the customer can list the ActionGroup network tag only.
+ 1. [Configure your Azure ITSM solution and create the ITSM connection](./itsmc-definition.md#install-it-service-management-connector).
+ 1. [Configure an action group to use the ITSM connector](./itsmc-definition.md#define-a-template).
## Next steps
-- [ServiceNow connection instructions](./itsmc-connections-servicenow.md).
+[ServiceNow connection instructions](./itsmc-connections-servicenow.md)
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
Each workspace can have multiple accounts associated with it. Each account can have access to multiple workspaces.
| Read the workspace keys to allow sending logs to this workspace. | `Microsoft.OperationalInsights/workspaces/sharedKeys/action` |
| Add and remove monitoring solutions. | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/*` <br> `Microsoft.OperationsManagement/*` <br> `Microsoft.Automation/*` <br> `Microsoft.Resources/deployments/*/write`<br><br>These permissions need to be granted at resource group or subscription level. |
| View data in the **Backup** and **Site Recovery** solution tiles. | Administrator/Co-administrator<br><br>Accesses resources deployed by using the classic deployment model. |
+| Run a search job. | `Microsoft.OperationalInsights/workspaces/tables/write` <br> `Microsoft.OperationalInsights/workspaces/searchJobs/write`|
+| Restore data from archived table. | `Microsoft.OperationalInsights/workspaces/tables/write` <br> `Microsoft.OperationalInsights/workspaces/restoreLogs/write`|
### Built-in roles
The `/read` permission is usually granted from a role that includes _\*/read or_ _\*_ permissions, such as the built-in Reader and Contributor roles.
In addition to using the built-in roles for a Log Analytics workspace, you can create custom roles to assign more granular permissions. Here are some common examples.
-**Example 1: Grant a user access to log data from their resources.**
+**Example 1: Grant a user permission to read log data from their resources.**
- Configure the workspace access control mode to *use workspace or resource permissions*. - Grant users `*/read` or `Microsoft.Insights/logs/*/read` permissions to their resources. If they're already assigned the [Log Analytics Reader](../../role-based-access-control/built-in-roles.md#reader) role on the workspace, it's sufficient.
-**Example 2: Grant a user access to log data from their resources and configure their resources to send logs to the workspace.**
+
+**Example 2: Grant a user permission to read log data from their resources and run a search job.**
+
+- Configure the workspace access control mode to *use workspace or resource permissions*.
+- Grant users `*/read` or `Microsoft.Insights/logs/*/read` permissions to their resources. If they're already assigned the [Log Analytics Reader](../../role-based-access-control/built-in-roles.md#reader) role on the workspace, it's sufficient.
+- Grant users the following permissions on the workspace:
+ - `Microsoft.OperationalInsights/workspaces/tables/write`: Required to be able to create the search results table (_SRCH).
+ - `Microsoft.OperationalInsights/workspaces/searchJobs/write`: Required to allow executing the search job operation.
++
+**Example 3: Grant a user permission to read log data from their resources and configure their resources to send logs to the Log Analytics workspace.**
- Configure the workspace access control mode to *use workspace or resource permissions*.
- Grant users the following permissions on the workspace: `Microsoft.OperationalInsights/workspaces/read` and `Microsoft.OperationalInsights/workspaces/sharedKeys/action`. With these permissions, users can't perform any workspace-level queries. They can only enumerate the workspace and use it as a destination for diagnostic settings or agent configuration.
- Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read` and `Microsoft.Insights/diagnosticSettings/write`. If they're already assigned the [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#contributor) role, assigned the Reader role, or granted `*/read` permissions on this resource, it's sufficient.
-**Example 3: Grant a user access to log data from their resources without being able to read security events and send data.**
+**Example 4: Grant a user permission to read log data from their resources, but not to send logs to the Log Analytics workspace or read security events.**
- Configure the workspace access control mode to *use workspace or resource permissions*.
- Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read`.
- Add the following NonAction to block users from reading the SecurityEvent type: `Microsoft.Insights/logs/SecurityEvent/read`. The NonAction shall be in the same custom role as the action that provides the read permission (`Microsoft.Insights/logs/*/read`). If the user inherits the read action from another role that's assigned to this resource or to the subscription or resource group, they could read all log types. This scenario is also true if they inherit `*/read` that exists, for example, with the Reader or Contributor role.
-**Example 4: Grant a user access to log data from their resources and read all Azure AD sign-in and read Update Management solution log data from the workspace.**
+**Example 5: Grant a user permission to read log data from their resources and all Azure AD sign-in and read Update Management solution log data in the Log Analytics workspace.**
- Configure the workspace access control mode to *use workspace or resource permissions*.
- Grant users the following permissions on the workspace:
  - `Microsoft.OperationalInsights/workspaces/query/ComputerGroup/read`: Required to be able to use Update Management solutions
- Grant users the following permissions to their resources: `*/read`, assigned to the Reader role, or `Microsoft.Insights/logs/*/read`
+**Example 6: Restrict a user from restoring archived logs.**
+
+- Configure the workspace access control mode to *use workspace or resource permissions*.
+- Assign the user to the [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#contributor) role.
+- Add the following NonAction to block users from restoring archived logs: `Microsoft.OperationalInsights/workspaces/restoreLogs/write`
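A hedged sketch of a custom role definition for this example follows. The `Actions` list is intentionally simplified and isn't the full Log Analytics Contributor definition; adjust it and the assignable scope to your environment.

```json
{
  "Name": "Log Analytics Contributor - No Restore",
  "IsCustom": true,
  "Description": "Grants Log Analytics contributor-style access without permission to restore archived logs.",
  "Actions": [
    "*/read",
    "Microsoft.OperationalInsights/*"
  ],
  "NotActions": [
    "Microsoft.OperationalInsights/workspaces/restoreLogs/write"
  ],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```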
++
## Set table-level read access

[Azure custom roles](../../role-based-access-control/custom-roles.md) let you grant specific users or groups access to specific tables in the workspace. Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode).
azure-web-pubsub Quickstart Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-bicep-template.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Review the Bicep file
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-web-pubsub/).
+The template used in this quickstart is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/azure-web-pubsub/).
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.web/azure-web-pubsub/main.bicep":::
Remove-AzResourceGroup -Name exampleRG
For a step-by-step tutorial that guides you through the process of creating a Bicep file using Visual Studio Code, see: > [!div class="nextstepaction"]
-> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
+> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
azure-web-pubsub Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/resource-faq.md
Azure Web PubSub service is more suitable for situations where:
## Where does my data reside?
-Azure Web PubSub service works as a data processor service. It won't store any customer content, and data residency is included by design. If you use Azure Web PubSub service together with other Azure services, like Azure Storage for diagnostics, see [this white paper](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/) for guidance about how to keep data residency in Azure regions.
+Azure Web PubSub service works as a data processor service and doesn't store any customer content. Azure Web PubSub service processes customer data within the region the customer deploys the service instance in. If you use Azure Web PubSub service together with other Azure services, like Azure Storage for diagnostics, see [this white paper](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/) for guidance about how to keep data residency in Azure regions.
cognitive-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md
Batch synthesis properties are described in the following table.
|`synthesisConfig`|The configuration settings to use for batch synthesis of plain text.<br/><br/>This property is only applicable when `textType` is set to `"PlainText"`.| |`synthesisConfig.pitch`|The pitch of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.| |`synthesisConfig.rate`|The rate of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.style`|For some voices, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant.<br/><br/>For information about the available styles per voice, see [voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.voice`|The voice that speaks the audio output.<br/><br/>For information about the available prebuilt neural voices, see [language and voice support](language-support.md?tabs=stt-tts). To use a custom voice, you must specify a valid custom voice and deployment ID mapping in the `customVoices` property.<br/><br/>This property is required when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.style`|For some voices, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant.<br/><br/>For information about the available styles per voice, see [voice styles and roles](language-support.md?tabs=tts#voice-styles-and-roles).<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.voice`|The voice that speaks the audio output.<br/><br/>For information about the available prebuilt neural voices, see [language and voice support](language-support.md?tabs=tts). To use a custom voice, you must specify a valid custom voice and deployment ID mapping in the `customVoices` property.<br/><br/>This property is required when `textType` is set to `"PlainText"`.|
|`synthesisConfig.volume`|The volume of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.| |`textType`|Indicates whether the `inputs` text property should be plain text or SSML. The possible case-insensitive values are "PlainText" and "SSML". When the `textType` is set to `"PlainText"`, you must also set the `synthesisConfig` voice property.<br/><br/>This property is required.|
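As a sketch of how these properties fit together, the following `curl` call submits a plain-text batch synthesis job. The endpoint placeholder stands in for the batch synthesis URL described earlier in the article, and the voice, style, and prosody values are illustrative assumptions:

```console
# submit a plain-text batch synthesis job (endpoint, key, and property values are placeholders)
curl -X POST "<your-batch-synthesis-endpoint>" \
  -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "displayName": "batch synthesis sample",
    "textType": "PlainText",
    "inputs": [ { "text": "The rainbow has seven colors." } ],
    "synthesisConfig": {
      "voice": "en-US-JennyNeural",
      "style": "cheerful",
      "rate": "+10.00%",
      "volume": "+20.00%"
    }
  }'
```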
cognitive-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/conversation-transcription.md
Audio data is processed live to return the speaker identifier and transcript, an
## Language support
-Currently, conversation transcription supports [all speech-to-text languages](language-support.md?tabs=stt-tts) in the following regions: `centralus`, `eastasia`, `eastus`, `westeurope`.
+Currently, conversation transcription supports [all speech-to-text languages](language-support.md?tabs=stt) in the following regions: `centralus`, `eastasia`, `eastus`, `westeurope`.
## Next steps
cognitive-services Custom Neural Voice Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice-lite.md
Speech Studio provides two Custom Neural Voice (CNV) project types: CNV Lite and
With a CNV Lite project, you record your voice online by reading 20-50 pre-defined scripts provided by Microsoft. After you've recorded at least 20 samples, you can start to train a model. Once the model is trained successfully, you can review the model and check out 20 output samples produced with another set of pre-defined scripts.
-See the [supported languages](language-support.md?tabs=stt-tts) for Custom Neural Voice.
+See the [supported languages](language-support.md?tabs=tts) for Custom Neural Voice.
## Compare project types
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Custom Neural Voice (CNV) is a text-to-speech feature that lets you create a one
> [!IMPORTANT] > Custom Neural Voice access is [limited](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
-Out of the box, [text-to-speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=stt-tts). The prebuilt neural voices work very well in most text-to-speech scenarios if a unique voice isn't required.
+Out of the box, [text-to-speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=tts). The prebuilt neural voices work very well in most text-to-speech scenarios if a unique voice isn't required.
-Custom Neural Voice is based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model. You can create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=stt-tts) for Custom Neural Voice.
+Custom Neural Voice is based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model. You can create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=tts) for Custom Neural Voice.
## How does it work?
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
With Custom Speech, you can evaluate and improve the Microsoft speech-to-text accuracy for your applications and products.
-Out of the box, speech to text utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt-tts) is used by default. The base model works very well in most speech recognition scenarios.
+Out of the box, speech to text utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios.
A custom model can be used to augment the base model to improve recognition of vocabulary specific to the application's domain by providing text data to train the model. It can also be used to improve recognition based on the specific audio conditions of the application by providing audio data with reference transcriptions.
cognitive-services Direct Line Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/direct-line-speech.md
Sample code for creating a voice assistant is available on GitHub. These samples
Voice assistants built using Speech service can use the full range of customization options available for [speech-to-text](speech-to-text.md), [text-to-speech](text-to-speech.md), and [custom keyword selection](./custom-keyword-basics.md). > [!NOTE]
-> Customization options vary by language/locale (see [Supported languages](./language-support.md?tabs=stt-tts)).
+> Customization options vary by language/locale (see [Supported languages](./language-support.md?tabs=stt)).
Direct Line Speech and its associated functionality for voice assistants are an ideal supplement to the [Virtual Assistant Solution and Enterprise Template](/azure/bot-service/bot-builder-enterprise-template-overview). Though Direct Line Speech can work with any compatible bot, these resources provide a reusable baseline for high-quality conversational experiences as well as common supporting skills and models to get started quickly.
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-
- No-code approach: You can use the Audio Content Creation tool for text-to-speech synthesis without writing any code. The output audio might be the final deliverable that you want. For example, you can use the output audio for a podcast or a video narration. - Developer-friendly: You can listen to the output audio and adjust the SSML to improve speech synthesis. Then you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-basics.md) to integrate the SSML into your applications. For example, you can use the SSML for building a chat bot.
-You have easy access to a broad portfolio of [languages and voices](language-support.md?tabs=stt-tts). These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you've built one.
+You have easy access to a broad portfolio of [languages and voices](language-support.md?tabs=tts). These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you've built one.
To learn more, view the Audio Content Creation tutorial video [on YouTube](https://youtu.be/ygApYuOOG6w).
Each step in the preceding diagram is described here:
1. Choose the Speech resource you want to work with. 1. [Create an audio tuning file](#create-an-audio-tuning-file) by using plain text or SSML scripts. Enter or upload your content into Audio Content Creation.
-1. Choose the voice and the language for your script content. Audio Content Creation includes all of the [prebuilt text-to-speech voices](language-support.md?tabs=stt-tts). You can use prebuilt neural voices or a custom neural voice.
+1. Choose the voice and the language for your script content. Audio Content Creation includes all of the [prebuilt text-to-speech voices](language-support.md?tabs=tts). You can use prebuilt neural voices or a custom neural voice.
> [!NOTE] > Gated access is available for Custom Neural Voice, which allows you to create high-definition voices that are similar to natural-sounding speech. For more information, see [Gating process](./text-to-speech.md).
cognitive-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-create-project.md
zone_pivot_groups: speech-studio-cli-rest
# Create a Custom Speech project
-Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt-tts). For example, you might create a project for English in the United States.
+Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt). For example, you might create a project for English in the United States.
## Create a project
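For instance, a locale-specific project can be created through the Speech-to-text REST API. This sketch assumes the v3.1 `Projects_Create` operation; the region and key are placeholders:

```console
# create a Custom Speech project pinned to the en-US locale
curl -X POST "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.1/projects" \
  -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
  -H "Content-Type: application/json" \
  -d '{ "displayName": "My Project", "description": "Custom Speech project", "locale": "en-US" }'
```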
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
When a custom model or base model expires, it is no longer available for transcr
|Transcription route |Expired model result |Recommendation | ||||
-|Custom endpoint|Speech recognition requests will fall back to the most recent base model for the same [locale](language-support.md?tabs=stt-tts). You will get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) guide. |
+|Custom endpoint|Speech recognition requests will fall back to the most recent base model for the same [locale](language-support.md?tabs=stt). You will get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) guide. |
|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models will fail with a 4xx error. |In each [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) REST API request body, set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. |
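As a sketch, a `Transcriptions_Create` request that pins a non-expired model might look like the following; the region, key, audio URL, and model ID are placeholders:

```console
# create a batch transcription that references an explicit (non-expired) model
curl -X POST "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
  -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "displayName": "My transcription",
    "locale": "en-US",
    "contentUrls": [ "https://<storage-account>.blob.core.windows.net/audio/sample.wav" ],
    "model": { "self": "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.1/models/<model-id>" }
  }'
```

Omit the `model` property to use the latest base model.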
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
You can use audio + human-labeled transcript data for both [training](how-to-cus
- To improve the acoustic aspects like slight accents, speaking styles, and background noises. - To measure the accuracy of Microsoft's speech-to-text accuracy when it's processing your audio files.
-For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt-tts). Even if a base model does support training with audio data, the service might use only part of the audio. And it will still use all the transcripts.
+For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt). Even if a base model does support training with audio data, the service might use only part of the audio. And it will still use all the transcripts.
> [!IMPORTANT] > If a base model doesn't support customization with audio data, only the transcription text will be used for training. If you switch to a base model that supports customization with audio data, the training time may increase from several hours to several days. The change in training time would be most noticeable when you switch to a base model in a [region](regions.md#speech-service) without dedicated hardware for training. If the audio data is not required, you should remove it to decrease the training time.
Expected utterances often follow a certain pattern. One common pattern is that u
* "I have a question about `product`," where `product` is a list of possible products. * "Make that `object` `color`," where `object` is a list of geometric shapes and `color` is a list of colors.
-For a list of supported base models and locales for training with structured text, see [Language support](language-support.md?tabs=stt-tts). You must use the latest base model for these locales. For locales that don't support training with structured text, the service will take any training sentences that don't reference any classes as part of training with plain-text data.
+For a list of supported base models and locales for training with structured text, see [Language support](language-support.md?tabs=stt). You must use the latest base model for these locales. For locales that don't support training with structured text, the service will take any training sentences that don't reference any classes as part of training with plain-text data.
The structured-text file should have an .md extension. The maximum file size is 200 MB, and the text encoding must be UTF-8 BOM. The syntax of the Markdown is the same as that from the Language Understanding models, in particular list entities and example utterances. For more information about the complete Markdown syntax, see the <a href="/azure/bot-service/file-format/bot-builder-lu-file-format" target="_blank"> Language Understanding Markdown</a>.
Here's an example structured text file:
Specialized or made up words might have unique pronunciations. These words can be recognized if they can be broken down into smaller words to pronounce them. For example, to recognize "Xbox", pronounce it as "X box". This approach won't increase overall accuracy, but can improve recognition of this and other keywords.
-You can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see [language support](language-support.md?tabs=stt-tts).
+You can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see [language support](language-support.md?tabs=stt).
> [!NOTE] > You can use a pronunciation file alongside any other training dataset except structured text training data. To use pronunciation data with structured text, it must be within a structured text file.
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
After you validate your data files, you can use them to build your Custom Neural
- [Neural](?tabs=neural#train-your-custom-neural-voice-model): To create a voice in the same language as your training data, select the **Neural** method. -- [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) (Preview): Create a secondary language for your voice model to speak a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US`. The language of the training data and the target language must both be one of the [languages that are supported](language-support.md?tabs=stt-tts) for cross lingual voice training. You don't need to prepare training data in the target language, but your test script must be in the target language.
+- [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) (Preview): Create a secondary language for your voice model to speak a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US`. The language of the training data and the target language must both be one of the [languages that are supported](language-support.md?tabs=tts) for cross lingual voice training. You don't need to prepare training data in the target language, but your test script must be in the target language.
- [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model) (Preview): Create a custom neural voice that speaks in multiple styles and emotions, without adding new training data. Multi-style voices are particularly useful for video game characters, conversational chatbots, audiobooks, content readers, and more. To create a multi-style voice, you just need to prepare a set of general training data (at least 300 utterances), and select one or more of the preset target speaking styles. You can also create up to 10 custom styles by providing style samples (at least 100 utterances per style) as additional training data for the same voice.
-The language of the training data must be one of the [languages that are supported](language-support.md?tabs=stt-tts) for custom neural voice neural, cross-lingual, or multi-style training.
+The language of the training data must be one of the [languages that are supported](language-support.md?tabs=tts) for Custom Neural Voice training (neural, cross-lingual, or multi-style).
## Train your Custom Neural Voice model
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
Content for [Custom Neural Voice](https://aka.ms/customvoice) like data, models,
> [!TIP] > Try [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
-All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md?tabs=stt-tts) and [region](regions.md#speech-service).
+All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md?tabs=tts) and [region](regions.md#speech-service).
## Create a Custom Neural Voice Pro project
cognitive-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-prebuilt-neural-voice.md
# Migrate from prebuilt standard voice to prebuilt neural voice > [!IMPORTANT]
-> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=stt-tts). After August 31, the standard voices won't be supported with any Speech resource.
+> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, the standard voices won't be supported with any Speech resource.
The prebuilt neural voice provides more natural sounding speech output, and thus, a better end-user experience.
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Get facial position with viseme > [!NOTE]
-> Viseme ID supports neural voices in [all viseme-supported locales](language-support.md?tabs=stt-tts). Scalable Vector Graphics (SVG) only supports neural voices in `en-US` locale, and blend shapes supports neural voices in `en-US` and `zh-CN` locales.
+> Viseme ID supports neural voices in [all viseme-supported locales](language-support.md?tabs=tts). Scalable Vector Graphics (SVG) only supports neural voices in `en-US` locale, and blend shapes supports neural voices in `en-US` and `zh-CN` locales.
A *viseme* is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. Each viseme depicts the key facial poses for a specific set of phonemes.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following tables summarize language support for [speech-to-text](speech-to-text.md), [text-to-speech](text-to-speech.md), [pronunciation assessment](how-to-pronunciation-assessment.md), [speech translation](speech-translation.md), [speaker recognition](speaker-recognition-overview.md), and additional service features.
+You can also get a list of locales and voices supported for each specific region or endpoint through the [Speech SDK](speech-sdk.md), [Speech-to-text REST API](rest-speech-to-text.md), [Speech-to-text REST API for short audio](rest-speech-to-text-short.md), and [Text-to-speech REST API](rest-text-to-speech.md#get-a-list-of-voices).
+ ## Supported languages Language support varies by Speech service functionality. **Choose a Speech feature**
-# [Speech-to-text and Text-to-speech](#tab/stt-tts)
+# [Speech-to-text](#tab/stt)
-The table in this section summarizes the locales and voices supported for Speech-to-text and Text-to-speech. Please see the table footnotes for more details.
+The table in this section summarizes the locales and voices supported for Speech-to-text. Please see the table footnotes for more details.
-Additional remarks for Speech-to-text locales are included in the [Custom Speech](#custom-speech) section below. Additional remarks for Text-to-speech locales are included in the [Prebuilt neural voices](#prebuilt-neural-voices), [Voice styles and roles](#voice-styles-and-roles), and [Custom Neural Voice](#custom-neural-voice) sections below.
+Additional remarks for Speech-to-text locales are included in the [Custom Speech](#custom-speech) section below.
### Custom Speech To improve Speech-to-text recognition accuracy, customization is available for some languages and base models. Depending on the locale, you can upload audio + human-labeled transcripts, plain text, structured text, and pronunciation data. By default, plain text customization is supported for all available base models. To learn more about customization, see [Custom Speech](./custom-speech-overview.md).
-### Prebuilt neural voices
-
-Each prebuilt neural voice supports a specific language and dialect, identified by locale. You can try the demo and hear the voices on [this website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
-
-> [!IMPORTANT]
-> Pricing varies for Prebuilt Neural Voice (see *Neural* on the pricing page) and Custom Neural Voice (see *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
+# [Text-to-speech](#tab/tts)
-Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
+The tables in this section summarize the locales and voices supported for Text-to-speech. Please see the table footnotes for more details.
-Please note that the following neural voices are retired.
+Additional remarks for Text-to-speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below.
-- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will not succeed.-- The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria." ### Voice styles and roles
Use the following table to determine supported styles and roles for each neural
[!INCLUDE [Language support include](includes/language-support/voice-styles-and-roles.md)]
+### Prebuilt neural voices
+
+Each prebuilt neural voice supports a specific language and dialect, identified by locale. You can try the demo and hear the voices on [this website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
+
+> [!IMPORTANT]
+> Pricing varies for Prebuilt Neural Voice (see *Neural* on the pricing page) and Custom Neural Voice (see *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
+
+Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
+
+The following neural voices are retired.
+
+- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will not succeed.
+- The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria."
+ ### Custom Neural Voice Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data. There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (preview).
Select the right locale that matches your training data to train a custom neural
With the cross-lingual feature (preview), you can transfer your custom neural voice model to speak a second language. For example, with the `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages with Cross-lingual support.
-### Get locales via API and SDK
-
-You can also get a list of locales and voices supported for each specific region or endpoint through the [Speech SDK](speech-sdk.md), [Speech-to-text REST API](rest-speech-to-text.md), [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) and [Text-to-speech REST API](rest-text-to-speech.md#get-a-list-of-voices).
# [Pronunciation assessment](#tab/pronunciation-assessment)
cognitive-services Migrate To Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-to-batch-synthesis.md
The Long Audio API is limited to the following regions:
## Voices list
-Batch synthesis API supports all [text-to-speech voices and styles](language-support.md?tabs=stt-tts).
+Batch synthesis API supports all [text-to-speech voices and styles](language-support.md?tabs=tts).
The Long Audio API is limited to the set of voices returned by a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`.
cognitive-services Migration Overview Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migration-overview-neural-voice.md
Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-s
## Prebuilt standard voice > [!IMPORTANT]
-> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=stt-tts). After August 31, the standard voices won't be supported with any Speech resource.
+> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, the standard voices won't be supported with any Speech resource.
Go to [this article](how-to-migrate-to-prebuilt-neural-voice.md) to learn how to migrate to prebuilt neural voice.
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
Custom Voice doesn't support automatic failover. Handle real-time synthesis fail
When custom voice real-time synthesis fails, fail over to a public voice (client sample code: [GitHub: custom voice failover to public voice](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L899)).
-Check the [public voices available](language-support.md?tabs=stt-tts). You can also change the sample code above if you would like to fail over to a different voice or in a different region.
+Check the [public voices available](language-support.md?tabs=tts). You can also change the sample code above if you would like to fail over to a different voice or in a different region.
**Option 2: Fail over to custom voice in another region.**
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
These parameters might be included in the query string of the REST request.
| Parameter | Description | Required or optional | |--|-||
-| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md?tabs=stt-tts). | Required |
+| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md?tabs=stt). | Required |
| `format` | Specifies the result format. Accepted values are `simple` and `detailed`. Simple results include `RecognitionStatus`, `DisplayText`, `Offset`, and `Duration`. Detailed responses include four different representations of display text. The default setting is `simple`. | Optional | | `profanity` | Specifies how to handle profanity in recognition results. Accepted values are: <br><br>`masked`, which replaces profanity with asterisks. <br>`removed`, which removes all profanity from the result. <br>`raw`, which includes profanity in the result. <br><br>The default setting is `masked`. | Optional | | `cid` | When you're using the [Speech Studio](speech-studio-overview.md) to create [custom models](./custom-speech-overview.md), you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
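Putting these parameters together, a request might look like the following sketch; the region, key, and audio file are placeholders:

```console
# recognize short audio with the required language parameter and optional format/profanity settings
curl -X POST "https://<region>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed&profanity=masked" \
  -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
  -H "Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000" \
  -H "Accept: application/json" \
  --data-binary @sample.wav
```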
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
See [Train a model](how-to-custom-speech-train-model.md?pivots=rest-api) and [Cu
## Projects
-Projects are applicable for [Custom Speech](custom-speech-overview.md). Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt-tts). For example, you might create a project for English in the United States.
+Projects are applicable for [Custom Speech](custom-speech-overview.md). Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt). For example, you might create a project for English in the United States.
See [Create a project](how-to-custom-speech-create-project.md?pivots=rest-api) for examples of how to create projects. This table includes all the operations that you can perform on projects.
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
The Speech service allows you to [convert text into synthesized speech](#convert
The text-to-speech REST API supports neural text-to-speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A Speech resource key for the endpoint or region that you plan to use is required. Here are links to more information: -- For a complete list of voices, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
+- For a complete list of voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
- For information about regional availability, see [Speech service supported regions](regions.md#speech-service). - For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
Before you use the text-to-speech REST API, understand that you need to complete
You can use the `tts.speech.microsoft.com/cognitiveservices/voices/list` endpoint to get a full list of voices for a specific region or endpoint. Prefix the voices list endpoint with a region to get a list of voices for that region. For example, to get a list of voices for the `westus` region, use the `https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list` endpoint. For a list of all supported regions, see the [regions](regions.md) documentation. > [!NOTE]
-> [Voices and styles in preview](language-support.md?tabs=stt-tts) are only available in three service regions: East US, West Europe, and Southeast Asia.
+> [Voices and styles in preview](language-support.md?tabs=tts) are only available in three service regions: East US, West Europe, and Southeast Asia.
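For example, a voices list request for the `westus` region might look like this (the key value is a placeholder):

```console
# list all voices available in the westus region
curl -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
  "https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list"
```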
### Request headers
This table lists required and optional headers for text-to-speech requests:
### Request body
-If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8). Otherwise, the body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text-to-speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
+If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8). Otherwise, the body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text-to-speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
### Sample request
cognitive-services Speech Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-configuration.md
# Configure Speech service containers
-Speech containers enable customers to build one speech application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality. The five speech containers we support now are, **speech-to-text**, **custom-speech-to-text**, **text-to-speech**, **neural-text-to-speech** and **custom-text-to-speech**.
+Speech containers enable customers to build one speech application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality. The supported speech containers are **Speech-to-text**, **Custom Speech-to-text**, **Speech language identification**, and **Neural Text-to-speech**.
The **Speech** container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings.
The volume mount setting consists of three colon-separated (`:`) fields:
This command mounts the host machine's _C:\input_ directory to the container's _/usr/local/models_ directory. > [!IMPORTANT]
-> The volume mount settings are only applicable to **Custom Speech-to-text** and **Custom Text-to-speech** containers. The **Speech-to-text**, **Neural Text-to-speech** and **Text-to-speech** containers do not use volume mounts.
+> The volume mount settings are only applicable to **Custom Speech-to-text** containers. The **Speech-to-text**, **Neural Text-to-speech** and **Speech language identification** containers do not use volume mounts.
## Example docker run commands
ApiKey={API_KEY} \
Logging:Console:LogLevel:Default=Information ```
-## [Text-to-speech](#tab/tss)
-
-### Basic example for Text-to-speech
-
-```Docker
-docker run --rm -it -p 5000:5000 --memory 2g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-### Logging example for Text-to-speech
-
-```Docker
-docker run --rm -it -p 5000:5000 --memory 2g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-Logging:Console:LogLevel:Default=Information
-```
-
-## [Custom Text-to-speech](#tab/ctts)
-
-### Basic example for Custom Text-to-speech
-
-```Docker
-docker run --rm -it -p 5000:5000 --memory 2g --cpus 1 \
--v {VOLUME_MOUNT}:/usr/local/models \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-text-to-speech \
-ModelId={MODEL_ID} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-### Logging example for Custom Text-to-speech
-
-```Docker
-docker run --rm -it -p 5000:5000 --memory 2g --cpus 1 \
--v {VOLUME_MOUNT}:/usr/local/models \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-text-to-speech \
-ModelId={MODEL_ID} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-Logging:Console:LogLevel:Default=Information
-```
- ## [Neural Text-to-speech](#tab/ntts) ### Basic example for Neural Text-to-speech
Logging:Console:LogLevel:Default=Information
``` - ## Next steps - Review [How to install and run containers](speech-container-howto.md)+
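For reference, a basic `docker run` for the retained **Neural Text-to-speech** container follows the same pattern as the examples above; the memory and CPU values here are assumptions, so check the container requirements for your image:

```Docker
# sketch: resource values are assumptions; Billing and ApiKey come from your Speech resource
docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \
mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}
```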
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
The following tag is an example of the format:
For all the supported locales and corresponding voices of the neural text-to-speech container, see [Neural text-to-speech image tags](../containers/container-image-tags.md#neural-text-to-speech). > [!IMPORTANT]
-> When you construct a neural text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container [locale and voice](language-support.md?tabs=stt-tts). For example, the `latest` tag would have a voice name of `en-US-AriaNeural`.
+> When you construct a neural text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container [locale and voice](language-support.md?tabs=tts). For example, the `latest` tag would have a voice name of `en-US-AriaNeural`.
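As a sketch, such a POST might look like the following `curl` call. The route under `localhost:5000` is an assumption (check the container's swagger page at `http://localhost:5000` for the exact path), and the output format header is illustrative:

```console
# POST SSML with the required voice name element to a locally running container
curl -X POST "http://localhost:5000/speech/synthesize/cognitiveservices/v1" \
  -H "Content-Type: application/ssml+xml" \
  -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
  -d '<speak version="1.0" xml:lang="en-US"><voice name="en-US-AriaNeural">Hello from the container.</voice></speak>' \
  --output hello.wav
```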
# [Speech language identification](#tab/lid)
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
In Speech Studio, the following Speech service features are available as project
* [Pronunciation assessment](https://aka.ms/speechstudio/pronunciationassessment): Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
-* [Voice Gallery](https://aka.ms/speechstudio/voicegallery): Build apps and services that speak naturally. Choose from a broad portfolio of [languages, voices, and variants](language-support.md?tabs=stt-tts). Bring your scenarios to life with highly expressive and human-like neural voices.
+* [Voice Gallery](https://aka.ms/speechstudio/voicegallery): Build apps and services that speak naturally. Choose from a broad portfolio of [languages, voices, and variants](language-support.md?tabs=tts). Bring your scenarios to life with highly expressive and human-like neural voices.
* [Custom Voice](https://aka.ms/speechstudio/customvoice): Create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
cognitive-services Speech Synthesis Markup Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-pronunciation.md
The `phoneme` element is used for phonetic pronunciation in SSML documents. Alwa
Phonetic alphabets are composed of phones, which are made up of letters, numbers, or characters, sometimes in combination. Each phone describes a unique sound of speech. This is in contrast to the Latin alphabet, where any letter might represent multiple spoken sounds. Consider the different `en-US` pronunciations of the letter "c" in the words "candy" and "cease" or the different pronunciations of the letter combination "th" in the words "thing" and "those." > [!NOTE]
-> For a list of locales that support phonemes, see footnotes in the [language support](language-support.md?tabs=stt-tts) table.
+> For a list of locales that support phonemes, see footnotes in the [language support](language-support.md?tabs=tts) table.
Usage of the `phoneme` element's attributes are described in the following table.
The supported values for attributes of the `phoneme` element were [described pre
You can define how single entities (such as company, a medical term, or an emoji) are read in SSML by using the [phoneme](#phoneme-element) and [sub](#sub-element) elements. To define how multiple entities are read, create an XML structured custom lexicon file. Then you upload the custom lexicon XML file and reference it with the SSML `lexicon` element. > [!NOTE]
-> For a list of locales that support custom lexicon, see footnotes in the [language support](language-support.md?tabs=stt-tts) table.
+> For a list of locales that support custom lexicon, see footnotes in the [language support](language-support.md?tabs=tts) table.
> > The `lexicon` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text-to-speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
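As a self-contained sketch of referencing an uploaded lexicon from SSML, the following `curl` call assumes the public text-to-speech endpoint, a placeholder key, and a lexicon hosted at an example URL:

```console
# synthesize speech that resolves pronunciations through a custom lexicon file
curl -X POST "https://<region>.tts.speech.microsoft.com/cognitiveservices/v1" \
  -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
  -H "Content-Type: application/ssml+xml" \
  -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
  -d '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"><voice name="en-US-JennyNeural"><lexicon uri="https://www.example.com/customlexicon.xml"/>BTW, we will be there probably at 8 a.m. tomorrow.</voice></speak>' \
  --output lexicon-sample.wav
```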
The text-to-speech output for this example is "a squared plus b squared equals c
- [SSML overview](speech-synthesis-markup.md) - [SSML document structure and events](speech-synthesis-markup-structure.md)-- [Language support: Voices, locales, languages](language-support.md?tabs=stt-tts)
+- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
cognitive-services Speech Synthesis Markup Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-structure.md
This SSML snippet illustrates how to request blend shapes with your synthesized
- [SSML overview](speech-synthesis-markup.md) - [Voice and sound with SSML](speech-synthesis-markup-voice.md)-- [Language support: Voices, locales, languages](language-support.md?tabs=stt-tts)
+- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
cognitive-services Speech Synthesis Markup Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-voice.md
Usage of the `voice` element's attributes are described in the following table.
| Attribute | Description | Required or optional | | - | - | - |
-| `name` | The voice used for text-to-speech output. For a complete list of supported prebuilt voices, see [Language support](language-support.md?tabs=stt-tts).| Required|
+| `name` | The voice used for text-to-speech output. For a complete list of supported prebuilt voices, see [Language support](language-support.md?tabs=tts).| Required|
### Voice examples
This example uses a custom voice named "my-custom-voice".
By default, neural voices have a neutral speaking style. You can adjust the speaking style, style degree, and role at the sentence level. > [!NOTE]
-> Styles, style degree, and roles are supported for a subset of neural voices as described in the [voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles) documentation. To determine what styles and roles are supported for each voice, you can also use the [list voices](rest-text-to-speech.md#get-a-list-of-voices) API and the [Audio Content Creation](https://aka.ms/audiocontentcreation) web application.
+> Styles, style degree, and roles are supported for a subset of neural voices as described in the [voice styles and roles](language-support.md?tabs=tts#voice-styles-and-roles) documentation. To determine what styles and roles are supported for each voice, you can also use the [list voices](rest-text-to-speech.md#get-a-list-of-voices) API and the [Audio Content Creation](https://aka.ms/audiocontentcreation) web application.
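For a quick illustration, the following sketch applies a style at the sentence level; the voice/style pairing and the endpoint details are assumptions, so confirm them against the styles table and the list voices API:

```console
# synthesize with the mstts:express-as element (style and styledegree shown are illustrative)
curl -X POST "https://<region>.tts.speech.microsoft.com/cognitiveservices/v1" \
  -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
  -H "Content-Type: application/ssml+xml" \
  -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
  -d '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US"><voice name="en-US-JennyNeural"><mstts:express-as style="cheerful" styledegree="1.5">That would be just amazing!</mstts:express-as></voice></speak>' \
  --output style-sample.wav
```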
Usage of the `mstts:express-as` element's attributes are described in the following table.
The supported values for attributes of the `mstts:backgroundaudio` element were
- [SSML overview](speech-synthesis-markup.md) - [SSML document structure and events](speech-synthesis-markup-structure.md)-- [Language support: Voices, locales, languages](language-support.md?tabs=stt-tts)
+- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
You can use SSML in the following ways:
- [SSML document structure and events](speech-synthesis-markup-structure.md) - [Voice and sound with SSML](speech-synthesis-markup-voice.md) - [Pronunciation with SSML](speech-synthesis-markup-pronunciation.md)-- [Language support: Voices, locales, languages](language-support.md?tabs=stt-tts)
+- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
keywords: speech to text, speech to text software
In this overview, you learn about the benefits and capabilities of the speech-to-text feature of the Speech service, which is part of Azure Cognitive Services.
-Speech-to-text, also known as speech recognition, enables real-time or offline transcription of audio streams into text. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
+Speech-to-text, also known as speech recognition, enables real-time or offline transcription of audio streams into text. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md?tabs=stt).
> [!NOTE] > Microsoft uses the same recognition technology for Cortana and Office products.
The Azure speech-to-text service analyzes audio in real-time or batch to transcr
The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry- and domain-specific jargon. In these cases, it makes sense to build a custom speech model by training it with additional data associated with that specific domain. You can create and train custom acoustic, language, and pronunciation models. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech-to-text REST API](rest-speech-to-text.md).
-Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt-tts).
+Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt).
## Next steps
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md
You can also save the synthesized output to a file. In this example, let's creat
spx synthesize --text "Enjoy using the Speech CLI." --audio output my-sample.wav ```
-These examples presume that you're testing in English. However, Speech service supports speech synthesis in many languages. You can pull down a full list of voices either by running the following command or by visiting the [language support page](./language-support.md?tabs=stt-tts).
+These examples presume that you're testing in English. However, Speech service supports speech synthesis in many languages. You can pull down a full list of voices either by running the following command or by visiting the [language support page](./language-support.md?tabs=tts).
```console spx synthesize --voices
spx translate --file /some/file/path/input.wav --source en-US --target ru-RU --o
``` > [!NOTE]
-> For a list of all supported languages and their corresponding locale codes, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
+> For a list of all supported languages and their corresponding locale codes, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
> [!TIP] > If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help translate```.
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
keywords: text to speech
In this overview, you learn about the benefits and capabilities of the text-to-speech feature of the Speech service, which is part of Azure Cognitive Services.
-Text-to-speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text-to-speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand. For a full list of supported voices, languages, and locales, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
+Text-to-speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text-to-speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand. For a full list of supported voices, languages, and locales, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
## Core features
The patterns of stress and intonation in spoken language are called _prosody_. T
Here's more information about neural text-to-speech features in the Speech service, and how they overcome the limits of traditional text-to-speech systems:
-* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech by using [prebuilt neural voices](language-support.md?tabs=stt-tts) or [custom neural voices](custom-neural-voice.md).
+* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech by using [prebuilt neural voices](language-support.md?tabs=tts) or [custom neural voices](custom-neural-voice.md).
* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
Here's more information about neural text-to-speech features in the Speech servi
- Convert digital texts such as e-books into audiobooks.
- Enhance in-car navigation systems.
- For a full list of platform neural voices, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
+ For a full list of platform neural voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
* **Fine-tuning text-to-speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language that's used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document.
Here's more information about neural text-to-speech features in the Speech servi
* **Visemes**: [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw, and tongue in producing a particular phoneme. Visemes have a strong correlation with voices and phonemes.
- By using viseme events in Speech SDK, you can generate facial animation data. This data can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently supported only for the `en-US` (US English) [neural voices](language-support.md?tabs=stt-tts).
+ By using viseme events in Speech SDK, you can generate facial animation data. This data can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently supported only for the `en-US` (US English) [neural voices](language-support.md?tabs=tts).
> [!NOTE]
> We plan to retire the traditional/standard voices and non-neural custom voice in 2024. After that, we'll no longer support them.
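To make the real-time synthesis feature described above concrete, here's a minimal Python sketch, assuming the `azure-cognitiveservices-speech` package and placeholder key and region values:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; substitute your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # any prebuilt neural voice

# With no audio config specified, audio plays on the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from neural text-to-speech.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis completed.")
```

The same `SpeechSynthesizer` also accepts SSML via `speak_ssml_async`, which is how the pitch, pause, and rate adjustments described above are applied.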
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
In this section, you'll learn how to change the language that your bot will list
### Change the language
-You can choose from any of the languages mentioned in the [speech-to-text](language-support.md?tabs=stt-tts) table. The following example changes the language to German.
+You can choose from any of the languages mentioned in the [speech-to-text](language-support.md?tabs=stt) table. The following example changes the language to German.
-1. Open the Windows Voice Assistant Client app, select the **Settings** button (upper-right gear icon), and enter **de-de** in the **Language** field. This is the locale value mentioned in the [speech-to-text](language-support.md?tabs=stt-tts) table.
+1. Open the Windows Voice Assistant Client app, select the **Settings** button (upper-right gear icon), and enter **de-de** in the **Language** field. This is the locale value mentioned in the [speech-to-text](language-support.md?tabs=stt) table.
This step sets the spoken language to be recognized, overriding the default **en-us**. It also instructs the Direct Line Speech channel to use a default German voice for the bot reply.
1. Close the **Settings** page, and then select the **Reconnect** button to establish a new connection to your echo bot.
You can choose from any of the languages mentioned in the [speech-to-text](langu
You can select the text-to-speech voice and control pronunciation if the bot specifies the reply in the form of a [Speech Synthesis Markup Language](speech-synthesis-markup.md) (SSML) instead of simple text. The echo bot doesn't use SSML, but you can easily modify the code to do that.
-The following example adds SSML to the echo bot reply so that the German voice `de-DE-RalfNeural` (a male voice) is used instead of the default female voice. See the [list of standard voices](how-to-migrate-to-prebuilt-neural-voice.md) and [list of neural voices](language-support.md?tabs=stt-tts) that are supported for your language.
+The following example adds SSML to the echo bot reply so that the German voice `de-DE-RalfNeural` (a male voice) is used instead of the default female voice. See the [list of standard voices](how-to-migrate-to-prebuilt-neural-voice.md) and [list of neural voices](language-support.md?tabs=tts) that are supported for your language.
1. Open **samples\csharp_dotnetcore\02.echo-bot\echo-bot.cs**.
1. Find these lines:
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-support.md
These Cognitive Services are language agnostic and don't have limitations based
## Speech
-* [Speech Service: Speech-to-Text](./speech-service/language-support.md?tabs=stt-tts)
-* [Speech Service:Text-to-Speech](./speech-service/language-support.md?tabs=stt-tts)
+* [Speech Service: Speech-to-Text](./speech-service/language-support.md?tabs=stt)
+* [Speech Service: Text-to-Speech](./speech-service/language-support.md?tabs=tts)
* [Speech Service: Speech Translation](./speech-service/language-support.md?tabs=speech-translation)

## Decision
communication-services Get Started Chat Ui Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/ui-library/get-started-chat-ui-library.md
+
+ Title: Quickstart - Integrate chat experiences in your app by using UI Library
+
+description: Get started with Azure Communication Services UI Library composites to add Chat communication experiences to your applications.
+ Last updated : 11/29/2022
+zone_pivot_groups: acs-plat-web-ios-android
+# Quickstart: Add chat with UI Library
+
+Get started with Azure Communication Services UI Library to quickly integrate communication experiences into your applications. In this quickstart, learn how to integrate UI Library chat composites into an application and set up the experience for your app users.
+
+Communication Services UI Library renders a full chat experience right in your application. It takes care of connecting to ACS chat services and updates participants' presence automatically. As a developer, you only need to decide where in your app's user experience the chat experience should launch, and create the ACS resources as required.
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group.
+
+Deleting the resource group also deletes any other resources associated with it.
+
+Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
communication-services Get Started Composites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/ui-library/get-started-composites.md
Title: Quickstart - Integrate experiences in your app by using UI Library
-description: Get started with Azure Communication Services UI Library composites to add communication experiences to your applications.
+description: Get started with Azure Communication Services UI Library composites to add Calling communication experiences to your applications.
Last updated 10/10/2021
confidential-computing Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-portal.md
For more information about connecting to Linux VMs, see [Create a Linux VM on Az
## Install Azure DCAP Client
-> [!NOTE]
-> Trusted Hardware Identity Management (THIM) is a free Azure service that helps you manage the hardware identities of different Trusted Execution Environments (TEEs). It fetches collateral from Intel Provisioning Certification Service (PCS) and caches it. The service enforces a minimum Trusted Compute Base (TCB) level as Azure security baseline, for attestation purposes. For DCsv3 and DCdsv3-series Azure VMs, the Intel certificates can only be fetched from THIM, as it is not possible to make direct calls to Intel service from the VMs.
+[Azure Data Center Attestation Primitives (DCAP)](https://learn.microsoft.com/azure/security/fundamentals/trusted-hardware-identity-management#what-is-the-azure-dcap-library), a replacement for Intel Quote Provider Library (QPL), fetches quote generation collateral and quote validation collateral directly from the THIM Service.
+
+The [Trusted Hardware Identity Management (THIM)](https://learn.microsoft.com/azure/security/fundamentals/trusted-hardware-identity-management) service handles cache management of certificates for all trusted execution environments (TEE) residing in Azure and provides trusted computing base (TCB) information to enforce a minimum baseline for attestation solutions.
-With the release of the Intel® Xeon Scalable Processors, remote attestation support is changing. DCsv3 and DCdsv3 only support [ECDSA-based Attestation](https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/attestation-services.html) and the users are required to install [Azure DCAP](https://github.com/Microsoft/Azure-DCAP-Client) client to interact with THIM and fetch TEE collateral for quote generation during attestation process. DCsv2 continues to support [EPID-based Attestation](https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/attestation-services.html).
+DCsv3 and DCdsv3 only support [ECDSA-based Attestation](https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/attestation-services.html), and users are required to install the [Azure DCAP](https://github.com/Microsoft/Azure-DCAP-Client) client to interact with THIM and fetch TEE collateral for quote generation during the attestation process. DCsv2 continues to support [EPID-based Attestation](https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/attestation-services.html).
## Clean up resources
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
In the GitHub workflow, you need to supply Azure credentials to authenticate to
First, get the resource ID of your resource group. Substitute the name of your group in the following [az group show][az-group-show] command:
```azurecli
-$groupId=$(az group show \
+groupId=$(az group show \
  --name <resource-group-name> \
  --query id --output tsv)
```
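If you prefer to look up the same resource ID programmatically, here's a rough Python equivalent, assuming the `azure-identity` and `azure-mgmt-resource` packages and placeholder names:

```python
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder values; substitute your own subscription ID and group name.
client = ResourceManagementClient(AzureCliCredential(), "<subscription-id>")
group = client.resource_groups.get("<resource-group-name>")

# The ID printed here is the scope used when creating the workflow credentials.
print(group.id)
```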
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
container = database.createContainerIfNotExists(containerProperties, 400).block(
#### Python V4 SDK
-The following Java code creates a Synapse Link enabled container by setting the `analytical_storage_ttl` property. To update an existing container, use the `replace_container` method.
+The following Python code creates a Synapse Link enabled container by setting the `analytical_storage_ttl` property. To update an existing container, use the `replace_container` method.
```python
# Client
```
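As an illustrative sketch of what that code does, assuming the `azure-cosmos` v4 package and placeholder account values:

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key for illustration only.
client = CosmosClient("<account-endpoint>", credential="<account-key>")
database = client.create_database_if_not_exists("cosmicworks")

# analytical_storage_ttl=-1 enables the analytical store with no expiry,
# which is what makes the container Synapse Link enabled.
container = database.create_container_if_not_exists(
    id="products",
    partition_key=PartitionKey(path="/id"),
    analytical_storage_ttl=-1,
)
```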
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 12/06/2022 Last updated : 01/11/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that don't directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters.
+### December 2022
+
+* General availability: Azure Cosmos DB for PostgreSQL is now available in the Sweden Central and Switzerland West regions.
+ * See [full list of supported Azure regions](resources-regions.md).
+* PostgreSQL 15 is now the default Postgres version for Azure Cosmos DB for PostgreSQL in Azure portal.
+ * See [all supported PostgreSQL versions](reference-versions.md).
+ * See [this guidance](howto-upgrade.md) for the steps to upgrade your Azure Cosmos DB for PostgreSQL cluster to PostgreSQL 15.
### November 2022

* General availability: [Cross-region cluster read replicas](concepts-read-replicas.md) for improved read scalability and cross-region disaster recovery (DR).
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Title: View your Azure usage summary details and download reports for EA enrollm
description: This article explains how enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 01/04/2023 Last updated : 01/11/2023
Azure is enhancing its invoicing experience. The enhanced experience includes an
There are no changes to invoices generated before November 18, 2022.
-The invoice notification email address is changing from `msftinv@microsoft.com` to `no-reply@microsoft.com` for customers and partners under the enhanced invoicing experience.
+The invoice notification email address is changing from `msftinv@microsoft.com` to `microsoft-noreply@microsoft.com` for customers and partners under the enhanced invoicing experience.
We recommend that you add the new email address to your address book or safe sender list to ensure that you receive the emails.
data-factory Choose The Right Integration Runtime Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/choose-the-right-integration-runtime-configuration.md
Title: Choose the right integration-runtime configuration for your scenario
+ Title: Choose the right integration runtime configuration for your scenario
description: Some recommended architectures for each integration runtime.
Previously updated : 12/14/2022 Last updated : 01/10/2023
-# Choose the right integration-runtime configuration for your scenario
+# Choose the right integration runtime configuration for your scenario
+ The integration runtime is a critical part of the infrastructure for the data integration solution provided by Azure Data Factory. When you design the solution, consider from the start how it fits into your existing network structure and connects to your data sources, and weigh performance, security, and cost.
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
On this part of the screen you see:
- **Code scanning findings** – Shows the number of code vulnerabilities and misconfigurations identified in the repositories.
+ > [!NOTE]
+ > Currently, this information is available only for GitHub repositories.
+
## Learn more

- You can learn more about DevOps from our [DevOps resource center](/devops/).
defender-for-cloud Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/get-started.md
This quickstart section will walk you through all the recommended steps to enabl
## Prerequisites

To get started with Defender for Cloud, you must have a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
-To enable enhanced security features on a subscription, you must be assigned the role of Subscription Owner, Subscription Contributor, or Security Admin.
+In Defender for Cloud, you only see information related to a resource when you're assigned the Owner, Contributor, or Reader role for the subscription or for the resource group the resource is in.
## Enable Defender for Cloud on your Azure subscription
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
Last updated 12/14/2022
With Microsoft Defender for Servers, you gain access to and can deploy [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) to your server resources. Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint security solution. The main features include:
-- Risk-based vulnerability management and assessment
+- Risk-based vulnerability management and assessment
- Attack surface reduction
- Behavioral based and cloud-powered protection
- Endpoint detection and response (EDR)
For more information about migrating servers from Defender for Endpoint to Defen
## Benefits of integrating Microsoft Defender for Endpoint with Defender for Cloud
-[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) protects your Windows and Linux machines whether they're hosted in Azure, hybrid clouds (on-premises), or multicloud environments.
+[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) protects your Windows and Linux machines whether they're hosted in Azure, hybrid clouds (on-premises), or multicloud environments.
The protections include:
Before you can enable the Microsoft Defender for Endpoint integration with Defen
- Ensure the machine is connected to Azure and the internet as required:
- - **Azure virtual machines (Windows or Linux)** - Configure the network settings described in configure device proxy and internet connectivity settings: [Windows](/windows/security/threat-protection/microsoft-defender-atp/configure-proxy-internet) or [Linux](/microsoft-365/security/defender-endpoint/linux-static-proxy-configuration).
+ - **Azure virtual machines (Windows or Linux)** - Configure the network settings described in configure device proxy and internet connectivity settings: [Windows](/microsoft-365/security/defender-endpoint/configure-proxy-internet) or [Linux](/microsoft-365/security/defender-endpoint/linux-static-proxy-configuration).
- **On-premises machines** - Connect your target machines to Azure Arc as explained in [Connect hybrid machines with Azure Arc-enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md).
Before you can enable the Microsoft Defender for Endpoint integration with Defen
- For Linux servers, you must have Python installed. Python 3 is recommended for all distros, but is required for RHEL 8.x and Ubuntu 20.04 or higher. If needed, see Step-by-step Instructions for Installing Python on Linux. -- If you've moved your subscription between Azure tenants, some manual preparatory steps are also required. For details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+- If you've moved your subscription between Azure tenants, some manual preparatory steps are also required. For details, [contact Microsoft support](https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
### Enable the integration
To deploy the MDE unified solution, you'll need to use the [REST API call](#enab
1. Select **Enable unified solution**.
1. Select **Save**.
- 1. In the confirmation prompt, verify the information and select **Enable** to continue.
+ 1. In the confirmation prompt, verify the information and select **Enable** to continue.
:::image type="content" source="./mediE unified solution for Windows Server 2012 R2 and 2016 machines":::
You'll deploy Defender for Endpoint to your Linux machines in one of two ways -
- [Existing users with Defender for Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows](#existing-users-with-defender-for-clouds-enhanced-security-features-enabled-and-microsoft-defender-for-endpoint-for-windows)
- [New users who never enabled the integration with Microsoft Defender for Endpoint for Windows](#new-users-who-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows)

##### Existing users with Defender for Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows

If you've already enabled the integration with **Defender for Endpoint for Windows**, you have complete control over when and whether to deploy Defender for Endpoint to your **Linux** machines.
If you've already enabled the integration with **Defender for Endpoint for Windo
1. Select **Enable for Linux machines**.
1. Select **Save**.
- 1. In the confirmation prompt, verify the information and select **Enable** to continue.
+ 1. In the confirmation prompt, verify the information and select **Enable** to continue.
:::image type="content" source="./media/integration-defender-for-endpoint/enable-for-linux-result.png" alt-text="Confirming the integration between Defender for Cloud and Microsoft's EDR solution, Microsoft Defender for Endpoint for Linux":::
URI: `https://management.azure.com/subscriptions/<subscriptionId>providers/Micro
## Access the Microsoft Defender for Endpoint portal
-1. Ensure the user account has the necessary permissions. Learn more in [Assign user access to Microsoft Defender Security Center](/windows/security/threat-protection/microsoft-defender-atp/assign-portal-access).
-
-1. Check whether you have a proxy or firewall that is blocking anonymous traffic. The Defender for Endpoint sensor connects from the system context, so anonymous traffic must be permitted. To ensure unhindered access to the Defender for Endpoint portal, follow the instructions in [Enable access to service URLs in the proxy server](/windows/security/threat-protection/microsoft-defender-atp/configure-proxy-internet#enable-access-to-microsoft-defender-atp-service-urls-in-the-proxy-server).
+1. Ensure the user account has the necessary permissions. Learn more in [Assign user access to Microsoft Defender Security Center](/microsoft-365/security/defender-endpoint/assign-portal-access).
-1. Open the [Defender for Endpoint Security Center portal](https://securitycenter.windows.com/). Learn more about the portal's features and icons, in [Defender for Endpoint Security Center portal overview](/windows/security/threat-protection/microsoft-defender-atp/portal-overview).
+1. Check whether you have a proxy or firewall that is blocking anonymous traffic. The Defender for Endpoint sensor connects from the system context, so anonymous traffic must be permitted. To ensure unhindered access to the Defender for Endpoint portal, follow the instructions in [Enable access to service URLs in the proxy server](/microsoft-365/security/defender-endpoint/configure-proxy-internet#enable-access-to-microsoft-defender-for-endpoint-service-urls-in-the-proxy-server).
+1. Open the [Microsoft 365 Defender portal](https://security.microsoft.com/). Learn about [Microsoft Defender for Endpoint in Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-security-center-mde).
## Send a test alert
For endpoints running Windows:
For endpoints running Linux:
-1. Download the test alert tool from https://aka.ms/LinuxDIY
+1. Download the test alert tool from: <https://aka.ms/LinuxDIY>
1. Extract the contents of the zip file and execute this shell script: `./mde_linux_edr_diy`
To remove the Defender for Endpoint solution from your machines:
### What's this "MDE.Windows" / "MDE.Linux" extension running on my machine?
-In the past, Microsoft Defender for Endpoint was provisioned by the Log Analytics agent. When [we expanded support to include Windows Server 2019](release-notes-archive.md#microsoft-defender-for-endpoint-integration-with-azure-defender-now-supports-windows-server-2019-and-windows-10-on-windows-virtual-desktop-released-for-general-availability-ga) and Linux, we also added an extension to perform the automatic onboarding.
+In the past, Microsoft Defender for Endpoint was provisioned by the Log Analytics agent. When [we expanded support to include Windows Server 2019](release-notes-archive.md#microsoft-defender-for-endpoint-integration-with-azure-defender-now-supports-windows-server-2019-and-windows-10-on-windows-virtual-desktop-released-for-general-availability-ga) and Linux, we also added an extension to perform the automatic onboarding.
Defender for Cloud automatically deploys the extension to machines running:
Defender for Cloud automatically deploys the extension to machines running:
> [!IMPORTANT]
> If you delete the MDE.Windows/MDE.Linux extension, it will not remove Microsoft Defender for Endpoint. To offboard, see [Offboard Windows servers](/microsoft-365/security/defender-endpoint/configure-server-endpoints).

### I enabled the solution but the `MDE.Windows`/`MDE.Linux` extension isn't showing on my machine

If you enabled the integration, but still don't see the extension running on your machines:
-1. You need to wait at least 12 hours to be sure there's an issue to investigate.
+1. You need to wait at least 12 hours to be sure there's an issue to investigate.
1. If after 12 hours you still don't see the extension running on your machines, check that you've met [Prerequisites](#prerequisites) for the integration.
1. Ensure you've enabled the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for the subscriptions related to the machines you're investigating.
1. If you've moved your Azure subscription between Azure tenants, some manual preparatory steps are required before Defender for Cloud will deploy Defender for Endpoint. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).

### What are the licensing requirements for Microsoft Defender for Endpoint?

Licenses for Defender for Endpoint for servers are included with **Microsoft Defender for Servers**.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in January include:

- [New version of the recommendation to find missing system updates (Preview)](#new-version-of-the-recommendation-to-find-missing-system-updates-preview)
+- [Cleanup of deleted Azure Arc machines in connected AWS and GCP accounts](#cleanup-of-deleted-azure-arc-machines-in-connected-aws-and-gcp-accounts)
### New version of the recommendation to find missing system updates (Preview)
To use the new recommendation you need to:
The existing "System updates should be installed on your machines" recommendation, which relies on the Log Analytics agent, is still available under the same control.
+### Cleanup of deleted Azure Arc machines in connected AWS and GCP accounts
+
+A machine connected to an AWS or GCP account and covered by Defender for Servers or Defender for SQL on machines is represented in Defender for Cloud as an Azure Arc machine. Until now, that machine wasn't deleted from the inventory when it was deleted from the AWS or GCP account, which left unnecessary Azure Arc resources in Defender for Cloud representing deleted machines.
+
+Defender for Cloud will now automatically delete Azure Arc machines when those machines are deleted in the connected AWS or GCP account.
+
## December 2022

Updates in December include:
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Title: The regulatory compliance dashboard in Microsoft Defender for Cloud description: Learn how to add and remove regulatory standards from the regulatory compliance dashboard in Defender for Cloud Previously updated : 12/26/2022 Last updated : 01/11/2023 + # Customize the set of standards in your regulatory compliance dashboard Microsoft Defender for Cloud continually compares the configuration of your resources with requirements in industry standards, regulations, and benchmarks. The **regulatory compliance dashboard** provides insights into your compliance posture based on how you're meeting specific compliance requirements.
Available regulatory standards:
**AWS**: When users onboard, every AWS account has the AWS Foundational Security Best Practices assigned. This is the AWS-specific guideline for security and compliance best practices based on common compliance frameworks.
-Users that have one Defender bundle enabled can enable additional standards.
+Users that have one Defender bundle enabled can enable other standards.
Available AWS regulatory standards:
More standards will be added to the dashboard and included in the information on
**GCP**: When users onboard, every GCP project has the "GCP Default" standard assigned.
-Users that have one Defender bundle enabled can enable additional standards.
+Users that have one Defender bundle enabled can enable other standards.
Available GCP regulatory standards:
To add standards to your dashboard:
> [!NOTE] > It may take a few hours for a newly added standard to appear in the compliance dashboard.
- :::image type="content" source="media/concept-regulatory-compliance/compliance-dashboard.png" alt-text="Screenshot showing regulatory compliance dashboard." lightbox="media/release-notes/audit-reports-list-regulatory-compliance-dashboard.png":::
+ :::image type="content" source="media/concept-regulatory-compliance/compliance-dashboard.png" alt-text="Screenshot showing regulatory compliance dashboard." lightbox="media/concept-regulatory-compliance/compliance-dashboard.png":::
### Add a standard to your AWS resources
defender-for-iot Install Software Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-ot-sensor.md
This procedure describes how to install OT monitoring software on a sensor.
:::image type="content" source="../media/tutorial-install-components/install-complete.png" alt-text="Screenshot of the sign-in confirmation.":::
-Make sure that your sensor is connected to your network, and then you can sign in to your sensor via a network-connected browser. For more information, see [Activate and set up your sensor](../how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor)
+Make sure that your sensor is connected to your network, and then you can sign in to your sensor via a network-connected browser. For more information, see [Activate and set up your sensor](../how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor).
## Next steps
dns Private Dns Virtual Network Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-virtual-network-links.md
From the virtual network perspective, private DNS zone becomes the registration
## Resolution virtual network
-If you choose to link your virtual network with the private DNS zone without autoregistration. The virtual network is treated as a resolution virtual network only. DNS records for virtual machines deployed this virtual network won't be created automatically in the private zone. However, virtual machines deployed in the virtual network can successfully query for DNS records in the private zone. These records include manually created and auto registered records from other virtual networks linked to the private DNS zone.
+If you choose to link your virtual network with the private DNS zone without autoregistration, the virtual network is treated as a resolution virtual network only. DNS records for virtual machines deployed in this virtual network won't be created automatically in the private zone. However, virtual machines deployed in the virtual network can successfully query DNS records in the private zone. These records include manually created and autoregistered records from other virtual networks linked to the private DNS zone.
One private DNS zone can have multiple resolution virtual networks and a virtual network can have multiple resolution zones associated to it.
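As a hedged illustration of creating a resolution-only link, here's a Python sketch using the `azure-mgmt-privatedns` package with placeholder names; the key detail is `registration_enabled=False`:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient

# Placeholder identifiers for illustration.
client = PrivateDnsManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.virtual_network_links.begin_create_or_update(
    resource_group_name="<resource-group>",
    private_zone_name="contoso.internal",
    virtual_network_link_name="resolution-only-link",
    parameters={
        "location": "global",
        "virtual_network": {"id": "<vnet-resource-id>"},
        "registration_enabled": False,  # resolution virtual network only
    },
)
print(poller.result().provisioning_state)
```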
healthcare-apis Deploy New Choose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-choose.md
Previously updated : 1/5/2023 Last updated : 1/10/2023
The MedTech service provides multiple methods for deployment into Azure. Each de
In this quickstart, you'll learn about these deployment methods:

> [!div class="checklist"]
-> - Azure Resource Manager template (ARM template) using the **Deploy to Azure** button.
+> - Azure Resource Manager template (ARM template) including an Azure IoT Hub using the **Deploy to Azure** button.
+> - ARM template using the **Deploy to Azure** button.
> - ARM template using Azure PowerShell or the Azure CLI.
-> - Azure portal manually.
+> - Manually in the Azure portal.
+
+## ARM template including an Azure IoT Hub using the Deploy to Azure button
+
+ Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment and most configuration steps, and uses the Azure portal. The deployed MedTech service and Azure IoT Hub are fully functional, including conforming and valid device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings. Use the Azure IoT Hub to create devices and send device messages to the MedTech service.
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors-with-iothub%2Fazuredeploy.json)
+
+To learn more about deploying the MedTech service including an Azure IoT Hub using an ARM template and the **Deploy to Azure** button, see [Receive device messages through Azure IoT Hub](device-data-through-iot-hub.md).
## ARM template using the Deploy to Azure button
-Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment, most configuration steps, and uses the Azure portal.
+Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment and most configuration steps, and uses the Azure portal. The deployed MedTech service will still require conforming and valid device and FHIR destination mappings to be fully functional.
+
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json)
-To learn more about using an ARM template with the **Deploy to Azure button**, see [Deploy the MedTech service using an Azure Resource Manager template](deploy-new-arm.md).
+To learn more about deploying the MedTech service using an ARM template and the **Deploy to Azure** button, see [Deploy the MedTech service using an Azure Resource Manager template](deploy-new-arm.md).
## ARM template using Azure PowerShell or the Azure CLI
-Using an ARM template with Azure PowerShell or the Azure CLI is a more advanced deployment method. This deployment method can be useful for adding automation and repeatability so that you can scale and customize your deployments.
+Using an ARM template with Azure PowerShell or the Azure CLI is a more advanced deployment method. This deployment method can be useful for adding automation and repeatability so that you can scale and customize your deployments. The deployed MedTech service will still require conforming and valid device and FHIR destination mappings to be fully functional.
-To learn more about using an ARM template with Azure PowerShell or the Azure CLI, see [Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI](deploy-new-powershell-cli.md).
+To learn more about deploying the MedTech service using an ARM template and Azure PowerShell or the Azure CLI, see [Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI](deploy-new-powershell-cli.md).
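For a sense of what such an automated deployment looks like, here's a minimal Python sketch using the `azure-mgmt-resource` package; the subscription, group, and template URI are placeholders, and the quickstart templates linked above are the intended inputs:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder values for illustration.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.deployments.begin_create_or_update(
    resource_group_name="<resource-group>",
    deployment_name="medtech-deployment",
    parameters={
        "properties": {
            "mode": "Incremental",
            "templateLink": {"uri": "<arm-template-uri>"},
            "parameters": {},  # supply the template's required parameters here
        }
    },
)
print(poller.result().properties.provisioning_state)
```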
-## Azure portal manually
+## Manually in the Azure portal
Using the Azure portal manual deployment will allow you to see the details of each deployment step. The manual deployment has many steps, but it provides valuable technical information that may be useful for customizing and troubleshooting your MedTech service.
-To learn more about using a manual deployment with the Azure portal, see [Deploy the MedTech service manually using the Azure portal](deploy-new-manual.md).
+To learn more about deploying the MedTech service manually using the Azure portal, see [Deploy the MedTech service manually using the Azure portal](deploy-new-manual.md).
## Deployment architecture overview
iot-edge Configure Connect Verify Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-connect-verify-gpu.md
description: Configure your environment to connect and verify your GPU to proces
Previously updated : 7/22/2022 Last updated : 9/22/2022
iot-edge Deploy Modbus Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/deploy-modbus-gateway.md
Previously updated : 09/21/2022 Last updated : 09/22/2022
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md
Title: Configure devices for network proxies - Azure IoT Edge | Microsoft Docs
description: How to configure the Azure IoT Edge runtime and any internet-facing IoT Edge modules to communicate through a proxy server. Previously updated : 07/26/2022 Last updated : 11/1/2022
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
description: Step by step adaptable manual instructions on how to create a hiera
Previously updated : 01/05/2023 Last updated : 10/5/2022
iot-edge How To Create Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-transparent-gateway.md
description: Use an Azure IoT Edge device as a transparent gateway that can proc
Previously updated : 03/01/2021 Last updated : 11/1/2022
iot-edge How To Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-at-scale.md
keywords:
Previously updated : 10/13/2020 Last updated : 9/22/2022
iot-edge How To Deploy Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-blob.md
Title: Deploy blob storage on module to your device - Azure IoT Edge
description: Deploy an Azure Blob Storage module to your IoT Edge device to store data at the edge. Previously updated : 3/10/2020 Last updated : 9/22/2022
iot-edge How To Deploy Modules Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-portal.md
description: Use your IoT Hub in the Azure portal to push an IoT Edge module fro
Previously updated : 10/13/2020 Last updated : 9/22/2022
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
description: How to install and manage certificates on an Azure IoT Edge device
Previously updated : 12/06/2022 Last updated : 10/20/2022
iot-edge How To Monitor Iot Edge Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-iot-edge-deployments.md
description: High-level monitoring including edgeHub and edgeAgent reported prop
Previously updated : 04/21/2020 Last updated : 9/22/2022
iot-edge How To Monitor Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-module-twins.md
description: How to interpret device twins and module twins to determine connect
Previously updated : 05/29/2020 Last updated : 9/22/2022
iot-edge How To Share Windows Folder To Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-share-windows-folder-to-vm.md
Previously updated : 07/28/2022 Last updated : 11/1/2022
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
description: Use Visual Studio Code to develop, build, and debug a module for Az
Previously updated : 10/27/2022 Last updated : 9/30/2022
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
Title: Limits and restrictions - Azure IoT Edge | Microsoft Docs
description: Description of the limits and restrictions when using IoT Edge. Previously updated : 11/17/2022 Last updated : 11/7/2022
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart.md
Previously updated : 07/05/2022 Last updated : 10/20/2022
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Title: IoT Edge supported platforms
description: Azure IoT Edge supported operating systems, runtimes, and container engines. Previously updated : 07/26/2022 Last updated : 8/24/2022
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Red Hat Enterprise Linux 8 | ![Red Hat Enterprise Linux 8 + AMD64](./media/support/green-check.png) | | |
| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) |
| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
-| Windows 10/11 Pro | ![Windows 10/11 Pro + AMD64](./media/support/green-check.png) | | ![Win 10 Pro + ARM64](./media/support/green-check.png)<sup>1</sup> |
-| Windows 10/11 Enterprise | ![Windows 10/11 Enterprise + AMD64](./media/support/green-check.png) | | ![Win 10 Enterprise + ARM64](./media/support/green-check.png)<sup>1</sup> |
-| Windows 10/11 IoT Enterprise | ![Windows 10/11 IoT Enterprise + AMD64](./media/support/green-check.png) | | ![Win 10 IoT Enterprise + ARM64](./media/support/green-check.png)<sup>1</sup> |
+| Windows 10/11 Pro | ![Windows 10/11 Pro + AMD64](./media/support/green-check.png) | | ![Win 10 Pro + ARM64](./media/support/green-check.png) |
+| Windows 10/11 Enterprise | ![Windows 10/11 Enterprise + AMD64](./media/support/green-check.png) | | ![Win 10 Enterprise + ARM64](./media/support/green-check.png) |
+| Windows 10/11 IoT Enterprise | ![Windows 10/11 IoT Enterprise + AMD64](./media/support/green-check.png) | | ![Win 10 IoT Enterprise + ARM64](./media/support/green-check.png) |
| Windows Server 2019/2022 | ![Windows Server 2019/2022 + AMD64](./media/support/green-check.png) | | |
-<sup>1</sup> Support for this platform using IoT Edge for Linux on Windows is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
:::moniker-end <!-- end iotedge-2020-11 -->
iot-edge Troubleshoot Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-networking.md
description: Learn about troubleshooting and diagnostics for Azure IoT Edge for
Previously updated : 08/03/2022 Last updated : 11/15/2022
iot-edge Troubleshoot Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows.md
Title: Troubleshoot your IoT Edge for Linux on Windows device | Microsoft Docs
description: Learn standard diagnostic skills for troubleshooting Azure IoT Edge for Linux on Windows (EFLOW) like retrieving component status and logs. Previously updated : 08/03/2022 Last updated : 11/15/2022
iot-edge Tutorial Develop For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux-on-windows.md
description: This tutorial walks through setting up your development machine and
Previously updated : 03/01/2020 Last updated : 11/15/2022
iot-edge Tutorial Monitor With Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-monitor-with-workbooks.md
description: Learn how to monitor IoT Edge modules and devices using Azure Monit
Previously updated : 08/13/2021 Last updated : 9/22/2022
iot-edge Tutorial Nested Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge-for-linux-on-windows.md
description: This tutorial shows you how to create a hierarchical structure of I
Previously updated : 08/04/2022 Last updated : 11/15/2022 monikerRange: ">=iotedge-2020-11"
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Device identities are represented as JSON documents with the following propertie
| Property | Options | Description |
| --- | --- | --- |
-| deviceId |required, read-only on updates |A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters plus certain special characters: `- . + % _ # * ? ! ( ) , : = @ $ '`. |
+| deviceId |required, read-only on updates |A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters plus certain special characters: `- . % _ * ? ! ( ) , : = @ $ '`. The special characters `+` and `#` aren't supported. |
| generationId |required, read-only |An IoT hub-generated, case-sensitive string up to 128 characters long. This value is used to distinguish devices with the same **deviceId**, when they have been deleted and re-created. |
| etag |required, read-only |A string representing a weak ETag for the device identity, as per [RFC7232](https://tools.ietf.org/html/rfc7232). |
| authentication |optional |A composite object containing authentication information and security materials. For more information, see [Authentication Mechanism](/rest/api/iothub/service/devices/get-identity#authenticationmechanism) in the REST API documentation. |
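The character rules above are easy to get wrong; here's a hypothetical validation helper (not part of any SDK) that encodes them:

```python
import re

# Hypothetical helper reflecting the deviceId rules above: 1-128 ASCII
# characters, alphanumeric plus - . % _ * ? ! ( ) , : = @ $ ' (no + or #).
DEVICE_ID_RE = re.compile(r"^[A-Za-z0-9\-.%_*?!(),:=@$']{1,128}$")

def is_valid_device_id(device_id: str) -> bool:
    return DEVICE_ID_RE.fullmatch(device_id) is not None

print(is_valid_device_id("sensor-01"))  # True
print(is_valid_device_id("sensor#01"))  # False: '#' isn't supported
```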
To try out some of the concepts described in this article, see the following IoT
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
+* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
A test configuration uses the following keys:
| Key | Type | Default value | Description |
| -- | -- | -- | -- |
| `version` | string | | Version of the YAML configuration file that the service uses. Currently, the only valid value is `v0.1`. |
-| `testName` | string | | *Required*. Name of the test to run. The results of various test runs will be collected under this test name in the Azure portal. |
+| `testId` | string | | *Required*. ID of the test to run. For a new test, enter an ID that uses only the characters `[a-z0-9_-]`. For an existing test, you can get the test ID from the test details page in the Azure portal. This field was previously called `testName`, which is now deprecated; you can still run existing tests with the `testName` field. |
+| `displayName` | string | | Display name of the test. This name is shown in the list of tests in the Azure portal. If not provided, `testId` is used as the display name. |
| `testPlan` | string | | *Required*. Relative path to the Apache JMeter test script to run. |
| `engineInstances` | integer | | *Required*. Number of parallel instances of the test engine to execute the provided test plan. You can update this property to increase the amount of load that the service can generate. |
| `configurationFiles` | array | | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name. |
-| `description` | string | | Short description of the test run. |
+| `description` | string | | Short description of the test. |
| `subnetId` | string | | Resource ID of the subnet for testing privately hosted endpoints (VNET injection). This subnet will host the injected test engine VMs. For more information, see [how to load test privately hosted endpoints](./how-to-test-private-endpoint.md). |
| `failureCriteria` | object | | Criteria that indicate when a test should fail. The structure of a fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`. For more information on the supported values, see [Define load test fail criteria](./how-to-define-test-criteria.md#load-test-fail-criteria). |
| `properties` | object | | List of properties to configure the load test. |
The following YAML snippet contains an example load test configuration:
```yaml
version: v0.1
-testName: SampleTest
+testId: SampleTest
+displayName: Sample Test
testPlan: SampleTest.jmx
description: Load test website home page
engineInstances: 1
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## January 10, 2023
+[Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+
+Version: `23.01.06`
+
+Main changes:
+
+- Added R package "ranger"
+- Pinned `pandas==1.1.5` and `numpy==1.23.0` in `azureml_py38` environment
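A quick way to confirm the pins, assuming you've activated the `azureml_py38` environment:

```python
import numpy
import pandas

print(pandas.__version__)  # expected: 1.1.5
print(numpy.__version__)   # expected: 1.23.0
```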
+ ## November 30, 2022 [Data Science VM ΓÇô Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
machine-learning How To Attach Kubernetes To Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-to-workspace.md
Otherwise, if a user-assigned managed identity is specified in Azure Machine Lea
|Azure resource name |Role to be assigned|Description|
|--|--|--|
|Azure Relay|Azure Relay Owner|Only applicable for Arc-enabled Kubernetes cluster. Azure Relay isn't created for AKS cluster without Arc connected.|
-|Azure Arc-enabled Kubernetes or AKS|Reader|Applicable for both Arc-enabled Kubernetes cluster and AKS cluster.|
+|Kubernetes - Azure Arc or Azure Kubernetes Service|Reader|Applicable for both Arc-enabled Kubernetes cluster and AKS cluster.|
Azure Relay resource is created during the extension deployment under the same Resource Group as the Arc-enabled Kubernetes cluster.
Attaching a Kubernetes cluster makes it available to your workspace for training
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
1. Under **Manage**, select **Compute**.
-1. Select the **Attached computes** tab.
+1. Select the **Kubernetes clusters** tab.
1. Select **+New > Kubernetes**
- :::image type="content" source="media/how-to-attach-kubernetes-to-workspace/attach-kubernetes-cluster.png" alt-text="Screenshot of settings for Kubernetes cluster to make available in your workspace.":::
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/kubernetes-attach.png" alt-text="Screenshot of settings for Kubernetes cluster to make available in your workspace.":::
1. Enter a compute name and select your Kubernetes cluster from the dropdown.
Attaching a Kubernetes cluster makes it available to your workspace for training
1. Select **Attach**
- In the Attached compute tab, the initial state of your cluster is *Creating*. When the cluster is successfully attached, the state changes to *Succeeded*. Otherwise, the state changes to *Failed*.
+ In the Kubernetes clusters tab, the initial state of your cluster is *Creating*. When the cluster is successfully attached, the state changes to *Succeeded*. Otherwise, the state changes to *Failed*.
- :::image type="content" source="media/how-to-attach-kubernetes-to-workspace/provision-resources.png" alt-text="Screenshot of attached settings for configuration of Kubernetes cluster.":::
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/kubernetes-creating.png" alt-text="Screenshot of attached settings for configuration of Kubernetes cluster.":::
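As an alternative to the studio steps above, here's a hedged sketch of the same attach operation with the Python SDK v2 (`azure-ai-ml`), using placeholder identifiers:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import KubernetesCompute
from azure.identity import DefaultAzureCredential

# Placeholder workspace and cluster identifiers.
ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace>"
)
compute = KubernetesCompute(
    name="k8s-compute",
    resource_id="<arc-or-aks-cluster-resource-id>",
    namespace="default",  # Kubernetes namespace used for AzureML workloads
)
ml_client.compute.begin_create_or_update(compute).result()
```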
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md
To see all compute targets for your workspace, use the following steps:
1. Select tabs at the top to show each type of compute target.
- :::image type="content" source="media/how-to-create-attach-studio/view-compute-targets.png" alt-text="View list of compute targets":::
+ :::image type="content" source="media/how-to-create-attach-studio/compute-targets.png" alt-text="View list of compute targets":::
## Compute instance and clusters
You can create compute instances and compute clusters in your workspace, using t
In addition, you can use the [VS Code extension](how-to-manage-resources-vscode.md#compute-clusters) to create compute instances and compute clusters in your workspace.
-## Kubernetes cluster
+## Kubernetes clusters
-For information on configuring and attaching a Kubrnetes cluster to your workspace, see [Configure Kubernetes cluster for Azure Machine Learning](how-to-attach-kubernetes-anywhere.md).
+For information on configuring and attaching a Kubernetes cluster to your workspace, see [Configure Kubernetes cluster for Azure Machine Learning](how-to-attach-kubernetes-anywhere.md).
## Other compute targets
To use VMs created outside the Azure Machine Learning workspace, you must first
1. Under __Manage__, select __Compute__.
-1. In the tabs at the top, select **Attached compute** to attach a compute target for **training**. Or select **Inference clusters** to attach an AKS cluster for **inferencing**.
+1. In the tabs at the top, select **Attached compute** to attach a compute target for **training**.
+
1. Select +New, then select the type of compute to attach. Not all compute types can be attached from Azure Machine Learning studio.
1. Fill out the form and provide values for the required properties.
To use VMs created outside the Azure Machine Learning workspace, you must first
1. Select __Attach__. -
-> [!IMPORTANT]
-> To attach an Azure Kubernetes Services (AKS) or Azure Arc-enabled Kubernetes cluster, you must be subscription owner or have permission to access AKS cluster resources under the subscription. Otherwise, the cluster list on "attach new compute" page will be blank.
To detach your compute, use the following steps:
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
In this article, you can learn:
## Prerequisites
-* An AKS cluster is up and running in Azure.
- * If you have not previously used cluster extensions, you need to [register the KubernetesConfiguration service provider](../aks/dapr.md#register-the-kubernetesconfiguration-service-provider).
+* An AKS cluster running in Azure. If you have not previously used cluster extensions, you need to [register the KubernetesConfiguration service provider](../aks/dapr.md#register-the-kubernetesconfiguration-service-provider).
* Or an Arc Kubernetes cluster is up and running. Follow instructions in [connect existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md).
* If the cluster is an Azure RedHat OpenShift Service (ARO) cluster or OpenShift Container Platform (OCP) cluster, you must satisfy other prerequisite steps as documented in the [Reference for configuring Kubernetes cluster](./reference-kubernetes.md#prerequisites-for-aro-or-ocp-clusters) article.
* For production purposes, the Kubernetes cluster must have a minimum of **4 vCPU cores and 14-GB memory**. For more information on resource detail and cluster size recommendations, see [Recommended resource planning](./reference-kubernetes.md).
In this article, you can learn:
- Azure Machine Learning does not guarantee support for all preview stage features in AKS. For example, [Azure AD pod identity](../aks/use-azure-ad-pod-identity.md) is not supported.
- If you've previously followed the steps from [AzureML AKS v1 document](./v1/how-to-create-attach-kubernetes.md) to create or attach your AKS as inference cluster, use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources) before you continue the next step.

## Review AzureML extension configuration settings

You can use AzureML CLI command `k8s-extension create` to deploy AzureML extension. CLI `k8s-extension create` allows you to specify a set of configuration settings in `key=value` format using `--config` or `--config-protected` parameter. Following is the list of available configuration settings to be specified during AzureML extension deployment.
After the AzureML extension deployment completes, you can use `kubectl get deployment
> [!IMPORTANT]
> * The Azure Relay resource is under the same resource group as the Arc cluster resource. It's used to communicate with the Kubernetes cluster, and modifying it will break attached compute targets.
- > * By default, the kubernetes deployment resources are randomly deployed to 1 or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes, use `nodeSelector` configuration setting described as below.
+ > * By default, the kubernetes deployment resources are randomly deployed to 1 or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes, use `nodeSelector` configuration setting described in [configuration settings table](#review-azureml-extension-configuration-settings).
> [!NOTE]
> * **{EXTENSION-NAME}:** is the extension name specified with the `az k8s-extension create --name` CLI command.
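To verify what was installed, a quick hedged check (the `azureml` namespace is the default assumption; adjust if your extension was deployed elsewhere):

```bash
# List the deployments and daemonsets created by the extension.
kubectl get deployments -n azureml
kubectl get daemonsets -n azureml
```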
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
In this article, you learn how to secure inferencing environments (online endpoi
For information on securing managed online endpoints, see the [Use network isolation with managed online endpoints](how-to-secure-online-endpoint.md) article.
-## Secure Azure Kubernetes Service
+## Secure Azure Kubernetes Service online endpoints
-To configure and attach an Azure Kubernetes Service cluster for secure inference, use the following steps:
+To use an Azure Kubernetes Service cluster for secure inference, use the following steps:
1. Create or configure a [secure Kubernetes inferencing environment](how-to-secure-kubernetes-inferencing-environment.md).
-1. [Attach the Kubernetes cluster to the workspace](how-to-attach-kubernetes-anywhere.md).
+2. Deploy [AzureML extension](how-to-deploy-kubernetes-extension.md).
+3. [Attach the Kubernetes cluster to the workspace](how-to-attach-kubernetes-anywhere.md).
+4. Deploy models with a Kubernetes online endpoint by using CLI v2, Python SDK v2, or the studio UI. A hedged CLI sketch follows this list.
+
+ * CLI v2 - https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes
+ * Python SDK V2 - https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes
+ * Studio UI - Follow the steps in [managed online endpoint deployment](how-to-use-managed-online-endpoint-studio.md) through the Studio. After entering the __Endpoint name__, select __Kubernetes__ as the compute type instead of __Managed__.
-Afterwards, you can use the cluster for inference deployments to online endpoints. For more information, see [How to deploy an online endpoint](how-to-deploy-online-endpoints.md).
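A hedged CLI v2 sketch of the deployment step (endpoint name, compute name, and file name are placeholder assumptions; the linked repositories above contain complete, authoritative examples):

```bash
# Sketch only: define and create a Kubernetes online endpoint
# against the compute attached in the previous step.
cat > k8s-endpoint.yml <<'EOF'
$schema: https://azuremlschemas.azureedge.net/latest/kubernetesOnlineEndpoint.schema.json
name: my-k8s-endpoint
compute: azureml:<attached-kubernetes-compute>
auth_mode: key
EOF

az ml online-endpoint create --file k8s-endpoint.yml \
  --resource-group <resource-group> --workspace-name <workspace>
```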
## Limit outbound connectivity from the virtual network
This article is part of a series on securing an Azure Machine Learning workflow.
* [Secure the training environment](how-to-secure-training-vnet.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md)
* [Use custom DNS](how-to-custom-dns.md)
-* [Use a firewall](how-to-access-azureml-behind-firewall.md)
+* [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
sweep_job.early_termination = None
Control your resource budget by setting limits for your sweep job. (A hedged YAML sketch follows the notes below.)

* `max_total_trials`: Maximum number of trial jobs. Must be an integer between 1 and 1000.
-* `max_concurrent_trials`: (optional) Maximum number of trial jobs that can run concurrently. If not specified, all jobs launch in parallel. If specified, must be an integer between 1 and 100.
+* `max_concurrent_trials`: (optional) Maximum number of trial jobs that can run concurrently. If not specified, `max_total_trials` jobs launch in parallel. If specified, must be an integer between 1 and 1000.
* `timeout`: Maximum time in seconds the entire sweep job is allowed to run. Once this limit is reached, the system will cancel the sweep job, including all its trials.
* `trial_timeout`: Maximum time in seconds each trial job is allowed to run. Once this limit is reached, the system will cancel the trial.

>[!NOTE]
->If both max_total_trials and max_concurrent_trials are specified, the hyperparameter tuning experiment terminates when the first of these two thresholds is reached.
+>If both `max_total_trials` and `timeout` are specified, the hyperparameter tuning experiment terminates when the first of these two thresholds is reached.
>[!NOTE]
>The number of concurrent trial jobs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
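For illustration, here's how those limits might look in a CLI v2 sweep job YAML, a hedged alternative to the SDK calls above (all names and values are arbitrary examples, not recommendations):

```bash
# Sketch: write the limits block of a sweep job spec. This fragment belongs
# under the top-level `limits` key of a full sweep job YAML.
cat > sweep-limits-snippet.yml <<'EOF'
limits:
  max_total_trials: 20       # hard cap on trial jobs (1-1000)
  max_concurrent_trials: 4   # parallelism cap; defaults to max_total_trials
  timeout: 7200              # whole sweep, in seconds
  trial_timeout: 600         # each trial, in seconds
EOF
```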
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
In instance segmentation, output consists of multiple boxes with their scaled to
> These settings are currently in public preview. They are provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

> [!WARNING]
-> **Explainability** is supported only for **multi-class classification** and **multi-label classification**. While generating explanations on online endpoint, if you encounter timeout issues, use [batch scoring notebook](https://github.com/Azure/azureml-examples/tree/rvadthyavath/xai_vision_notebooks/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multiclass-batch-scoring) to generate explanations.
+> **Explainability** is supported only for **multi-class classification** and **multi-label classification**. While generating explanations on an online endpoint, if you encounter timeout issues, use the [batch scoring notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring) to generate explanations.
In this section, we document the input data format required to make predictions and generate explanations for the predicted class or classes using a deployed model. There's no separate deployment needed for explainability. The same endpoint for online scoring can be used to generate explanations. We just need to pass some extra explainability-related parameters in the input schema and get either visualizations of explanations and/or attribution score matrices (pixel-level explanations).
If `model_explainability`, `visualizations`, `attributions` are set to `True` in
> [!WARNING]
-> While generating explanations on online endpoint, make sure to select only few classes based on confidence score in order to avoid timeout issues on the endpoint or use the endpoint with GPU instance type. To generate explanations for large number of classes in multi-label classification, refer to [batch scoring notebook](https://github.com/Azure/azureml-examples/tree/rvadthyavath/xai_vision_notebooks/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multiclass-batch-scoring).
+> While generating explanations on an online endpoint, make sure to select only a few classes based on confidence score in order to avoid timeout issues on the endpoint, or use the endpoint with a GPU instance type. To generate explanations for a large number of classes in multi-label classification, refer to the [batch scoring notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring).
```json
[
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
For AzureML extension deployment on ARO or OCP cluster, grant privileged access
> * `{EXTENSION-NAME}`: is the extension name specified with the `az k8s-extension create --name` CLI command.
> * `{KUBERNETES-COMPUTE-NAMESPACE}`: is the namespace of the Kubernetes compute specified when attaching the compute to the Azure Machine Learning workspace. Skip configuring `system:serviceaccount:{KUBERNETES-COMPUTE-NAMESPACE}:default` if `KUBERNETES-COMPUTE-NAMESPACE` is `default`.
-## AzureML extension components
-
-For Arc-connected cluster, AzureML extension deployment will create [Azure Relay](../azure-relay/relay-what-is-it.md) in Azure cloud, used to route traffic between Azure services and the Kubernetes cluster. For AKS cluster without Arc connected, Azure Relay resource won't be created.
-
-Upon AzureML extension deployment completes, it will create following resources in Kubernetes cluster, depending on each AzureML extension deployment scenario:
-
- |Resource name |Resource type |Training |Inference |Training and Inference| Description | Communication with cloud|
- |--|--|--|--|--|--|--|
- |relayserver|Kubernetes deployment|**&check;**|**&check;**|**&check;**|relay server is only needed in arc-connected cluster, and won't be installed in AKS cluster. Relay server works with Azure Relay to communicate with the cloud services.|Receive the request of job creation, model deployment from cloud service; sync the job status with cloud service.|
- |gateway|Kubernetes deployment|**&check;**|**&check;**|**&check;**|The gateway is used to communicate and send data back and forth.|Send nodes and cluster resource information to cloud services.|
- |aml-operator|Kubernetes deployment|**&check;**|N/A|**&check;**|Manage the lifecycle of training jobs.| Token exchange with the cloud token service for authentication and authorization of Azure Container Registry.|
- |metrics-controller-manager|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Manage the configuration for Prometheus|N/A|
- |{EXTENSION-NAME}-kube-state-metrics|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Export the cluster-related metrics to Prometheus.|N/A|
- |{EXTENSION-NAME}-prometheus-operator|Kubernetes deployment|Optional|Optional|Optional| Provide Kubernetes native deployment and management of Prometheus and related monitoring components.|N/A|
- |amlarc-identity-controller|Kubernetes deployment|N/A|**&check;**|**&check;**|Request and renew Azure Blob/Azure Container Registry token through managed identity.|Token exchange with the cloud token service for authentication and authorization of Azure Container Registry and Azure Blob used by inference/model deployment.|
- |amlarc-identity-proxy|Kubernetes deployment|N/A|**&check;**|**&check;**|Request and renew Azure Blob/Azure Container Registry token through managed identity.|Token exchange with the cloud token service for authentication and authorization of Azure Container Registry and Azure Blob used by inference/model deployment.|
- |azureml-fe-v2|Kubernetes deployment|N/A|**&check;**|**&check;**|The front-end component that routes incoming inference requests to deployed services.|Send service logs to Azure Blob.|
- |inference-operator-controller-manager|Kubernetes deployment|N/A|**&check;**|**&check;**|Manage the lifecycle of inference endpoints. |N/A|
- |volcano-admission|Kubernetes deployment|Optional|N/A|Optional|Volcano admission webhook.|N/A|
- |volcano-controllers|Kubernetes deployment|Optional|N/A|Optional|Manage the lifecycle of Azure Machine Learning training job pods.|N/A|
- |volcano-scheduler |Kubernetes deployment|Optional|N/A|Optional|Used to perform in-cluster job scheduling.|N/A|
- |fluent-bit|Kubernetes daemonset|**&check;**|**&check;**|**&check;**|Gather the components' system log.| Upload the components' system log to cloud.|
- |{EXTENSION-NAME}-dcgm-exporter|Kubernetes daemonset|Optional|Optional|Optional|dcgm-exporter exposes GPU metrics for Prometheus.|N/A|
- |nvidia-device-plugin-daemonset|Kubernetes daemonset|Optional|Optional|Optional|nvidia-device-plugin-daemonset exposes GPUs on each node of your cluster| N/A|
- |prometheus-prom-prometheus|Kubernetes statefulset|**&check;**|**&check;**|**&check;**|Gather and send job metrics to cloud.|Send job metrics like cpu/gpu/memory utilization to cloud.|
-
-> [!IMPORTANT]
- > * Azure Relay resource is under the same resource group as the Arc cluster resource. It is used to communicate with the Kubernetes cluster and modifying them will break attached compute targets.
- > * By default, the kubernetes deployment resources are randomly deployed to 1 or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes, use `nodeSelector` configuration setting described as below.
-
-> [!NOTE]
- > * **{EXTENSION-NAME}:** is the extension name specified with ```az k8s-extension create --name``` CLI command.
## AzureML jobs connect with custom data storage
machine-learning Reference Yaml Job Sweep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-sweep.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Default value |
| --- | ---- | ----------- | ------------- |
-| `max_total_trials` | integer | The maximum time in seconds the job is allowed to run. Once this limit is reached, the system will cancel the job. | `1000` |
-| `max_concurrent_trials` | integer | | Defaults to `max_total_trials`. |
-| `timeout` | integer | The maximum time in seconds the entire sweep job is allowed to run. Once this limit is reached, the system will cancel the sweep job, including all its trials. | `604800` |
+| `max_total_trials` | integer | The maximum number of trial jobs. | `1000` |
+| `max_concurrent_trials` | integer | The maximum number of trial jobs that can run concurrently. | Defaults to `max_total_trials`. |
+| `timeout` | integer | The maximum time in seconds the entire sweep job is allowed to run. Once this limit is reached, the system will cancel the sweep job, including all its trials. | `5184000` |
| `trial_timeout` | integer | The maximum time in seconds each trial job is allowed to run. Once this limit is reached, the system will cancel the trial. | |

### Attributes of the `trial` key
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
You can use a self-signed or a custom certificate to secure access to the [distr
If you don't want to provide a custom HTTPS certificate at this stage, you don't need to collect anything. You'll be able to change this configuration later by following [Modify the local access configuration in a site](modify-local-access-configuration.md).
-If you want to provide a custom HTTPS certificate at site creation, follow the steps below. You'll need a certificate signed by a globally known and trusted CA. Your certificate must use a private key of type RSA or EC to ensure it's exportable (see [Exportable or non-exportable key](/azure/key-vault/certificates/about-certificates) for more information).
+If you want to provide a custom HTTPS certificate at site creation, follow the steps below. You'll need a certificate signed by a globally known and trusted CA. Your certificate must use a private key of type RSA or EC to ensure it's exportable (see [Exportable or non-exportable key](../key-vault/certificates/about-certificates.md) for more information).
- 1. Either [create an Azure Key Vault](/azure/key-vault/general/quick-create-portal) or choose an existing one to host your certificate. Ensure the Azure Key Vault is configured with **Azure Virtual Machines for deployment** resource access.
- 1. [Add the certificate to your Key Vault](/azure/key-vault/certificates/quick-create-portal). If you want to configure your certificate to renew automatically, see [Tutorial: Configure certificate auto-rotation in Key Vault](/azure/key-vault/certificates/tutorial-rotate-certificates) for information on enabling auto-rotation.
+ 1. Either [create an Azure Key Vault](../key-vault/general/quick-create-portal.md) or choose an existing one to host your certificate. Ensure the Azure Key Vault is configured with **Azure Virtual Machines for deployment** resource access.
+ 1. [Add the certificate to your Key Vault](../key-vault/certificates/quick-create-portal.md). If you want to configure your certificate to renew automatically, see [Tutorial: Configure certificate auto-rotation in Key Vault](../key-vault/certificates/tutorial-rotate-certificates.md) for information on enabling auto-rotation.
   > [!NOTE]
   > Certificate validation will always be performed against the latest version of the local access certificate in the Key Vault.
   >
   > If you enable auto-rotation, it might take up to four hours for certificate updates in the Key Vault to synchronize with the edge location.

1. Decide how you want to provide access to your certificate. You can use a Key Vault access policy or Azure role-based access control (Azure RBAC).
- - [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy?tabs=azure-portal). Provide **Get** and **List** permissions under **Secret permissions** and **Certificate permissions** to the **Private Mobile Network** service principal.
- - [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](/azure/key-vault/general/rbac-guide?tabs=azure-cli). Provide **Key Vault Reader** and **Key Vault Secrets User** permissions to the **Private Mobile Network** service principal.
+ - [Assign a Key Vault access policy](../key-vault/general/assign-access-policy.md?tabs=azure-portal). Provide **Get** and **List** permissions under **Secret permissions** and **Certificate permissions** to the **Private Mobile Network** service principal.
+ - [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../key-vault/general/rbac-guide.md?tabs=azure-cli). Provide **Key Vault Reader** and **Key Vault Secrets User** permissions to the **Private Mobile Network** service principal.
1. Collect the values in the following table.
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
Azure Private 5G Core Preview private mobile networks include one or more *sites
- Carry out the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md) for your new site.
- Collect all of the information in [Collect the required information for a site](collect-required-information-for-a-site.md).
-- Refer to the release notes for the current version of packet core, and whether it's supported by the version your Azure Stack Edge (ASE) is currently running. If your ASE version is incompatible with the latest packet core, [update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update).
+- Refer to the release notes for the current version of packet core, and whether it's supported by the version your Azure Stack Edge (ASE) is currently running. If your ASE version is incompatible with the latest packet core, [update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md).
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.

## Create the mobile network site resource
In this step, you'll create the mobile network site resource representing the ph
- Select the recommended packet core version in the **Version** field.

   > [!NOTE]
- > If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to update ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update). Once you've finished updating your ASE, go back to the beginning of this step to create the site resource.
+ > If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to update ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md). Once you've finished updating your ASE, go back to the beginning of this step to create the site resource.
- Ensure **AKS-HCI** is selected in the **Platform** field.
In this step, you'll create the mobile network site resource representing the ph
If you haven't already done so, you should now design the policy control configuration for your private mobile network. This allows you to customize how your packet core instances apply quality of service (QoS) characteristics to traffic. You can also block or limit certain flows.

-- [Learn more about designing the policy control configuration for your private mobile network](policy-control.md)
+- [Learn more about designing the policy control configuration for your private mobile network](policy-control.md)
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
- [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md). If you want to provision SIMs, you'll need to prepare a JSON file containing your SIM information, as described in [JSON file format for provisioning SIMs](collect-required-information-for-private-mobile-network.md#json-file-format-for-provisioning-sims).
- Identify the names of the interfaces corresponding to ports 5 and 6 on the Azure Stack Edge Pro device in the site.
- [Collect the required information for a site](collect-required-information-for-a-site.md).
-- Refer to the release notes for the current version of packet core, and whether it's supported by the version your Azure Stack Edge (ASE) is currently running. If your ASE version is incompatible with the latest packet core, [update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update).
+- Refer to the release notes for the current version of packet core, and whether it's supported by the version your Azure Stack Edge (ASE) is currently running. If your ASE version is incompatible with the latest packet core, [update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md).
## Review the template
private-5g-core Modify Local Access Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-local-access-configuration.md
In this how-to guide, you'll learn how to use the Azure portal to change the cer
## Prerequisites

-- If you want to add or update a custom HTTPS certificate for accessing your local monitoring tools, you'll need a certificate signed by a globally known and trusted CA. Your certificate must use a private key of type RSA or EC to ensure it's exportable (see [Exportable or non-exportable key](/azure/key-vault/certificates/about-certificates) for more information).
+- If you want to add or update a custom HTTPS certificate for accessing your local monitoring tools, you'll need a certificate signed by a globally known and trusted CA. Your certificate must use a private key of type RSA or EC to ensure it's exportable (see [Exportable or non-exportable key](../key-vault/certificates/about-certificates.md) for more information).
- Refer to [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to collect the required values and make sure they're in the correct format.
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
In this step, you'll navigate to the **Packet Core Control Plane** resource repr
## Next steps

- [Learn more about the distributed tracing web GUI](distributed-tracing.md)
-- [Learn more about the packet core dashboards](packet-core-dashboards.md)
+- [Learn more about the packet core dashboards](packet-core-dashboards.md)
private-5g-core Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/security.md
As these credentials are highly sensitive, Azure Private 5G Core won't allow use
Access to the [distributed tracing](distributed-tracing.md) and [packet core dashboards](packet-core-dashboards.md) is secured by HTTPS. You can provide your own HTTPS certificate to attest access to your local diagnostics tools. Providing a certificate signed by a globally known and trusted certificate authority (CA) grants additional security to your deployment; we recommend this option over using a certificate signed by its own private key (self-signed).
-If you decide to provide your own certificates for local monitoring access, you'll need to add the certificate to an [Azure Key Vault](/azure/key-vault/) and set up the appropriate access permissions. See [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) for additional information on configuring custom HTTPS certificates for local monitoring access.
+If you decide to provide your own certificates for local monitoring access, you'll need to add the certificate to an [Azure Key Vault](../key-vault/index.yml) and set up the appropriate access permissions. See [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) for additional information on configuring custom HTTPS certificates for local monitoring access.
You can configure how access to your local monitoring tools is attested while [creating a site](create-a-site.md). For existing sites, you can modify the local access configuration by following [Modify the local access configuration in a site](modify-local-access-configuration.md). We recommend that you replace certificates at least once per year, including removing the old certificates from your system. This is known as rotating certificates. You might need to rotate your certificates more frequently if they expire after less than one year, or if organizational policies require it.
-For more information on how to generate a Key Vault certificate, see [Certificate creation methods](/azure/key-vault/certificates/create-certificate).
+For more information on how to generate a Key Vault certificate, see [Certificate creation methods](../key-vault/certificates/create-certificate.md).
## Next steps
private-5g-core Upgrade Packet Core Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-arm-template.md
We recommend upgrading your packet core instance during a maintenance window to
When planning for your upgrade, make sure you're allowing sufficient time for an upgrade and a possible rollback in the event of any issues. In addition, consider the following points for pre- and post-upgrade steps you may need to plan for when scheduling your maintenance window:

- Refer to the packet core release notes for the version of packet core you're upgrading to and whether it's supported by the version your Azure Stack Edge (ASE) is currently running.
-- If your ASE version is incompatible with the packet core version you're upgrading to, you'll need to upgrade ASE first. Refer to [Update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update) for the latest available version of ASE.
+- If your ASE version is incompatible with the packet core version you're upgrading to, you'll need to upgrade ASE first. Refer to [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md) for the latest available version of ASE.
- If you're currently running a packet core version that the ASE version you're upgrading to supports, you can upgrade ASE and packet core independently.
- - If you're currently running a packet core version that the ASE version you're upgrading to doesn't support, it's possible that packet core won't operate normally with the new ASE version. In this case, we recommend planning a maintenance window that allows you time to upgrade both ASE and packet core. Refer to [Update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update) for how long the ASE upgrade will take.
+ - If you're currently running a packet core version that the ASE version you're upgrading to doesn't support, it's possible that packet core won't operate normally with the new ASE version. In this case, we recommend planning a maintenance window that allows you time to upgrade both ASE and packet core. Refer to [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md) for how long the ASE upgrade will take.
- Prepare a testing plan with any steps you'll need to follow to validate your deployment post-upgrade. This plan should include testing some registered devices and sessions, and you'll execute it as part of [Verify upgrade](#verify-upgrade).
- Review [Restore backed up deployment information](#restore-backed-up-deployment-information) and [Verify upgrade](#verify-upgrade) for the post-upgrade steps you'll need to follow to ensure your deployment is fully operational. Make sure your upgrade plan allows sufficient time for these steps.
The following list contains data that will be lost during a packet core upgrade.
### Upgrade ASE
-If you determined in [Plan for your upgrade](#plan-for-your-upgrade) that you need to upgrade your ASE, follow the steps in [Update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update).
+If you determined in [Plan for your upgrade](#plan-for-your-upgrade) that you need to upgrade your ASE, follow the steps in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md).
### Upgrade packet core
If you determined in [Plan for your upgrade](#plan-for-your-upgrade) that you ne
:::image type="content" source="media/upgrade-packet-core-arm-template/upgrade-arm-template-configuration-fields.png" alt-text="Screenshot of the Azure portal showing the configuration fields for the upgrade ARM template."::: > [!NOTE]
- > If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to upgrade ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update). Once you've finished updating your ASE, go back to the beginning of this step to upgrade packet core.
+ > If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to upgrade ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md). Once you've finished updating your ASE, go back to the beginning of this step to upgrade packet core.
1. Select **Review + create**.
1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
Note that any configuration you set while your packet core instance was running
You've finished upgrading your packet core instance.

- If your deployment contains multiple sites, upgrade the packet core instance in another site.
-- Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to monitor your deployment.
+- Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to monitor your deployment.
private-5g-core Upgrade Packet Core Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md
We recommend upgrading your packet core instance during a maintenance window to
When planning for your upgrade, make sure you're allowing sufficient time for an upgrade and a possible rollback in the event of any issues. In addition, consider the following points for pre- and post-upgrade steps you may need to plan for when scheduling your maintenance window:

- Refer to the packet core release notes for the version of packet core you're upgrading to and whether it's supported by the version your Azure Stack Edge (ASE) is currently running.
-- If your ASE version is incompatible with the packet core version you're upgrading to, you'll need to upgrade ASE first. Refer to [Update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update) for the latest available version of ASE.
+- If your ASE version is incompatible with the packet core version you're upgrading to, you'll need to upgrade ASE first. Refer to [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md) for the latest available version of ASE.
- If you're currently running a packet core version that the ASE version you're upgrading to supports, you can upgrade ASE and packet core independently.
- - If you're currently running a packet core version that the ASE version you're upgrading to doesn't support, it's possible that packet core won't operate normally with the new ASE version. In this case, we recommend planning a maintenance window that allows you time to upgrade both ASE and packet core. Refer to [Update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update) for how long the ASE upgrade will take.
+ - If you're currently running a packet core version that the ASE version you're upgrading to doesn't support, it's possible that packet core won't operate normally with the new ASE version. In this case, we recommend planning a maintenance window that allows you time to upgrade both ASE and packet core. Refer to [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md) for how long the ASE upgrade will take.
- Prepare a testing plan with any steps you'll need to follow to validate your deployment post-upgrade. This plan should include testing some registered devices and sessions, and you'll execute it as part of [Verify upgrade](#verify-upgrade).
- Review [Restore backed up deployment information](#restore-backed-up-deployment-information) and [Verify upgrade](#verify-upgrade) for the post-upgrade steps you'll need to follow to ensure your deployment is fully operational. Make sure your upgrade plan allows sufficient time for these steps.
The following list contains data that will be lost during a packet core upgrade.
### Upgrade ASE
-If you determined in [Plan for your upgrade](#plan-for-your-upgrade) that you need to upgrade your ASE, follow the steps in [Update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update).
+If you determined in [Plan for your upgrade](#plan-for-your-upgrade) that you need to upgrade your ASE, follow the steps in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md).
### Upgrade packet core
If you determined in [Plan for your upgrade](#plan-for-your-upgrade) that you ne
:::image type="content" source="media/upgrade-packet-core-azure-portal/upgrade-packet-core-version.png" alt-text="Screenshot of the Azure portal showing the New version field on the Upgrade packet core version screen. The recommended up-level version is selected."::: > [!NOTE]
- > If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to upgrade ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update). Once you've finished updating your ASE, go back to the beginning of this step to upgrade packet core.
+ > If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to upgrade ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md). Once you've finished updating your ASE, go back to the beginning of this step to upgrade packet core.
1. Select **Modify**.
1. Azure will now redeploy the packet core instance at the new software version. The Azure portal will display the following confirmation screen when this deployment is complete.
Note that any configuration you set while your packet core instance was running
You've finished upgrading your packet core instance.

- If your deployment contains multiple sites, upgrade the packet core instance in another site.
-- Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to monitor your deployment.
+- Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to monitor your deployment.
purview Available Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/available-metadata.md
Previously updated : 08/02/2022 Last updated : 12/14/2022

# Available metadata
This article has a list of the metadata that is available for a Power BI tenant
## Power BI

| Metadata | Population method | Source of truth | Asset type | Editable | Upstream metadata |
-|--|-|-|--|-||
+| -- | -- | -- | -- | -- | -- |
| Classification | Manual | Microsoft Purview | All types | Yes | N/A |
| Sensitivity Labels | Automatic | Microsoft Purview | All types | No | |
| Glossary terms | Manual | Microsoft Purview | All types | Yes | N/A |
This article has a list of the metadata that is available for a Power BI tenant
| EmbedUrl | Automatic | Power BI | Power BI Dashboard | No | dashboard.EmbedUrl |
| tileNames | Automatic | Power BI | Power BI Dashboard | No | TileTitles |
| Lineage | Automatic | Power BI | Power BI Dashboard | No | N/A |
+| users | Automatic | Power BI | Power BI Dashboard | No | dashboard.Users |
| name | Automatic | Power BI | Power BI dataflow | Yes | dataflow.Name |
| configured by | Automatic | Power BI | Power BI dataflow | No | dataflow.ConfiguredBy |
| description | Automatic | Power BI | Power BI dataflow | Yes | dataflow.Description |
-| ModelUrl | Automatic | Power BI | Power BI dataflow | No | dataflow.ModelUrl |
| ModifiedBy | Automatic | Power BI | Power BI dataflow | No | dataflow.ModifiedBy |
| ModifiedDateTime | Automatic | Power BI | Power BI dataflow | No | dataflow.ModifiedDateTime |
| Endorsement | Automatic | Power BI | Power BI dataflow | No | dataflow.EndorsementDetails |
+| users | Automatic | Power BI | Power BI dataflow | No | dataflow.Users |
| name | Automatic | Power BI | Power BI Dataset | Yes | dataset.Name |
| IsRefreshable | Automatic | Power BI | Power BI Dataset | No | dataset.IsRefreshable |
| configuredBy | Automatic | Power BI | Power BI Dataset | No | dataset.ConfiguredBy |
This article has a list of the metadata that is available for a Power BI tenant
| Lineage | Automatic | Microsoft Purview | Power BI Dataset | No | |
| description | Automatic | Power BI | Power BI Dataset | Yes | dataset.Description |
| Endorsement | Automatic | Power BI | Power BI Dataset | No | dataset.EndorsementDetails |
+| users | Automatic | Power BI | Power BI Dataset | No | dataset.Users |
| name | Automatic | Power BI | Power BI Report | Yes | report.Name |
| description | Automatic | Power BI | Power BI Report | Yes | report.Description |
| createdDateTime | Automatic | Power BI | Power BI Report | No | report.CreatedDateTime |
This article has a list of the metadata that is available for a Power BI tenant
| reportType | Automatic | Power BI | Power BI Report | No | report.ReportType |
| Endorsement | Automatic | Power BI | Power BI Report | No | report.EndorsementDetails |
| Lineage | Automatic | Microsoft Purview | Power BI Report | No | N/A |
+| users | Automatic | Power BI | Power BI Report | No | workspace.Users |
| name | Automatic | Power BI | Power BI Workspace | Yes | workspace.Name |
| Description | Automatic | Power BI | Power BI Workspace | Yes | workspace.Description |
| state | Automatic | Power BI | Power BI Workspace | No | workspace.State |
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
Previously updated : 12/05/2022 Last updated : 01/10/2023

# Create and manage a self-hosted integration runtime
To create and set up a self-hosted integration runtime, use the following proced
:::image type="content" source="media/manage-integration-runtimes/successfully-registered.png" alt-text="successfully registered.":::
+You can register multiple nodes for a self-hosted integration runtime using the same key. Learn more from [High availability and scalability](#high-availability-and-scalability).
+
## Manage a self-hosted integration runtime
-You can edit a self-hosted integration runtime by navigating to **Integration runtimes** in the **Management center**, selecting the IR and then selecting edit. You can now update the description, copy the key, or regenerate new keys.
+You can edit a self-hosted integration runtime by navigating to **Integration runtimes** in the Microsoft Purview governance portal, hovering over the IR, and then selecting the **Edit** button.
+In the **Settings** tab, you can update the description, copy the key, or regenerate new keys. In the **Nodes** tab, you can manage the registered nodes. In the **Version** tab, you can see the IR version status.
:::image type="content" source="media/manage-integration-runtimes/edit-integration-runtime-settings.png" alt-text="edit IR details.":::
-You can delete a self-hosted integration runtime by navigating to **Integration runtimes** in the Management center, selecting the IR and then selecting **Delete**. Once an IR is deleted, any ongoing scans relying on it will fail.
+You can delete a self-hosted integration runtime by navigating to **Integration runtimes**, hovering over the IR, and then selecting the **Delete** button. Once an IR is deleted, any ongoing scans relying on it will fail.
### Notification area icons and notifications
Make sure the account has the **Log on as a service** permission. Otherwise self-
:::image type="content" source="../data-factory/media/create-self-hosted-integration-runtime/shir-service-account-permission-2.png" alt-text="Screenshot of Log on as a service user rights assignment":::
+## High availability and scalability
+
+You can associate a self-hosted integration runtime with multiple on-premises machines or virtual machines in Azure. These machines are called nodes. You can have up to four nodes associated with a self-hosted integration runtime. The benefits of having multiple nodes are:
+
+- Higher availability of the self-hosted integration runtime, so that it's no longer the single point of failure for scans. This availability helps ensure continuity when you use up to four nodes.
+- Run more concurrent scans. Each self-hosted integration runtime can run a number of scans at the same time, determined automatically based on the machine's CPU and memory. You can install additional nodes if you need more concurrency. Each scan will be executed on one of the nodes. Having more nodes doesn't improve the performance of a single scan execution.
+
+You can associate multiple nodes by installing the self-hosted integration runtime software from [Download Center](https://www.microsoft.com/download/details.aspx?id=39717). Then, register it by using the same authentication key.
+
+> [!NOTE]
+> Before you add another node for high availability and scalability, ensure that the **Remote access to intranet** option is enabled on the first node. To do so, select **Microsoft Integration Runtime Configuration Manager** > **Settings** > **Remote access to intranet**.
## Networking requirements

Your self-hosted integration runtime machine needs to connect to several resources to work correctly:
Here are the domains and outbound ports that you need to allow at both **corpora
| Domain names | Outbound ports | Description |
| ------------ | -------------- | ----------- |
| `*.frontend.clouddatahub.net` | 443 | Required to connect to the Microsoft Purview service. Currently wildcard is required as there's no dedicated resource. |
-| `*.servicebus.windows.net` | 443 | Required for setting up scan in the Microsoft Purview governance portal. This endpoint is used for interactive authoring from UI, for example, test connection, browse folder list and table list to scope scan. Currently wildcard is required as there's no dedicated resource. |
+| `*.servicebus.windows.net` | 443 | Required for setting up scan in the Microsoft Purview governance portal. This endpoint is used for interactive authoring from UI, for example, test connection, browse folder list and table list to scope scan. To avoid using wildcard, see [Get URL of Azure Relay](#get-url-of-azure-relay).|
| `<purview_account>.purview.azure.com` | 443 | Required to connect to Microsoft Purview service. If you use Purview [Private Endpoints](catalog-private-link.md), this endpoint is covered by *account private endpoint*. |
| `<managed_storage_account>.blob.core.windows.net` | 443 | Required to connect to the Microsoft Purview managed Azure Blob storage account. If you use Purview [Private Endpoints](catalog-private-link.md), this endpoint is covered by *ingestion private endpoint*. |
| `<managed_storage_account>.queue.core.windows.net` | 443 | Required to connect to the Microsoft Purview managed Azure Queue storage account. If you use Purview [Private Endpoints](catalog-private-link.md), this endpoint is covered by *ingestion private endpoint*. |
Here are the domains and outbound ports that you need to allow at both **corpora
| `login.windows.net`<br>`login.microsoftonline.com` | 443 | Required to sign in to Azure Active Directory. |

> [!NOTE]
-> As currently Azure Relay doesn't support service tag, you have to use service tag AzureCloud or Internet in NSG rules for the communication to Azure Relay. For the communication to Microsoft Purview.
+> Because Azure Relay doesn't currently support service tags, you have to use the AzureCloud or Internet service tag in NSG rules for communication to Azure Relay.
Depending on the sources you want to scan, you also need to allow other domains and outbound ports for other Azure or external sources. A few examples are provided here:
For some cloud data stores such as Azure SQL Database and Azure Storage, you may
> [!IMPORTANT]
> In most environments, you will also need to make sure that your DNS is correctly configured. To confirm, you can use **nslookup** from your SHIR machine to check connectivity to each of the domains. Each nslookup should return the IP of the resource. If you are using [Private Endpoints](catalog-private-link.md), the private IP should be returned and not the Public IP. If no IP is returned, or if when using Private Endpoints the public IP is returned, you need to address your DNS/VNet association, or your Private Endpoint/VNet peering.
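For example, a quick spot check of the endpoints listed above (the account and storage account names are the placeholders from the table):

```bash
# Each lookup should resolve; with Private Endpoints, expect the private IP.
nslookup <purview_account>.purview.azure.com
nslookup <managed_storage_account>.blob.core.windows.net
```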
+### Get URL of Azure Relay
+
+One domain and port that you need to put in the allowlist of your firewall is for communication to Azure Relay. The self-hosted integration runtime uses it for interactive authoring such as test connection and browsing folder/table lists. If you don't want to allow `*.servicebus.windows.net` and would like to have more specific URLs, you can see all the FQDNs that are required by your self-hosted integration runtime. Follow these steps:
+
+1. Go to the Microsoft Purview governance portal -> Data map -> Integration runtimes, and edit your self-hosted integration runtime.
+2. In Edit page, select **Nodes** tab.
+3. Select **View Service URLs** to get all FQDNs.
+
+ :::image type="content" source="media/manage-integration-runtimes/get-azure-relay-urls.png" alt-text="Screenshot that shows how to get Azure Relay URLs for an integration runtime.":::
+
+4. You can add these FQDNs in the allowlist of firewall rules.
+
+> [!NOTE]
+> For the details related to Azure Relay connections protocol, see [Azure Relay Hybrid Connections protocol](../azure-relay/relay-hybrid-connections-protocol.md).
## Proxy server considerations

If your corporate network environment uses a proxy server to access the internet, configure the self-hosted integration runtime to use appropriate proxy settings. You can set the proxy during the initial registration phase or after it's registered.
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
The following is a list of all the Azure data source (data center) regions where
- West Europe
- West US
- West US 2
+- West US 3
## File types supported for scanning
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Previously updated : 12/05/2022 Last updated : 01/10/2023

# Discover and govern Azure SQL Database in Microsoft Purview
You can [browse through the data catalog](how-to-browse-catalog.md) or [search t
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage.png" alt-text="Screenshot that shows lineage details from stored procedures.":::
+ When applicable, you can further drill down to see the lineage at the SQL statement level within a stored procedure, along with column-level lineage. When using a Self-hosted Integration Runtime for scans, retrieving the lineage drilldown information during scan is supported since version 5.25.8374.1.
+
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-drilldown.png" alt-text="Screenshot that shows stored procedure lineage drilldown.":::
 For information about supported Azure SQL Database lineage scenarios, refer to the [Supported capabilities](#supported-capabilities) section of this article. For more information about lineage in general, see [Data lineage in Microsoft Purview](concept-data-lineage.md) and [Microsoft Purview Data Catalog lineage user guide](catalog-lineage-user-guide.md).

2. Go to the stored procedure asset. On the **Properties** tab, go to **Related assets** to get the latest run details of stored procedures.
reliability Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md
Regions are paired for cross-region replication based on proximity and other fac
| United Arab Emirates | UAE North | UAE Central\* |
| US Department of Defense |US DoD East\* |US DoD Central\* |
| US Government |US Gov Arizona\* |US Gov Texas\* |
-| US Government |US Gov Iowa\* |US Gov Virginia\* |
| US Government |US Gov Virginia\* |US Gov Texas\* |

(\*) Certain regions are access restricted to support specific customer scenarios, such as in-country disaster recovery. These regions are available only upon request by [creating a new support request](https://learn.microsoft.com/troubleshoot/azure/general/region-access-request-process#reserved-access-regions).
reliability Reliability Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-containers.md
When an entire Azure region or datacenter experiences downtime, your mission-cri
> [Azure Cache for Redis Premium service tiers](../container-instances/availability-zones.md#next-steps)

> [!div class="nextstepaction"]
-> [Reliability in Azure](/azure/reliability/overview)
---
+> [Reliability in Azure](./overview.md)
route-server Tutorial Protect Route Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/tutorial-protect-route-server.md
This article helps you create an Azure Route Server with a DDoS protected virtual network. Azure DDoS protection protects your publicly accessible route server from Distributed Denial of Service attacks.

> [!IMPORTANT]
-> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](/azure/ddos-protection/ddos-protection-overview).
+> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
In this tutorial, you learn how to:
If you're not going to continue to use this application, delete the virtual netw
Advance to the next article to learn how to:

> [!div class="nextstepaction"]
-> [Configure peering between Azure Route Server and network virtual appliance](tutorial-configure-route-server-with-quagga.md)
-
+> [Configure peering between Azure Route Server and network virtual appliance](tutorial-configure-route-server-with-quagga.md)
search Search Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage.md
You can also use the management client libraries in the Azure SDKs for .NET, Pyt
## Data collection and retention
-Because Azure Cognitive Search is a [monitored resource](/azure/azure-monitor/monitor-reference), you can review the built-in [**activity logs**](/azure/azure-monitor/essentials/activity-log) and [**platform metrics**](/azure/azure-monitor/essentials/data-platform-metrics#types-of-metrics) for insights into service operations. Activity logs and the data used to report on platform metrics are retained for the periods described in the following table.
+Because Azure Cognitive Search is a [monitored resource](../azure-monitor/monitor-reference.md), you can review the built-in [**activity logs**](../azure-monitor/essentials/activity-log.md) and [**platform metrics**](../azure-monitor/essentials/data-platform-metrics.md#types-of-metrics) for insights into service operations. Activity logs and the data used to report on platform metrics are retained for the periods described in the following table.
-If you opt in for [**resource logging**](/azure/azure-monitor/essentials/resource-logs), you'll specify durable storage over which you'll have full control over data retention and data access through Kusto queries. For more information on how to set up resource logging in Cognitive Search, see [Collect and analyze log data](monitor-azure-cognitive-search.md).
+If you opt in for [**resource logging**](../azure-monitor/essentials/resource-logs.md), you'll specify durable storage over which you'll have full control over data retention and data access through Kusto queries. For more information on how to set up resource logging in Cognitive Search, see [Collect and analyze log data](monitor-azure-cognitive-search.md).
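As a hedged sketch of pulling the activity log data described above (the resource group name is a placeholder assumption):

```bash
# List the past week of activity-log entries for the resource group
# that contains your search service.
az monitor activity-log list --resource-group <search-service-rg> --offset 7d --output table
```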
Internally, Microsoft collects telemetry data about your service and the platform. It's stored internally in Microsoft data centers and made globally available to Microsoft support engineers when you open a support ticket.
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
API keys are frequently used when making REST API calls to a search service. You
> [!NOTE]
> A quick note about "key" terminology in Cognitive Search. An "API key", which is described in this article, refers to a GUID used for authenticating a request. A "document key" refers to a unique string in your indexed content that's used to uniquely identify documents in a search index. API keys and document keys are unrelated.
-## What's an API key?
+## Types of API keys
There are two kinds of keys used for authenticating a request:
There are two kinds of keys used for authenticating a request:
| Admin | Full access (read-write) for all content operations | 2 <sup>1</sup>| Two admin keys, referred to as *primary* and *secondary* keys in the portal, are generated when the service is created and can be individually regenerated on demand. |
| Query | Read-only access, scoped to the documents collection of a search index | 50 | One query key is generated with the service. More can be created on demand by a search service administrator. |
-<sup>1</sup> Having two allows you to roll over one key while using the second key for continued access to the service.
+<sup>1</sup> Having two allows you to roll over one key while using the second key for continued access to the service.
Visually, there's no distinction between an admin key or query key. Both keys are strings composed of 52 randomly generated alpha-numeric characters. If you lose track of what type of key is specified in your application, you can [check the key values in the portal](#find-existing-keys).
Visually, there's no distinction between an admin key or query key. Both keys ar
API keys are specified on client requests to a search service. Passing a valid API key on the request is considered proof that the request is from an authorized client. If you're creating, modifying, or deleting objects, you'll need an admin API key. Otherwise, query keys are typically distributed to client applications that issue queries.
-You can specify API keys in a request header for REST API calls, or in code that calls the azure.search.documents client libraries in the Azure SDKs. If you're using the Azure portal to perform tasks, your role assignment determines the level of access.
+You can specify API keys in a request header for REST API calls, or in code that calls the azure.search.documents client libraries in the Azure SDKs. If you're using the Azure portal to perform tasks, your role assignment determines the [level of access](#permissions-to-view-or-manage-api-keys).
-Best practices for using hard-coded in source files include:
+If you use hard-coded keys in source files, best practices include:
+ During early development and proof-of-concept testing when security is looser, use sample or public data.
-+ After advancing into deeper development or production scenarios, switch to [Azure Active Directory and role-based access](search-security-rbac.md) to eliminate the need for having hard-coded keys. Or, if you want to continue using API keys, be sure to always monitor who has access to your API keys and regenerate API keys on a regular cadence.
++ For mature solutions or production scenarios, switch to [Azure Active Directory and role-based access](search-security-rbac.md) to eliminate the need for having hard-coded keys. Or, if you want to continue using API keys, be sure to always monitor [who has access to your API keys](#secure-api-keys) and [regenerate API keys](#regenerate-admin-keys) on a regular cadence.
-### [**REST**](#tab/rest-use)
+### [**Portal**](#tab/portal-use)
+
+In Cognitive Search, most tasks can be performed in the Azure portal, including object creation, indexing through the Import data wizard, and queries through Search explorer.
+
+Authentication is built in, so no action is required. By default, the portal uses API keys to authenticate the request automatically. However, if you [disable API keys](search-security-rbac.md#disable-api-key-authentication) and set up role assignments, the portal uses role assignments instead.
+
+### [**PowerShell**](#tab/azure-ps-use)
+
+A script example showing API key usage for various operations can be found at [Quickstart: Create an Azure Cognitive Search index in PowerShell using REST APIs](search-get-started-powershell.md).
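For a sense of what that usage looks like outside the quickstart, here's a minimal, hypothetical sketch that lists indexes by passing an admin key in the `api-key` header (the service name and key are placeholders):

```azurepowershell
# Hypothetical sketch: call the Search REST API from PowerShell with an admin API key
$headers = @{
    'api-key' = '<admin-api-key>'
    'Content-Type' = 'application/json'
}
Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "https://<search-service-name>.search.windows.net/indexes?api-version=2020-06-30"
```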
+
+### [**REST API**](#tab/rest-use)
+ Admin keys are only specified in HTTP request headers. You can't place an admin API key in a URL. See [Connect to Azure Cognitive Search using REST APIs](search-get-started-rest.md#connect-to-azure-cognitive-search) for an example that specifies an admin API key on a REST call.
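As a minimal sketch, a request that lists indexes with an admin key in the request header looks like the following (the service name and key value are placeholders):

```rest
GET https://<search-service-name>.search.windows.net/indexes?api-version=2020-06-30
  Content-Type: application/json
  api-key: <admin-api-key>
```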
Best practices for using hard-coded in source files include:
Alternatively, you can pass a query key as a parameter on a URL if you're using GET: `GET /indexes/hotels/docs?search=*&$orderby=lastRenovationDate desc&api-version=2020-06-30&api-key=[query key]`
-### [**Azure PowerShell**](#tab/azure-ps-use)
-
-A script example showing API key usage can be found at [Quickstart: Create an Azure Cognitive Search index in PowerShell using REST APIs](search-get-started-powershell.md).
-
-### [**.NET**](#tab/dotnet-use)
+### [**C#**](#tab/dotnet-use)
In search solutions, a key is often specified as a configuration setting and then passed as an [AzureKeyCredential](/dotnet/api/azure.azurekeycredential). See [How to use Azure.Search.Documents in a C# .NET Application](search-howto-dotnet-sdk.md) for an example.

> [!NOTE]
-> It's considered a poor security practice to pass sensitive data such as an `api-key` in the request URI. For this reason, Azure Cognitive Search only accepts a query key as an `api-key` in the query string. As a general rule, we recommend passing your `api-key` as a request header.
+> It's considered a poor security practice to pass sensitive data such as an `api-key` in the request URI. For this reason, Azure Cognitive Search only accepts a query key as an `api-key` in the query string. As a general rule, we recommend passing your `api-key` as a request header.
+
+## Permissions to view or manage API keys
+
+Permissions for viewing and managing API keys are conveyed through [role assignments](search-security-rbac.md). Members of the following roles can view and regenerate keys:
+
++ Owner
++ Contributor
++ [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor)
++ Administrator and co-administrator (classic)
+
+The following roles don't have access to API keys:
+
++ Reader
++ Search Index Data Contributor
++ Search Index Data Reader

## Find existing keys

You can view and manage API keys in the [Azure portal](https://portal.azure.com), or through [PowerShell](/powershell/module/az.search), [Azure CLI](/cli/azure/search), or [REST API](/rest/api/searchmanagement/).
-### [**Azure portal**](#tab/portal-find)
+### [**Portal**](#tab/portal-find)
1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
You can view and manage API keys in the [Azure portal](https://portal.azure.com)
:::image type="content" source="media/search-manage/azure-search-view-keys.png" alt-text="Screenshot of a portal page showing API keys." border="true":::
-### [**REST**](#tab/rest-find)
+### [**PowerShell**](#tab/azure-ps-find)
+
+1. Install the Az.Search module:
+
+ ```azurepowershell
+ Install-Module Az.Search
+ ```
+
+1. Return admin keys:
+
+ ```azurepowershell
+ Get-AzSearchAdminKeyPair -ResourceGroupName <resource-group-name> -ServiceName <search-service-name>
+ ```
+
+1. Return query keys:
+
+ ```azurepowershell
+ Get-AzSearchQueryKey -ResourceGroupName <resource-group-name> -ServiceName <search-service-name>
+ ```
-Use [ListAdminKeys](/rest/api/searchmanagement/2020-08-01/admin-keys) or [ListQueryKeys](/rest/api/searchmanagement/2020-08-01/query-keys/list-by-search-service) in the Management REST API to return API keys.
+### [**Azure CLI**](#tab/azure-cli-find)
+
+Use the following commands to return admin and query API keys, respectively:
+
+```azurecli
+az search admin-key show --resource-group <myresourcegroup> --service-name <myservice>
+
+az search query-key list --resource-group <myresourcegroup> --service-name <myservice>
+```
+
+### [**REST API**](#tab/rest-find)
+
+Use [List Admin Keys](/rest/api/searchmanagement/2020-08-01/admin-keys) or [List Query Keys](/rest/api/searchmanagement/2020-08-01/query-keys/list-by-search-service) in the Management REST API to return API keys.
You must have a [valid role assignment](#permissions-to-view-or-manage-api-keys) to return or update API keys. See [Manage your Azure Cognitive Search service with REST APIs](search-manage-rest.md) for guidance on meeting role requirements using the REST APIs.
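For illustration, the admin key request follows the same shape as this article's create-query-key example; note that these list operations are POST requests even though they only read keys:

```rest
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Search/searchServices/{searchServiceName}/listAdminKeys?api-version=2020-08-01
```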
Query keys are used for read-only access to documents within an index for operat
Restricting access and operations in client apps is essential to safeguarding the search assets on your service. Always use a query key rather than an admin key for any query originating from a client app.
-### [**Azure portal**](#tab/portal-query)
+### [**Portal**](#tab/portal-query)
1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
Restricting access and operations in client apps is essential to safeguarding th
:::image type="content" source="media/search-security-overview/create-query-key.png" alt-text="Screenshot of the query key management options." border="true":::
+### [**PowerShell**](#tab/azure-ps-query)
+
+A script example showing API key usage can be found at [Create or delete query keys](search-manage-powershell.md#create-or-delete-query-keys).
+
### [**Azure CLI**](#tab/azure-cli-query)

A script example showing query key usage can be found at [Create or delete query keys](search-manage-azure-cli.md#create-or-delete-query-keys).
-### [**.NET**](#tab/dotnet-query)
+### [**REST API**](#tab/rest-query)
+
+Use [Create Query Keys](/rest/api/searchmanagement/2020-08-01/query-keys/create) in the Management REST API.
+
+You must have a [valid role assignment](#permissions-to-view-or-manage-api-keys) to create or manage API keys. See [Manage your Azure Cognitive Search service with REST APIs](search-manage-rest.md) for guidance on meeting role requirements using the REST APIs.
-A code example showing query key usage can be found in [DotNetHowTo](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo).
+```rest
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Search/searchServices/{searchServiceName}/createQueryKey/{name}?api-version=2020-08-01
+```
A code example showing query key usage can be found in [DotNetHowTo](https://git
Two admin keys are created for each service so that you can rotate a primary key while using the secondary key for business continuity.
-1. In the **Settings** > **Keys** page, copy the secondary key.
+1. Under **Settings**, select **Keys**, then copy the secondary key.
1. For all applications, update the API key settings to use the secondary key.
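If you script key rotation, the Azure CLI provides `az search admin-key renew`; here's a minimal sketch (resource and service names are placeholders) that regenerates the secondary key:

```azurecli
az search admin-key renew --resource-group <resource-group-name> --service-name <search-service-name> --key-kind secondary
```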
You can still access the service through the portal or programmatically. Managem
After you create new keys via portal or management layer, access is restored to your content (indexes, indexers, data sources, synonym maps) once you provide those keys on requests.
-## Permissions to view or manage API keys
-
-Permissions for viewing and managing API keys is conveyed through [role assignments](search-security-rbac.md). Members of the following roles can view and regenerate keys:
-
-+ Administrator and co-administrator (classic)
-+ Owner
-+ Contributor
-+ [Search Service Contributors](../role-based-access-control/built-in-roles.md#search-service-contributor)
-
-The following roles don't have access to API keys:
-
-+ Reader
-+ Search Index Data Contributor
-+ Search Index Data Reader
-
-## Secure API key access
+## Secure API keys
Use role assignments to restrict access to API keys.
Note that it's not possible to use [customer-managed key encryption](search-secu
1. In the **Role** filter, select the roles that have permission to view or manage keys (Owner, Contributor, Search Service Contributor). The resulting security principals assigned to those roles have key permissions on your search service.
-1. As a precaution, also check the **Classic administrators** tab for administrators and co-administrators.
+1. As a precaution, also check the **Classic administrators** tab to determine whether administrators and co-administrators have access.
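As a command-line alternative to the portal steps above, one option is to list who holds a key-capable role with `az role assignment list`, scoped to the search service (all identifiers below are placeholders):

```azurecli
az role assignment list \
    --role "Search Service Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Search/searchServices/<search-service-name>"
```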
## See also
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
Telemetry logs are retained for one and a half years. During that period, suppor
Upon request, Microsoft can shorten the retention interval or remove references to specific objects in the telemetry logs. Remember that if you request data removal, Microsoft won't have a full history of your service, which could impede troubleshooting of the object in question.
-To remove references to specific objects, or to change the data retention period, [file a support ticket](/azure/azure-portal/supportability/how-to-create-azure-support-request) for your search service.
+To remove references to specific objects, or to change the data retention period, [file a support ticket](../azure-portal/supportability/how-to-create-azure-support-request.md) for your search service.
1. In **Problem details**, tag your request using the following selections:
In Azure Cognitive Search, encryption starts with connections and transmissions.
### Data at rest
-For data handled internally by the search service, the following table describes the [data encryption models](../security/fundamentals/encryption-models.md). Some features, such as knowledge store, incremental enrichment, and indexer-based indexing, read from or write to data structures in other Azure Services. Services that have a dependency on Azure Storage can use the [encryption features](/azure/storage/common/storage-service-encryption) of that technology.
+For data handled internally by the search service, the following table describes the [data encryption models](../security/fundamentals/encryption-models.md). Some features, such as knowledge store, incremental enrichment, and indexer-based indexing, read from or write to data structures in other Azure Services. Services that have a dependency on Azure Storage can use the [encryption features](../storage/common/storage-service-encryption.md) of that technology.
| Model | Keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Requirements&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Restrictions | Applies to |
|-|-|-|-|-|
Watch this fast-paced video for an overview of the security architecture and eac
+ [Azure security fundamentals](../security/fundamentals/index.yml)
+ [Azure Security](https://azure.microsoft.com/overview/security)
-+ [Microsoft Defender for Cloud](../security-center/index.yml)
++ [Microsoft Defender for Cloud](../security-center/index.yml)
search Tutorial Csharp Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-create-load-index.md
Create a new search resource using PowerShell and the **Az.Search** module.
```

> [!NOTE]
- > You might need to provide a tenant ID, which you can find in the Azure portal in [Portal settings > Directories + subscriptions](/azure/azure-portal/set-preferences).
+ > You might need to provide a tenant ID, which you can find in the Azure portal in [Portal settings > Directories + subscriptions](../azure-portal/set-preferences.md).
1. Before creating a new search service, you can list existing search services for your subscription to see if there's one you want to use:
The script uses the Azure SDK for Cognitive Search:
## Next steps
-[Deploy your Static Web App](tutorial-csharp-deploy-static-web-app.md)
+[Deploy your Static Web App](tutorial-csharp-deploy-static-web-app.md)
search Tutorial Javascript Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md
Create a new search resource using PowerShell and the **Az.Search** module.
```

> [!NOTE]
- > You might need to provide a tenant ID, which you can find in the Azure portal in [Portal settings > Directories + subscriptions](/azure/azure-portal/set-preferences).
+ > You might need to provide a tenant ID, which you can find in the Azure portal in [Portal settings > Directories + subscriptions](../azure-portal/set-preferences.md).
1. Before creating a new search service, you can list existing search services for your subscription to see if there's one you want to use:
The script uses the Azure SDK for Cognitive Search:
## Next steps
-[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
+[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
search Tutorial Python Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-create-load-index.md
Create a new search resource using PowerShell and the **Az.Search** module.
```

> [!NOTE]
- > You might need to provide a tenant ID, which you can find in the Azure portal in [Portal settings > Directories + subscriptions](/azure/azure-portal/set-preferences).
+ > You might need to provide a tenant ID, which you can find in the Azure portal in [Portal settings > Directories + subscriptions](../azure-portal/set-preferences.md).
1. Before creating a new search service, you can list existing search services for your subscription to see if there's one you want to use:
The script uses the Azure SDK for Cognitive Search:
## Next steps
-[Deploy your Static Web App](tutorial-python-deploy-static-web-app.md)
+[Deploy your Static Web App](tutorial-python-deploy-static-web-app.md)
sentinel Normalization Ingest Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-ingest-time.md
Learn more about writing parsers in [Developing ASIM parsers](normalization-deve
## Implementing ingest time normalization
-To normalize data at ingest, you will need to use a [Data Collection Rule (DCR)](/azure/azure-monitor/essentials/data-collection-rule-overview.md). The procedure for implementing the DCR depends on the method used to ingest the data. For more information, refer to the article [Transform or customize data at ingestion time in Microsoft Sentinel](configure-data-transformation.md).
+To normalize data at ingest, you will need to use a [Data Collection Rule (DCR)](../azure-monitor/essentials/data-collection-rule-overview.md). The procedure for implementing the DCR depends on the method used to ingest the data. For more information, refer to the article [Transform or customize data at ingestion time in Microsoft Sentinel](configure-data-transformation.md).
A [KQL](kusto-overview.md) transformation query is the core of a DCR. The KQL version used in DCRs is slightly different than the version used elsewhere in Microsoft Sentinel, to accommodate requirements of pipeline event processing. Therefore, you will need to modify any query-time parser to use it in a DCR. For more information on the differences, and how to convert a query-time parser to an ingest-time parser, read about the [DCR KQL limitations](../azure-monitor/essentials/data-collection-transformations-structure.md#kql-limitations).
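To give a flavor of such a transformation, the following hypothetical sketch maps a vendor-specific column to an ASIM-style field; the input is always referenced as `source` in a DCR, and the column name `src_ip_s` is invented for illustration:

```kusto
// Hypothetical DCR transformation: normalize a vendor field to an ASIM-style column
source
| extend SrcIpAddr = tostring(src_ip_s), EventVendor = "Contoso"
| project-away src_ip_s
```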
sentinel Sentinel Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solution.md
The Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** is useful for any o
Before installing the **Zero Trust (TIC 3.0)** solution, make sure you have the following prerequisites:

-- **Onboard Microsoft services**: Make sure that you have both [Microsoft Sentinel](quickstart-onboard.md) and [Microsoft Defender for Cloud](/azure/defender-for-cloud/get-started) enabled in your Azure subscription.
+- **Onboard Microsoft services**: Make sure that you have both [Microsoft Sentinel](quickstart-onboard.md) and [Microsoft Defender for Cloud](../defender-for-cloud/get-started.md) enabled in your Azure subscription.
- **Microsoft Defender for Cloud requirements**: In Microsoft Defender for Cloud:

    - Add required regulatory standards to your dashboard. Make sure to add both the *Azure Security Benchmark* and *NIST SP 800-53 R5 Assessments* to your Microsoft Defender for Cloud dashboard. For more information, see [add a regulatory standard to your dashboard](/azure/security-center/update-regulatory-compliance-packages?WT.mc_id=Portal-fx#add-a-regulatory-standard-to-your-dashboard) in the Microsoft Defender for Cloud documentation.
- - Continuously export Microsoft Defender for Cloud data to your Log Analytics workspace. For more information, see [Continuously export Microsoft Defender for Cloud data](/azure/defender-for-cloud/continuous-export?tabs=azure-portal).
+ - Continuously export Microsoft Defender for Cloud data to your Log Analytics workspace. For more information, see [Continuously export Microsoft Defender for Cloud data](../defender-for-cloud/continuous-export.md?tabs=azure-portal).
-- **Required user permissions**. To install the **Zero Trust (TIC 3.0)** solution, you must have access to your Microsoft Sentinel workspace with [Security Reader](/azure/active-directory/roles/permissions-reference#security-reader) permissions.
+- **Required user permissions**. To install the **Zero Trust (TIC 3.0)** solution, you must have access to your Microsoft Sentinel workspace with [Security Reader](../active-directory/roles/permissions-reference.md#security-reader) permissions.
## Install the Zero Trust (TIC 3.0) solution
For more information, see [Use Azure Monitor workbooks to visualize and monitor
### Is this available in government regions?
-Yes. The **Zero Trust (TIC 3.0)** solution is in Public Preview and deployable to Commercial/Government regions. For more information, see [Cloud feature availability for commercial and US Government customers](/azure/security/fundamentals/feature-availability).
+Yes. The **Zero Trust (TIC 3.0)** solution is in Public Preview and deployable to Commercial/Government regions. For more information, see [Cloud feature availability for commercial and US Government customers](../security/fundamentals/feature-availability.md).
### Which permissions are required to use this content?

-- [Microsoft Sentinel Contributor](/azure/role-based-access-control/built-in-roles#microsoft-sentinel-contributor) users can create and edit workbooks, analytics rules, and other Microsoft Sentinel resources.
+- [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) users can create and edit workbooks, analytics rules, and other Microsoft Sentinel resources.
-- [Microsoft Sentinel Reader](/azure/role-based-access-control/built-in-roles#microsoft-sentinel-reader) users can view data, incidents, workbooks, and other Microsoft Sentinel resources.
+- [Microsoft Sentinel Reader](../role-based-access-control/built-in-roles.md#microsoft-sentinel-reader) users can view data, incidents, workbooks, and other Microsoft Sentinel resources.
For more information, see [Permissions in Microsoft Sentinel](roles.md).
Read our blogs!
- [Announcing the Microsoft Sentinel: Zero Trust (TIC3.0) Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-microsoft-sentinel-zero-trust-tic3-0-solution/ba-p/3031685)
- [Building and monitoring Zero Trust (TIC 3.0) workloads for federal information systems with Microsoft Sentinel](https://devblogs.microsoft.com/azuregov/building-and-monitoring-zero-trust-tic-3-0-workloads-for-federal-information-systems-with-microsoft-sentinel/)
- [Zero Trust: 7 adoption strategies from security leaders](https://www.microsoft.com/security/blog/2021/03/31/zero-trust-7-adoption-strategies-from-security-leaders/)
-- [Implementing Zero Trust with Microsoft Azure: Identity and Access Management (6 Part Series)](https://devblogs.microsoft.com/azuregov/implementing-zero-trust-with-microsoft-azure-identity-and-access-management-1-of-6/)
+- [Implementing Zero Trust with Microsoft Azure: Identity and Access Management (6 Part Series)](https://devblogs.microsoft.com/azuregov/implementing-zero-trust-with-microsoft-azure-identity-and-access-management-1-of-6/)
service-fabric Service Fabric Cluster Upgrade Version Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-upgrade-version-azure.md
Last updated 07/14/2022
An Azure Service Fabric cluster is a resource you own, but it's partly managed by Microsoft. Here's how to manage when and how Microsoft updates your Azure Service Fabric cluster.
-For further background on cluster upgrade concepts and processes, see [Upgrading and updating Azure Service Fabric clusters](service-fabric-cluster-upgrade.md)
+For further background on cluster upgrade concepts and processes, see [Upgrading and updating Azure Service Fabric clusters](service-fabric-cluster-upgrade.md).
## Set upgrade mode
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
Last updated 07/14/2022
# Service Fabric supported versions

The tables in this article outline the Service Fabric and platform versions that are actively supported.
+If you want to find a list of all the Service Fabric runtime versions available for your subscription, follow the guidance in the [Check for supported cluster versions](service-fabric-cluster-upgrade-version-azure.md#check-for-supported-cluster-versions) section of the Manage Service Fabric cluster upgrades guide.
+
## Windows

### Current versions
The tables in this article outline the Service Fabric and platform versions that
| Service Fabric runtime | Can upgrade directly from | Can downgrade to* | Compatible SDK or NuGet package version | Supported .NET runtimes** | OS Version | End of support |
| --- | --- | --- | --- | --- | --- | --- |
-| 8.2 CU7<br>8.2.1692.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU6<br>8.2.1686.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU4<br>8.2.1659.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU3<br>8.2.1620.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU2.1<br>8.2.1571.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU2<br>8.2.1486.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 6.0 (Preview), .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU1<br>8.2.1363.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 RTO<br>8.2.1235.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
+| 8.2 CU7<br>8.2.1692.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU6<br>8.2.1686.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU4<br>8.2.1659.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU3<br>8.2.1620.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023|
+| 8.2 CU2.1<br>8.2.1571.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU2<br>8.2.1486.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 6.0 (Preview), .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU1<br>8.2.1363.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 RTO<br>8.2.1235.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 |
| 8.1 CU4<br>8.1.388.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
| 8.1 CU3.1<br>8.1.337.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
| 8.1 CU3<br>8.1.335.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
Support for Service Fabric on a specific OS ends when support for the OS version
| 9.0 CU2<br>9.0.1056.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
| 9.0 CU1<br>9.0.1035.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
| 9.0 RTO<br>9.0.1018.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
-| 8.2 CU6<br>8.2.1485.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU5.1<br>8.2.1483.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU4<br>8.2.1458.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU3<br>8.2.1434.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU2.1<br>8.2.1397.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU2<br>8.2.1285.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 CU1<br>8.2.1204.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
-| 8.2 RTO<br>8.2.1124.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022|
+| 8.2 CU6<br>8.2.1485.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU5.1<br>8.2.1483.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU4<br>8.2.1458.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU3<br>8.2.1434.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU2.1<br>8.2.1397.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU2<br>8.2.1285.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 CU1<br>8.2.1204.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
+| 8.2 RTO<br>8.2.1124.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
| 8.1 CU4<br>8.1.360.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 |
| 8.1 CU3.1<br>8.1.340.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 |
| 8.1 CU3<br>8.1.334.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 |
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
Use the following procedure to replicate Azure VMs to another Azure region. As a
- **Replica-managed disk**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks, with the same storage type (Standard or Premium) as the source VM's managed disk.
- **Cache storage**: Site Recovery needs an extra storage account, called cache storage, in the source region. All the changes happening on the source VMs are tracked and sent to the cache storage account before being replicated to the target location. This storage account should be Standard.

>[!Note]
- >Azure Site Recovery supports High churn (Public Preview) where you can choose to use **High Churn** for the VM. With this, you can use a *Premium Block Blob* type of storage account. By default, **Normal Churn** is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](/azure/site-recovery/concepts-azure-to-azure-high-churn-support).
+ >Azure Site Recovery supports High churn (Public Preview) where you can choose to use **High Churn** for the VM. With this, you can use a *Premium Block Blob* type of storage account. By default, **Normal Churn** is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
:::image type="Cache storage" source="./media/azure-to-azure-how-to-enable-replication/cache-storage.png" alt-text="Screenshot of customize target settings.":::
After the enable replication job runs, and the initial replication finishes, the
## Next steps
-[Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
+[Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
site-recovery Azure To Azure How To Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md
You can customize the following properties of the target VM during reprotection.
|Capacity reservation | Configure a capacity reservation for the VM. You can create a new capacity reservation group to reserve capacity or select an existing capacity reservation group. [Learn more](azure-to-azure-how-to-enable-replication.md#enable-replication) about capacity reservation. |
|Target storage (Secondary VM doesn't use managed disks) | You can change the storage account that the VM uses after failover. |
|Replica managed disks (Secondary VM uses managed disks) | Site Recovery creates replica managed disks in the primary region to mirror the secondary VM's managed disks. |
-|Cache storage | You can specify a cache storage account to be used during replication. By default, a new cache storage account is created, if it doesn't exist. </br>By default, type of storage account (Standard storage account or Premium Block Blob storage account) that you have selected for the source VM in original primary location is used. For example, during replication from original source to target, if you have selected *High Churn*, during re-protection back from target to original source, Premium Block blob will be used by default. You can configure and change it for re-protection. For more information, see [Azure VM Disaster Recovery - High Churn Support](/azure/site-recovery/concepts-azure-to-azure-high-churn-support).|
+|Cache storage | You can specify a cache storage account to be used during replication. By default, a new cache storage account is created if it doesn't exist. </br>By default, the type of storage account (Standard storage account or Premium Block Blob storage account) that you selected for the source VM in the original primary location is used. For example, if you selected *High Churn* during replication from the original source to the target, a Premium Block Blob account is used by default during reprotection back from the target to the original source. You can configure and change it for reprotection. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).|
|Availability set | If the VM in the secondary region is part of an availability set, you can choose an availability set for the target VM in the primary region. By default, Site Recovery tries to find the existing availability set in the primary region, and use it. During customization, you can specify a new availability set. |

### What happens during reprotection?
When the VM is re-protected from the DR region to the primary region, we do not
## Next steps
-After the VM is protected, you can initiate a failover. The failover shuts down the VM in the secondary region and creates and boots the VM in the primary region, with brief downtime during this process. We recommend you choose an appropriate time for this process and that you run a test failover before initiating a full failover to the primary site. [Learn more](site-recovery-failover.md) about Azure Site Recovery failover.
+After the VM is protected, you can initiate a failover. The failover shuts down the VM in the secondary region and creates and boots the VM in the primary region, with brief downtime during this process. We recommend you choose an appropriate time for this process and that you run a test failover before initiating a full failover to the primary site. [Learn more](site-recovery-failover.md) about Azure Site Recovery failover.
site-recovery Azure To Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-powershell.md
$WusToEusPCMapping = Get-AzRecoveryServicesAsrProtectionContainerMapping -Protec
## Create cache storage account and target storage account
-A cache storage account is a standard storage account in the same Azure region as the virtual machine being replicated. The cache storage account is used to hold replication changes temporarily, before the changes are moved to the recovery Azure region. High churn support (Public Preview) is now available in Azure Site Recovery using which you can create a Premium Block Blob type of storage accounts that can be used as cache storage account to get high churn limits. You can choose to, but it's not necessary, to specify different cache storage accounts for the different disks of a virtual machine. If you use different cache storage accounts, ensure they are of the same type (Standard or Premium Block Blobs). For more information, see [Azure VM Disaster Recovery - High Churn Support](/azure/site-recovery/concepts-azure-to-azure-high-churn-support).
+A cache storage account is a standard storage account in the same Azure region as the virtual machine being replicated. The cache storage account is used to hold replication changes temporarily, before the changes are moved to the recovery Azure region. With high churn support (Public Preview) in Azure Site Recovery, you can also use a Premium Block Blob storage account as the cache storage account to get higher churn limits. You can, but don't have to, specify different cache storage accounts for the different disks of a virtual machine. If you use different cache storage accounts, ensure they are of the same type (Standard or Premium Block Blobs). For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
```azurepowershell
#Create Cache storage account for replication logs in the primary region
Remove-AzRecoveryServicesAsrReplicationProtectedItem -ReplicationProtectedItem $
## Next steps
-View the [Azure Site Recovery PowerShell reference](/powershell/module/az.RecoveryServices) to learn how you can do other tasks such as creating recovery plans and testing failover of recovery plans with PowerShell.
+View the [Azure Site Recovery PowerShell reference](/powershell/module/az.RecoveryServices) to learn how you can do other tasks such as creating recovery plans and testing failover of recovery plans with PowerShell.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
This table summarizes support for the cache storage account used by Site Recover
**Setting** | **Support** | **Details**
--- | --- | ---
General purpose V2 storage accounts (Hot and Cool tier) | Supported | Usage of GPv2 is recommended because GPv1 does not support ZRS (Zonal Redundant Storage).
-Premium storage | Supported | Use Premium Block Blob storage accounts to get High Churn support (in Public Preview). For more information, see [Azure VM Disaster Recovery - High Churn Support](/azure/site-recovery/concepts-azure-to-azure-high-churn-support).
+Premium storage | Supported | Use Premium Block Blob storage accounts to get High Churn support (in Public Preview). For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
Region | Same region as virtual machine | Cache storage account should be in the same region as the virtual machine being protected.
Subscription | Can be different from source virtual machines | Cache storage account need not be in the same subscription as the source virtual machine(s).
Azure Storage firewalls for virtual networks | Supported | If you are using firewall enabled cache storage account or target storage account, ensure you ['Allow trusted Microsoft services'](../storage/common/storage-network-security.md#exceptions).<br></br>Also, ensure that you allow access to at least one subnet of source Vnet.<br></br>Note: Do not restrict virtual network access to your storage accounts used for Site Recovery. You should allow access from 'All networks'.
Premium P20 or P30 or P40 or P50 disk | 16 KB or greater |20 MB/s | 1684 GB per
>[!Note]
->High churn support is now available in Azure Site Recovery where churn limit per virtual machine has increased up to 100 MB/s. For more information, see [Azure VM Disaster Recovery - High Churn Support](/azure/site-recovery/concepts-azure-to-azure-high-churn-support).
+>High churn support is now available in Azure Site Recovery where churn limit per virtual machine has increased up to 100 MB/s. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
## Replicated machines - networking
Tags | Supported | User-generated tags on NICs are replicated every 24 hours.
## Next steps

- Read [networking guidance](./azure-to-azure-about-networking.md) for replicating Azure VMs.
-- Deploy disaster recovery by [replicating Azure VMs](./azure-to-azure-quickstart.md).
+- Deploy disaster recovery by [replicating Azure VMs](./azure-to-azure-quickstart.md).
site-recovery Concepts Azure To Azure High Churn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-azure-to-azure-high-churn-support.md
The following table summarizes Site Recovery limits:
### From Recovery Service Vault
-1. Select source VMs on which you want to enable replication. To enable replication, follow the steps [here](/azure/site-recovery/azure-to-azure-how-to-enable-replication).
+1. Select source VMs on which you want to enable replication. To enable replication, follow the steps [here](./azure-to-azure-how-to-enable-replication.md).
2. Under **Replication Settings** > **Storage**, select **View/edit storage configuration**. The **Customize target settings** page opens.
The following table summarizes Site Recovery limits:
## Cost Implications

- **High Churn** uses *Premium Block Blob* storage accounts which may have higher cost implications as compared to **Normal Churn** which uses *Standard* storage accounts. For more information, see [pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
-- For High churn VMs, more data changes may get replicated to target for **High churn** compared to **Normal churn**. This may lead to more network cost.
-
+- For High churn VMs, more data changes may get replicated to target for **High churn** compared to **Normal churn**. This may lead to more network cost.
site-recovery Tutorial Replicate Vms Edge Zone To Another Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/tutorial-replicate-vms-edge-zone-to-another-zone.md
Here the primary location is an Azure Public MEC and secondary location is anoth
### Prerequisites

-- Ensure Azure Az PowerShell module is installed. For information on how to install, see [Install the Azure Az PowerShell module](https://learn.microsoft.com/powershell/azure/install-az-ps?view=azps-9.2.0)
+- Ensure Azure Az PowerShell module is installed. For information on how to install, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps?view=azps-9.2.0)
- The minimum Azure Az PowerShell version must be 9.1.0+. Use the following command to see the current version:

  ```
  Get-InstalledModule -Name Az
  ```

-- Ensure the Linux distro version and kernel is supported by Azure Site Recovery. For more information, see the [support matrix](/azure/site-recovery/azure-to-azure-support-matrix#linux).
+- Ensure the Linux distro version and kernel is supported by Azure Site Recovery. For more information, see the [support matrix](./azure-to-azure-support-matrix.md#linux).
- Ensure the primary VM has a public IP. To validate, go to the VM NIC and check if a public IP is attached to the NIC. Ensure that the recovery VM has a public IP when you switch the protection direction.

>[!Note]
site-recovery Tutorial Replicate Vms Edge Zone To Azure Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/tutorial-replicate-vms-edge-zone-to-azure-region.md
Here the primary location is an Azure Public MEC and secondary location is the p
Get-InstalledModule -Name Az
```

-- Ensure the Linux distro version and kernel is supported by Azure Site Recovery. For more information, see the [support matrix](/azure/site-recovery/azure-to-azure-support-matrix#linux).
+- Ensure the Linux distro version and kernel is supported by Azure Site Recovery. For more information, see the [support matrix](./azure-to-azure-support-matrix.md#linux).
## Replicate Virtual machines running in an Azure Public MEC to Azure region
To replicate VMs running in an Azure Public MEC to Azure region, Follow these st
```
Remove-AzResourceGroup -Name $Name -Force
- ```
+ ```
spring-apps How To Start Stop Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-start-stop-delete.md
Title: Start, stop, and delete an application in Azure Spring Apps | Microsoft Docs
-description: How to start, stop, and delete an application in Azure Spring Apps
+ Title: Start, stop, and delete an application in Azure Spring Apps
+description: Need to start, stop, or delete your Azure Spring Apps application? Learn how to manage the state of an Azure Spring Apps application.
- Previously updated : 10/31/2019
+ Last updated : 01/10/2023
-
+

# Start, stop, and delete an application in Azure Spring Apps
This guide explains how to change an application's state in Azure Spring Apps by using either the Azure portal or the Azure CLI.
-## Using the Azure portal
+## Prerequisites
-After you deploy an application, you can start, stop, and delete it by using the Azure portal.
+- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- A deployed Azure Spring Apps service instance. Follow the [quickstart on deploying an app via the Azure CLI](./quickstart.md) to get started.
+- At least one application already created in your service instance.
+
+## Application state
+
+Your applications running in Azure Spring Apps may not need to run continuously. For example, an application may not always need to run if it's only used during business hours.
+
+There may be times where you wish to stop or start an application. You can also restart an application as part of general troubleshooting steps or delete an application you no longer require.
+
+## Manage application state
+
+After you deploy an application, you can start, stop, and delete it by using the [Azure portal](https://portal.azure.com) or [Azure CLI](/cli/azure/).
+
+### [Azure portal](#tab/azure-portal)
+
+1. Go to your Azure Spring Apps service instance in the [Azure portal](https://portal.azure.com).
+
+1. Select **Application Dashboard**.
-1. Go to your Azure Spring Apps service instance in the Azure portal.
-1. Select the **Application Dashboard** tab.
1. Select the application whose state you want to change.
+
1. On the **Overview** page for that application, select **Start/Stop**, **Restart**, or **Delete**.
-## Using the Azure CLI
+### [Azure CLI](#tab/azure-cli)
> [!NOTE]
-> You can use optional parameters and configure defaults with the Azure CLI. Learn more about the Azure CLI by reading [our reference documentation](/cli/azure/spring).
+> You can use optional parameters and configure defaults with the Azure CLI. Learn more about the Azure CLI by reading the [az spring](/cli/azure/spring) reference documentation.
+
+1. First, use the following command to install the Azure Spring Apps extension for Azure CLI:
-First, install the Azure Spring Apps extension for the Azure CLI as follows:
+ ```azurecli-interactive
+ az extension add --name spring
+ ```
-```azurecli
-az extension add --name spring
-```
+1. Next, perform any of the following Azure CLI operations:
-Next, select any of these Azure CLI operations:
+ - Start your application:
-* To start your application:
+ ```azurecli-interactive
+ az spring app start \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <application-name>
+ ```
- ```azurecli
- az spring app start -n <application name> -g <resource group> -s <Azure Spring Apps name>
- ```
+ - Stop your application:
-* To stop your application:
+ ```azurecli
+ az spring app stop \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <application-name>
+ ```
- ```azurecli
- az spring app stop -n <application name> -g <resource group> -s <Azure Spring Apps name>
- ```
+ - Restart your application:
-* To restart your application:
+ ```azurecli
+ az spring app restart \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <application-name>
+ ```
- ```azurecli
- az spring app restart -n <application name> -g <resource group> -s <Azure Spring Apps name>
- ```
+ - Delete your application:
+
+ ```azurecli
+ az spring app delete \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <application-name>
+ ```
++
-* To delete your application:
+## Next steps
- ```azurecli
- az spring app delete -n <application name> -g <resource group> -s <Azure Spring Apps name>
- ```
+> [!div class="nextstepaction"]
+> [Start or stop your Azure Spring Apps service instance](how-to-start-stop-service.md)
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
az storage account show \
To allow or disallow public access for a storage account with a template, create a template with the **AllowBlobPublicAccess** property set to **true** or **false**. The following steps describe how to create a template in the Azure portal. 1. In the Azure portal, choose **Create a resource**.
-1. In **Search the Marketplace**, type **template deployment**, and then press **ENTER**.
-1. Choose **Template deployment (deploy using custom templates) (preview)**, choose **Create**, and then choose **Build your own template in the editor**.
+1. In **Search services and marketplace**, type **template deployment**, and then press **ENTER**.
+1. Choose **Template deployment (deploy using custom templates)**, choose **Create**, and then choose **Build your own template in the editor**.
1. In the template editor, paste in the following JSON to create a new account and set the **AllowBlobPublicAccess** property to **true** or **false**. Remember to replace the placeholders in angle brackets with your own values.

   ```json
storage Blob Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-cli.md
az storage blob set-tier
Blob index tags make data management and discovery easier. Blob index tags are user-defined key-value index attributes that you can apply to your blobs. Once configured, you can categorize and find objects within an individual container or across all containers. Blob resources can be dynamically categorized by updating their index tags without requiring a change in container organization. This approach offers a flexible way to cope with changing data requirements. You can use both metadata and index tags simultaneously. For more information on index tags, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
-> [!IMPORTANT]
-> Support for blob index tags is in preview status.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
> [!TIP]
> The code sample provided below uses pattern matching to obtain text from an XML file having a known structure. The example is used to illustrate a simplified approach for adding blob tags using basic Bash functionality. The use of an actual data parsing tool is always recommended when consuming data for production workloads.
storage Object Replication Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-configure.md
az storage account or-policy show \
If you don't have permissions to the source storage account or if you want to use more than 10 container pairs, then you can configure object replication on the destination account and provide a JSON file that contains the policy definition to another user to create the same policy on the source account. For example, if the source account is in a different Azure AD tenant from the destination account, then you can use this approach to configure object replication.

> [!NOTE]
-> Cross-tenant object replication is permitted by default for a storage account. To prevent replication across tenants, you can set the **AllowCrossTenantReplication** property (preview) to disallow cross-tenant object replication for your storage accounts. For more information, see [Prevent object replication across Azure Active Directory tenants](object-replication-prevent-cross-tenant-policies.md).
+> Cross-tenant object replication is permitted by default for a storage account. To prevent replication across tenants, you can set the **AllowCrossTenantReplication** property to disallow cross-tenant object replication for your storage accounts. For more information, see [Prevent object replication across Azure Active Directory tenants](object-replication-prevent-cross-tenant-policies.md).
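As a sketch of the corresponding CLI call (account and resource group names are placeholders):

```azurecli
# Disallow object replication across Azure AD tenants for this account.
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --allow-cross-tenant-replication false
```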
The examples in this section show how to configure the object replication policy on the destination account, and then get the JSON file for that policy that another user can use to configure the policy on the source account.
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
To learn more about each of these authorization mechanisms, see [Authorize acces
## [Azure AD (Recommended)](#tab/azure-ad)
-To authorize with Azure AD, you'll need to use a [security principal](/azure/active-directory/develop/app-objects-and-service-principals). Which type of security principal you need depends on where your application runs. Use the following table as a guide:
+To authorize with Azure AD, you'll need to use a [security principal](../../active-directory/develop/app-objects-and-service-principals.md). Which type of security principal you need depends on where your application runs. Use the following table as a guide:
| Where the application runs | Security principal | Guidance | | | | |
storage Storage Blob Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md
In addition to the data they contain, blobs support system properties and user-d
> [!NOTE]
> Blob index tags also provide the ability to store arbitrary user-defined key/value attributes alongside an Azure Blob storage resource. While similar to metadata, only blob index tags are automatically indexed and made searchable by the native blob service. Metadata cannot be indexed and queried unless you utilize a separate service such as Azure Search.
>
-> To learn more about this feature, see [Manage and find data on Azure Blob storage with blob index (preview)](storage-manage-find-blobs.md).
+> To learn more about this feature, see [Manage and find data on Azure Blob storage with blob index](storage-manage-find-blobs.md).
## Set and retrieve properties
storage Storage Blob Tags Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md
To learn more about this feature along with known issues and limitations, see [M
## Set tags

You can set and get index tags if your code has authorized access to blob data through one of the following mechanisms:
-- Azure AD built-in role assigned as [Storage Blob Data Owner](/azure/role-based-access-control/built-in-roles#storage-blob-data-owner) or higher
-- Azure RBAC action [Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write](/azure/role-based-access-control/resource-provider-operations#microsoftstorage)
+- Azure AD built-in role assigned as [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) or higher
+- Azure RBAC action [Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write](../../role-based-access-control/resource-provider-operations.md#microsoftstorage)
- Shared Access Signature with permission to access the blob's tags (`t` permission)
- Account key
You can delete all tags by passing an empty `Map` object into the `setTags` meth
## Get tags

You can set and get index tags if your code has authorized access to blob data through one of the following mechanisms:
-- Azure AD built-in role assigned as [Storage Blob Data Owner](/azure/role-based-access-control/built-in-roles#storage-blob-data-owner) or higher
-- Azure RBAC action [Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read](/azure/role-based-access-control/resource-provider-operations#microsoftstorage)
+- Azure AD built-in role assigned as [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) or higher
+- Azure RBAC action [Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read](../../role-based-access-control/resource-provider-operations.md#microsoftstorage)
- Shared Access Signature with permission to access the blob's tags (`t` permission)
- Account key
The following example shows how to retrieve and iterate over the blob's tags:
## Filter and find data with blob index tags

You can use index tags to find and filter data if your code has authorized access to blob data through one of the following mechanisms:
-- Azure AD built-in role assigned as [Storage Blob Data Owner](/azure/role-based-access-control/built-in-roles#storage-blob-data-owner) or higher
-- Azure RBAC action [Microsoft.Storage/storageAccounts/blobServices/containers/blobs/filter/action](/azure/role-based-access-control/resource-provider-operations#microsoftstorage)
+- Azure AD built-in role assigned as [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) or higher
+- Azure RBAC action [Microsoft.Storage/storageAccounts/blobServices/containers/blobs/filter/action](../../role-based-access-control/resource-provider-operations.md#microsoftstorage)
- Shared Access Signature with permission to find blobs by tags (`f` permission)
- Account key
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/infrastructure-encryption-enable.md
Azure Storage automatically encrypts all data in a storage account at the servic
Infrastructure encryption can be enabled for the entire storage account, or for an encryption scope within an account. When infrastructure encryption is enabled for a storage account or an encryption scope, data is encrypted twice &mdash; once at the service level and once at the infrastructure level &mdash; with two different encryption algorithms and two different keys.
-Service-level encryption supports the use of either Microsoft-managed keys or customer-managed keys with Azure Key Vault or Key Vault Managed Hardware Security Model (HSM) (preview). Infrastructure-level encryption relies on Microsoft-managed keys and always uses a separate key. For more information about key management with Azure Storage encryption, see [About encryption key management](storage-service-encryption.md#about-encryption-key-management).
+Service-level encryption supports the use of either Microsoft-managed keys or customer-managed keys with Azure Key Vault or Key Vault Managed Hardware Security Model (HSM). Infrastructure-level encryption relies on Microsoft-managed keys and always uses a separate key. For more information about key management with Azure Storage encryption, see [About encryption key management](storage-service-encryption.md#about-encryption-key-management).
To doubly encrypt your data, you must first create a storage account or an encryption scope that is configured for infrastructure encryption. This article describes how to enable infrastructure encryption.
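As a rough sketch of that first step, infrastructure encryption is requested when the account is created; the resource names below are placeholders:

```azurecli
# Create a storage account that is doubly encrypted at rest.
# Infrastructure encryption can't be turned on for an existing account.
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <location> \
    --sku Standard_LRS \
    --kind StorageV2 \
    --require-infrastructure-encryption true
```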
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
You can create a diagnostic setting for each type of Azure Storage resource in y
After you create the diagnostic setting, requests to the storage account are subsequently logged according to that setting. For more information, see [Create diagnostic setting to collect resource logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-For a reference of fields available in Azure Storage logs in Azure Monitor, see [Resource logs](../blobs/monitor-blob-storage-reference.md#resource-logs-preview).
+For a reference of fields available in Azure Storage logs in Azure Monitor, see [Resource logs](../blobs/monitor-blob-storage-reference.md#resource-logs).
#### Query logs for requests made with Shared Key or SAS
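Once logs are flowing, a query along these lines surfaces requests authorized with Shared Key or SAS. This is a sketch that assumes the `StorageBlobLogs` table and a placeholder workspace GUID:

```azurecli
# Run a Log Analytics query from the CLI; the KQL filters on the AuthenticationType field.
az monitor log-analytics query \
    --workspace <workspace-guid> \
    --analytics-query "StorageBlobLogs | where TimeGenerated > ago(7d) and AuthenticationType in ('AccountKey', 'SAS') | project TimeGenerated, OperationName, AuthenticationType, Uri"
```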
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
Previously updated : 12/12/2022 Last updated : 01/10/2023
The Azure Storage platform includes the following data
- [Azure Tables](../tables/table-storage-overview.md): A NoSQL store for schemaless storage of structured data.
- [Azure Disks](../../virtual-machines/managed-disks-overview.md): Block-level storage volumes for Azure VMs.
-Each service is accessed through a storage account. To get started, see [Create a storage account](storage-account-create.md).
+Each service is accessed through a storage account with a unique address. To get started, see [Create a storage account](storage-account-create.md).
Additionally, Azure provides the following specialized storage:

-- [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md): Enterprise files storage, powered by NetApp: makes it easy for enterprise line-of-business (LOB) and storage professionals to migrate and run complex, file-based applications with no code change.
+- [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md): Enterprise file storage, powered by NetApp, that makes it easy for enterprise line-of-business (LOB) and storage professionals to migrate and run complex, file-based applications with no code change. Azure NetApp Files is managed via NetApp accounts and can be accessed via NFS, SMB, and dual-protocol volumes. To get started, see [Create a NetApp account](../../azure-netapp-files/azure-netapp-files-create-netapp-account.md).
- Azure NetApp Files is managed via NetApp accounts and can be accessed via NFS, SMB and dual-protocol volumes. To get started, see [Create a NetApp account](../../azure-netapp-files/azure-netapp-files-create-netapp-account.md).
+For help in deciding which data services to use for your scenario, see [Review your storage options](/azure/cloud-adoption-framework/ready/considerations/storage-options) in the Microsoft Cloud Adoption Framework.
## Review options for storing data in Azure
storage Storage Use Azcopy Blobs Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-copy.md
The copy operation is synchronous so when the command returns, that indicates th
## Copy blobs and add index tags
-Copy blobs to another storage account and add [blob index tags(preview)](../blobs/storage-manage-find-blobs.md) to the target blob.
+Copy blobs to another storage account and add [blob index tags](../blobs/storage-manage-find-blobs.md) to the target blob.
If you're using Azure AD authorization, your security principal must be assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role, or it must be given permission to the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftstorage) via a custom Azure role. If you're using a Shared Access Signature (SAS) token, that token must provide access to the blob's tags via the `t` SAS permission.
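A sketch of such a copy, with placeholder URLs and SAS tokens; the tag string uses the `&`-separated `key=value` format that AzCopy expects:

```azcopy
azcopy copy 'https://<source-account>.blob.core.windows.net/<container>/<blob-path>?<SAS-token>' 'https://<destination-account>.blob.core.windows.net/<container>/<blob-path>?<SAS-token>' --blob-tags='project=contoso&status=approved'
```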
storage Storage Use Azcopy Blobs Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-upload.md
For detailed reference, see the [azcopy copy](storage-ref-azcopy-copy.md) refere
## Upload with index tags
-You can upload a file and add [blob index tags(preview)](../blobs/storage-manage-find-blobs.md) to the target blob.
+You can upload a file and add [blob index tags](../blobs/storage-manage-find-blobs.md) to the target blob.
If you're using Azure AD authorization, your security principal must be assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role, or it must be given permission to the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftstorage) via a custom Azure role. If you're using a Shared Access Signature (SAS) token, that token must provide access to the blob's tags via the `t` SAS permission.
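For example, an upload that stamps the target blob with two tags might look like this sketch (local path, URL, SAS token, and tag values are placeholders):

```azcopy
azcopy copy '/local/path/myTextFile.txt' 'https://<storage-account>.blob.core.windows.net/<container>/myTextFile.txt?<SAS-token>' --blob-tags='project=contoso&status=draft'
```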
storage Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-minimum-version.md
To log requests to your Azure Storage account and determine the TLS version used
Azure Storage logging in Azure Monitor supports using log queries to analyze log data. To query logs, you can use an Azure Log Analytics workspace. To learn more about log queries, see [Tutorial: Get started with Log Analytics queries](../../azure-monitor/logs/log-analytics-tutorial.md).
-To log Azure Storage data with Azure Monitor and analyze it with Azure Log Analytics, you must first create a diagnostic setting that indicates what types of requests and for which storage services you want to log data. Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public cloud regions. This preview enables logs for blobs (including Azure Data Lake Storage Gen2), files, queues, and tables. To create a diagnostic setting in the Azure portal, follow these steps:
+To log Azure Storage data with Azure Monitor and analyze it with Azure Log Analytics, you must first create a diagnostic setting that indicates what types of requests and for which storage services you want to log data. To create a diagnostic setting in the Azure portal, follow these steps:
1. Create a new Log Analytics workspace in the subscription that contains your Azure Storage account. After you configure logging for your storage account, the logs will be available in the Log Analytics workspace. For more information, see [Create a Log Analytics workspace in the Azure portal](../../azure-monitor/logs/quick-create-workspace.md).
1. Navigate to your storage account in the Azure portal.
-1. In the Monitoring section, select **Diagnostic settings (preview)**.
+1. In the Monitoring section, select **Diagnostic settings**.
1. Select the Azure Storage service for which you want to log requests. For example, choose **Blob** to log requests to Blob storage.
1. Select **Add diagnostic setting**.
1. Provide a name for the diagnostic setting.
To log Azure Storage data with Azure Monitor and analyze it with Azure Log Analy
After you create the diagnostic setting, requests to the storage account are subsequently logged according to that setting. For more information, see [Create diagnostic setting to collect resource logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-For a reference of fields available in Azure Storage logs in Azure Monitor, see [Resource logs (preview)](../blobs/monitor-blob-storage-reference.md#resource-logs-preview).
+For a reference of fields available in Azure Storage logs in Azure Monitor, see [Resource logs](../blobs/monitor-blob-storage-reference.md#resource-logs).
### Query logged requests by TLS version
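As a starting point, a query of this shape breaks down recent requests by negotiated TLS version. This sketch assumes the `StorageBlobLogs` table and a placeholder workspace GUID:

```azurecli
# Count blob requests from the last week, grouped by TLS version.
az monitor log-analytics query \
    --workspace <workspace-guid> \
    --analytics-query "StorageBlobLogs | where TimeGenerated > ago(7d) | summarize RequestCount = count() by TlsVersion"
```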
To configure the minimum TLS version for a storage account with a template, crea
1. In the Azure portal, choose **Create a resource**.
1. In **Search the Marketplace**, type **template deployment**, and then press **ENTER**.
-1. Choose **Template deployment (deploy using custom templates) (preview)**, choose **Create**, and then choose **Build your own template in the editor**.
+1. Choose **Template deployment (deploy using custom templates)**, choose **Create**, and then choose **Build your own template in the editor**.
1. In the template editor, paste in the following JSON to create a new account and set the minimum TLS version to TLS 1.2. Remember to replace the placeholders in angle brackets with your own values. ```json
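If a full template deployment is more than you need, the same setting can also be applied to an existing account from the CLI. A minimal sketch with placeholder names:

```azurecli
# Require TLS 1.2 or later for requests to the storage account.
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --min-tls-version TLS1_2
```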
storage Table Storage Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design.md
To design scalable and performant tables, you must consider factors such as performance, scalability, and cost. If you have previously designed schemas for relational databases, these considerations are familiar, but while there are some similarities between the Azure Table service storage model and relational models, there are also important differences. These differences typically lead to different designs that may look counter-intuitive or wrong to someone familiar with relational databases, yet make sense if you are designing for a NoSQL key/value store such as the Azure Table service. Many of your design differences reflect the fact that the Table service is designed to support cloud-scale applications that can contain billions of entities (or rows in relational database terminology) of data or for datasets that must support high transaction volumes. Therefore, you must think differently about how you store your data and understand how the Table service works. A well-designed NoSQL data store can enable your solution to scale much further and at a lower cost than a solution that uses a relational database. This guide helps you with these topics. ## About the Azure Table service
-This section highlights some of the key features of the Table service that are especially relevant to designing for performance and scalability. If you're new to Azure Storage and the Table service, first read [Introduction to Microsoft Azure Storage](../../storage/common/storage-introduction.md) and [Get started with Azure Table Storage using .NET](../../cosmos-db/tutorial-develop-table-dotnet.md) before reading the remainder of this article. Although the focus of this guide is on the Table service, it includes discussion of the Azure Queue and Blob services, and how you might use them with the Table service.
+This section highlights some of the key features of the Table service that are especially relevant to designing for performance and scalability. If you're new to Azure Storage and the Table service, first read [Get started with Azure Table Storage using .NET](../../cosmos-db/tutorial-develop-table-dotnet.md) before reading the remainder of this article. Although the focus of this guide is on the Table service, it includes discussion of the Azure Queue and Blob services, and how you might use them with the Table service.
What is the Table service? As you might expect from the name, the Table service uses a tabular format to store data. In the standard terminology, each row of the table represents an entity, and the columns store the various properties of that entity. Every entity has a pair of keys to uniquely identify it, and a timestamp column that the Table service uses to track when the entity was last updated. The timestamp is applied automatically, and you cannot manually overwrite the timestamp with an arbitrary value. The Table service uses this last-modified timestamp (LMT) to manage optimistic concurrency.
synapse-analytics Cognitive Services With Synapseml Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cognitive-services-with-synapseml-overview.md
### Search

- [Bing Image search](https://azure.microsoft.com/services/cognitive-services/bing-image-search-api/) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/BingImageSearch.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.BingImageSearch))
-- [Azure Cognitive search](https://docs.microsoft.com/azure/search/search-what-is-azure-search) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/index.html#com.microsoft.azure.synapse.ml.cognitive.search.AzureSearchWriter$), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AzureSearchWriter))
+- [Azure Cognitive search](../search/search-what-is-azure-search.md) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/index.html#com.microsoft.azure.synapse.ml.cognitive.search.AzureSearchWriter$), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AzureSearchWriter))
## Prerequisites
-1. Follow the steps in [Getting started](https://docs.microsoft.com/azure/cognitive-services/big-data/getting-started) to set up your Azure Databricks and Cognitive Services environment. This tutorial shows you how to install SynapseML and how to create your Spark cluster in Databricks.
+1. Follow the steps in [Getting started](../cognitive-services/big-data/getting-started.md) to set up your Azure Databricks and Cognitive Services environment. This tutorial shows you how to install SynapseML and how to create your Spark cluster in Databricks.
1. After you create a new notebook in Azure Databricks, copy the **Shared code** below and paste it into a new cell in your notebook.
1. Choose a service sample, below, and copy and paste it into a second new cell in your notebook.
1. Replace any of the service subscription key placeholders with your own key.
display(
## Text Analytics for Health Sample
-The [Text Analytics for Health Service](https://docs.microsoft.com/azure/cognitive-services/language-service/text-analytics-for-health/overview?tabs=ner) extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
+The [Text Analytics for Health Service](../cognitive-services/language-service/text-analytics-for-health/overview.md?tabs=ner) extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
```python
tdf.writeToAzureSearch(
    indexName=search_index,
    keyCol="id",
)
-```
+```
synapse-analytics Apache Spark Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/apache-spark-advisor.md
Title: Troubleshoot Spark application issues with Spark Advisor
-description: Learn how to troubleshoot Spark application issues with Spark Advisor. The advisor automatically analyzes queries and commands, and offers advice.
+ Title: Spark Advisor
+description: Spark Advisor is a system that automatically analyzes commands and queries, and shows the appropriate advice when a customer executes code or a query.
Last updated 06/23/2022
-# Troubleshoot Spark application issues with Spark Advisor
+# Spark Advisor
-Spark Advisor is a system that automatically analyzes your code, queries, and commands, and advises you about them. By following this advice, you can improve your execution performance, fix execution failures, and decrease costs. This article helps you solve common problems with Spark Advisor.
+Spark Advisor is a system that automatically analyzes commands and queries, and shows the appropriate advice when a customer executes code or a query. After applying the advice, you have the chance to improve your execution performance, decrease cost, and fix execution failures.
-## Advice on query hints
-### May return inconsistent results when using 'randomsplit'
-Verify that the hint is spelled correctly.
+
+## May return inconsistent results when using 'randomSplit'
+Inconsistent or inaccurate results may be returned when working with the results of the 'randomSplit' method. Use Apache Spark (RDD) caching before you call the 'randomSplit' method.
+
+Method randomSplit() is equivalent to performing sample() on your data frame multiple times, with each sample refetching, partitioning, and sorting your data frame within partitions. The data distribution across partitions and the sorting order are important for both randomSplit() and sample(). If either changes upon data refetch, there may be duplicates or missing values across splits, and the same sample using the same seed may produce different results.
+
+These inconsistencies may not happen on every run, but to eliminate them completely, cache your data frame, repartition on a column(s), or apply aggregate functions such as groupBy.
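A short Scala sketch of that pattern; the input path, split weights, and seed are illustrative:

```scala
// Cache and materialize the DataFrame so both splits draw from the same stable data.
val df = spark.read.parquet("/data/events").cache()
df.count() // an action that forces the cache to be populated before splitting

// With the cached input, the same seed yields reproducible, non-overlapping splits.
val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 42)
```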
+
+## Table/view name is already in use
+A view already exists with the same name as the created table, or a table already exists with the same name as the created view.
+When this name is used in queries or applications, only the view will be returned, no matter which one was created first. To avoid conflicts, rename either the table or the view.
+
+## Hint-related advice
+### Unable to recognize a hint
+The selected query contains a hint that isn't recognized. Verify that the hint is spelled correctly.
```scala spark.sql("SELECT /*+ unknownHint */ * FROM t1") ```
-### Unable to find specified relation names
-Verify that the relations are spelled correctly and are accessible within the scope of the hint.
+### Unable to find the specified relation names
+Unable to find the relations specified in the hint. Verify that the relations are spelled correctly and are accessible within the scope of the hint.
```scala spark.sql("SELECT /*+ BROADCAST(unknownTable) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str") ``` ### A hint in the query prevents another hint from being applied
+The selected query contains a hint that prevents another hint from being applied.
```scala spark.sql("SELECT /*+ BROADCAST(t1), MERGE(t1, t2) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str") ```
-### Reduce rounding error propagation caused by division
-This query contains the expression with the `double` type. We recommend that you enable the configuration `spark.advise.divisionExprConvertRule.enable`, which can help reduce the division expressions and the rounding error propagation.
+## Enable 'spark.advise.divisionExprConvertRule.enable' to reduce rounding error propagation
+This query contains an expression with the `double` type. We recommend that you enable the configuration 'spark.advise.divisionExprConvertRule.enable', which can help reduce the division expressions and the rounding error propagation.
```text
"t.a/t.b/t.c" convert into "t.a/(t.b * t.c)"
```
-### Improve query performance for non-equal join
-This query contains a time-consuming join because of an `Or` condition within the query. We recommend that you enable the configuration `spark.advise.nonEqJoinConvertRule.enable`. It can help convert the join triggered by the `Or` condition to shuffle sort merge join (SMJ) or broadcast hash join (BHJ) to accelerate this query.
-
-### The use of the randomSplit method might return inconsistent results
-Spark Advisor might return inconsistent or inaccurate results when you work with the results of the `randomSplit` method. Use Apache Spark resilient distributed dataset caching (RDD) before you use the `randomSplit` method.
+## Enable 'spark.advise.nonEqJoinConvertRule.enable' to improve query performance
+This query contains a time-consuming join due to an "Or" condition within the query. We recommend that you enable the configuration 'spark.advise.nonEqJoinConvertRule.enable', which can help convert the join triggered by the "Or" condition to a shuffle sort merge join (SMJ) or broadcast hash join (BHJ) to accelerate this query.
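Both advisor configurations can be switched on for the current session, as in this sketch (configuration names as given above):

```scala
// Opt in to the division-expression and non-equi-join rewrites described above.
spark.conf.set("spark.advise.divisionExprConvertRule.enable", "true")
spark.conf.set("spark.advise.nonEqJoinConvertRule.enable", "true")
```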
-The `randomSplit()` method is equivalent to performing a `sample()` action on your DataFrame multiple times, with each sample refetching, partitioning, and sorting your DataFrame within partitions. The data distribution across partitions and sort order is important for both `randomSplit()` and `sample()` methods. If either changes upon data refetch, there might be duplicates or missing values across splits, and the same sample that uses the same seed might produce different results.
+## Optimize Delta table with small files compaction
-These inconsistencies might not happen on every run. To eliminate them completely, cache your DataFrame, repartition on columns, or apply aggregate functions such as `groupBy`.
+This query is on a Delta table with many small files. To improve the performance of queries, run the OPTIMIZE command on the Delta table. More details can be found in this [article](https://aka.ms/small-file-advise-delta).
-### A table or view name might already be in use
-A view already exists with the same name as the created table, or a table already exists with the same name as the created view. When you use this name in queries or applications, Spark Advisor returns only the view, regardless of which one was created first. To avoid conflicts, rename either the table or the view.
+## Optimize Delta table with ZOrder
+This query is on a Delta table and contains a highly selective filter. To improve the performance of queries, run the OPTIMIZE ZORDER BY command on the Delta table. More details can be found in this [article](https://aka.ms/small-file-advise-delta).
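For reference, both maintenance commands are standard Delta Lake SQL, sketched here with an illustrative table path and column:

```scala
// Compact small files in the Delta table.
spark.sql("OPTIMIZE delta.`/data/events`")

// Additionally co-locate related data by a frequently filtered column.
spark.sql("OPTIMIZE delta.`/data/events` ZORDER BY (eventId)")
```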
## Next steps
-For more information on monitoring pipeline runs, see [Monitor pipeline runs using Synapse Studio](how-to-monitor-pipeline-runs.md).
+
+For more information on monitoring pipeline runs, see the [Monitor pipeline runs using Synapse Studio](how-to-monitor-pipeline-runs.md) article.
synapse-analytics Reservation Of Executors In Dynamic Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/reservation-of-executors-in-dynamic-allocation.md
In scenarios where multiple users try to run multiple spark jobs in a given Syna
## Next steps
-- [Quickstart: Create an Apache Spark pool in Azure Synapse Analytics using web tools](/azure/synapse-analytics/quickstart-create-apache-spark-pool-portal)
-- [What is Apache Spark in Azure Synapse Analytics](/azure/synapse-analytics/spark/apache-spark-overview)
-- [Automatically scale Azure Synapse Analytics Apache Spark pools](/azure/synapse-analytics/spark/apache-spark-autoscale)
-- [Azure Synapse Analytics](/azure/synapse-analytics)
+- [Quickstart: Create an Apache Spark pool in Azure Synapse Analytics using web tools](../quickstart-create-apache-spark-pool-portal.md)
+- [What is Apache Spark in Azure Synapse Analytics](./apache-spark-overview.md)
+- [Automatically scale Azure Synapse Analytics Apache Spark pools](./apache-spark-autoscale.md)
+- [Azure Synapse Analytics](../index.yml)
synapse-analytics Memory Concurrency Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/memory-concurrency-limits.md
The service levels range from DW100c to DW30000c.
The maximum service level is DW30000c, which has 60 Compute nodes and one distribution per Compute node. For example, a 600 TB data warehouse at DW30000c processes approximately 10 TB per Compute node. > [!NOTE]
-> Synapse Dedicated SQL pool is an evergreen platform service. Under [shared responsibility model in the cloud](/azure/security/fundamentals/shared-responsibility#division-of-responsibility), Microsoft continues to invest in advancements to underlying software and hardware which host dedicated SQL pool. As a result, the number of nodes or the type of computer hardware which underpins a given performance level (SLO) may change. The number of compute nodes listed here are provided as a reference, and shouldn't be used for sizing or performance purposes. Irrespective of number of nodes or underlying infrastructure, Microsoft's goal is to deliver performance in accordance with SLO; hence, we recommend that all sizing exercises must use cDWU as a guide. For more information on SLO and compute Data Warehouse Units, see [Data Warehouse Units (DWUs) for dedicated SQL pool (formerly SQL DW)](what-is-a-data-warehouse-unit-dwu-cdwu.md#service-level-objective).
+> Synapse Dedicated SQL pool is an evergreen platform service. Under [shared responsibility model in the cloud](../../security/fundamentals/shared-responsibility.md#division-of-responsibility), Microsoft continues to invest in advancements to underlying software and hardware which host dedicated SQL pool. As a result, the number of nodes or the type of computer hardware which underpins a given performance level (SLO) may change. The number of compute nodes listed here are provided as a reference, and shouldn't be used for sizing or performance purposes. Irrespective of number of nodes or underlying infrastructure, Microsoft's goal is to deliver performance in accordance with SLO; hence, we recommend that all sizing exercises must use cDWU as a guide. For more information on SLO and compute Data Warehouse Units, see [Data Warehouse Units (DWUs) for dedicated SQL pool (formerly SQL DW)](what-is-a-data-warehouse-unit-dwu-cdwu.md#service-level-objective).
## Concurrency maximums for workload groups
To learn more about how to leverage resource classes to optimize your workload f
* [Workload management workload groups](sql-data-warehouse-workload-isolation.md)
* [CREATE WORKLOAD GROUP](/sql/t-sql/statements/create-workload-group-transact-sql)
* [Resource classes for workload management](resource-classes-for-workload-management.md)
-* [Analyzing your workload](analyze-your-workload.md)
+* [Analyzing your workload](analyze-your-workload.md)
synapse-analytics Sql Data Warehouse Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot-connectivity.md
The status of your dedicated SQL pool (formerly SQL DW) will be shown here. If t
![Service Available](./media/sql-data-warehouse-troubleshoot-connectivity/resource-health.png)
-For more information, see [Resource Health](/azure/service-health/resource-health-overview).
+For more information, see [Resource Health](../../service-health/resource-health-overview.md).
## Check for paused or scaling operation
For more information on errors 40914 and 40615, refer to [vNet service endpoint
## Still having connectivity issues?
-Create a [support ticket](sql-data-warehouse-get-started-create-support-ticket.md) so the engineering team can support you.
+Create a [support ticket](sql-data-warehouse-get-started-create-support-ticket.md) so the engineering team can support you.
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
If you have a shared access signature key that you should use to access files, m
### Can't read, list, or access files in Azure Data Lake Storage
-If you use an Azure AD login without explicit credentials, make sure that your Azure AD identity can access the files in storage. To access the files, your Azure AD identity must have the **Blob Data Reader** permission, or permissions to **List** and **Read** [access control lists (ACL) in ADLS](/azure/storage/blobs/data-lake-storage-access-control-model). For more information, see [Query fails because file cannot be opened](#query-fails-because-file-cant-be-opened).
+If you use an Azure AD login without explicit credentials, make sure that your Azure AD identity can access the files in storage. To access the files, your Azure AD identity must have the **Blob Data Reader** permission, or permissions to **List** and **Read** [access control lists (ACL) in ADLS](../../storage/blobs/data-lake-storage-access-control-model.md). For more information, see [Query fails because file cannot be opened](#query-fails-because-file-cant-be-opened).
If you access storage by using [credentials](develop-storage-files-storage-access-control.md#credentials), make sure that your [managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity) or [SPN](develop-storage-files-storage-access-control.md?tabs=service-principal) has the **Data Reader** or **Contributor role** or specific ACL permissions. If you used a [shared access signature token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature), make sure that it has `rl` permission and that it hasn't expired.
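As one illustrative way to grant that permission (object ID and scope values are placeholders), assign the built-in role with the Azure CLI:

```azurecli
# Grant read access to blob data at the storage account scope.
az role assignment create \
    --assignee "<azure-ad-object-id>" \
    --role "Storage Blob Data Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```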
You don't need to use separate databases to isolate data for different tenants.
- [Azure Synapse Analytics frequently asked questions](../overview-faq.yml)
- [Store query results to storage using serverless SQL pool in Azure Synapse Analytics](create-external-table-as-select.md)
- [Synapse Studio troubleshooting](../troubleshoot/troubleshoot-synapse-studio.md)
-- [Troubleshoot a slow query on a dedicated SQL Pool](/troubleshoot/azure/synapse-analytics/dedicated-sql/troubleshoot-dsql-perf-slow-query)
+- [Troubleshoot a slow query on a dedicated SQL Pool](/troubleshoot/azure/synapse-analytics/dedicated-sql/troubleshoot-dsql-perf-slow-query)
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
The following table lists the features of Azure Synapse Analytics that have tran
|**Month** | **Feature** | **Learn more**|
|:-- |:-- | :-- |
-| November 2022 | **Ingest data from Azure Stream Analytics into Synapse Data Explorer** | The ability to use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster is now generally available. For more information, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector) and [ADX output from Azure Stream Analytics](/azure/stream-analytics/azure-database-explorer-output).|
+| November 2022 | **Ingest data from Azure Stream Analytics into Synapse Data Explorer** | The ability to use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster is now generally available. For more information, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector) and [ADX output from Azure Stream Analytics](../stream-analytics/azure-database-explorer-output.md).|
| November 2022 | **Azure Synapse Link for SQL** | Azure Synapse Link for SQL is now generally available for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link for SQL feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, visit [What is Azure Synapse Link for SQL?](synapse-link/sql-synapse-link-overview.md)|
| October 2022 | **SAP CDC connector GA** | The data connector for SAP Change Data Capture (CDC) is now GA. For more information, see [Announcing Public Preview of the SAP CDC solution in Azure Data Factory and Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-the-sap-cdc-solution-in-azure-dat).|
| September 2022 | **MERGE T-SQL syntax** | [MERGE T-SQL syntax](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true) has been a highly requested addition to the Synapse T-SQL library. As in SQL Server, the MERGE syntax encapsulates INSERTs/UPDATEs/DELETEs into a single high-performance statement. Available in dedicated SQL pools in version 10.0.17829 and above. For more, see the [MERGE T-SQL announcement blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/merge-t-sql-for-dedicated-sql-pools-is-now-ga/ba-p/3634331).|
This section summarizes recent new features and improvements to machine learning
|**Month** | **Feature** | **Learn more**|
|:-- |:-- | :-- |
-| November 2022 | **R Support (preview)** | Azure Synapse Analytics [now provides built-in R support for Apache Spark](/azure/synapse-analytics/spark/apache-spark-r-language), currently in preview. For an example, [install an R library from CRAN and CRAN snapshots](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_16). |
+| November 2022 | **R Support (preview)** | Azure Synapse Analytics [now provides built-in R support for Apache Spark](./spark/apache-spark-r-language.md), currently in preview. For an example, [install an R library from CRAN and CRAN snapshots](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_16). |
| August 2022 | **SynapseML v.0.10.0** | New [release of SynapseML v0.10.0](https://github.com/microsoft/SynapseML/releases/tag/v0.10.0) (previously MMLSpark), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. Learn more about the [latest additions to SynapseML](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/exciting-new-release-of-synapseml/ba-p/3589606) and get started with [SynapseML](https://aka.ms/spark).|
| August 2022 | **.NET support** | SynapseML v0.10 [adds full support for .NET languages](https://devblogs.microsoft.com/dotnet/announcing-synapseml-for-dotnet/) like C# and F#. For a .NET SynapseML example, see [.NET Example with LightGBMClassifier](https://microsoft.github.io/SynapseML/docs/getting_started/dotnet_example/).|
| August 2022 | **Azure Open AI Service support** | SynapseML now allows users to tap into 175-Billion parameter language models (GPT-3) from OpenAI that can generate and complete text and code near human parity. For more information, see [Azure OpenAI for Big Data](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20OpenAI/).|
Azure Data Explorer (ADX) is a fast and highly scalable data exploration service
|:-- |:-- | :-- |
| December 2022 | **Demystifying data consumption using Azure Synapse Data Explorer** | A guide to the various ways of [retrieving, consuming and visualizing data from Azure Synapse Data Explorer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/demystifying-data-consumption-using-azure-synapse-data-explorer/ba-p/3684265). |
| November 2022 | **Table Level Sharing support via Azure Data Share** | We have now [added Table level sharing support](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_10) via the [Azure Data Share interface](https://azure.microsoft.com/products/data-share/#overview) where you can share specific tables in the database. This allows you to easily and securely share your data with people in your company or external partners. |
-| November 2022 | **Ingest data from Azure Stream Analytics into Synapse Data Explorer** | The ability to use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster is now generally available. For more information, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector) and [ADX output from Azure Stream Analytics](/azure/stream-analytics/azure-database-explorer-output).|
+| November 2022 | **Ingest data from Azure Stream Analytics into Synapse Data Explorer** | The ability to use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster is now generally available. For more information, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector) and [ADX output from Azure Stream Analytics](../stream-analytics/azure-database-explorer-output.md).|
| November 2022 | **Parse-kv operator** | The new [parse-kv operator](/azure/data-explorer/kusto/query/parse-kv-operator) extracts structured information from a string expression and represents the information in a key/value form. You can use a [specified delimiter](/azure/data-explorer/kusto/query/parse-kv-operator#specified-delimeter), a [non-specified delimiter](/azure/data-explorer/kusto/query/parse-kv-operator#non-specified-delimiter), or [Regex](/azure/data-explorer/kusto/query/parse-kv-operator#regex) via a [RE2 regular expression](/azure/data-explorer/kusto/query/re2). |
| October 2022 | **Leaders and followers in ADX clusters** | Use the [database page in the Azure portal](https://ms.portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Kusto%2Fclusters) to easily identify all the [follower databases following a leader, and the leader for a given follower](/azure/data-explorer/follower). |
| October 2022 | **Aliasing follower databases** | The [follower database feature](/azure/data-explorer/follower) allows you to attach a database located in a different cluster to your Azure Data Explorer cluster. [Now you can override the database name](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_12) while establishing a follower relationship. |
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-azure-marketplace.md
You can create host pools in the following Azure regions:
- West US
- West US 2
->[!IMPORTANT]
->This list refers to the list of regions where the _metadata_ for the host pool will be stored. Virtual machines (hosts) in a host pool can be located in any region, as well as [on-premises](azure-stack-hci-overview.md).
+This list refers to the regions where the *metadata* for the host pool will be stored. Session hosts added to a host pool can be located in any region, as well as on-premises when using [Azure Virtual Desktop on Azure Stack HCI](azure-stack-hci-overview.md).
## Prerequisites
virtual-desktop Create Host Pools Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-powershell.md
You can create a virtual machine in multiple ways:
- [Create a virtual machine from a managed image](../virtual-machines/windows/create-vm-generalized-managed.md)
- [Create a virtual machine from an unmanaged image](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-from-user-image)
->[!NOTE]
->If you're deploying a virtual machine using Windows 7 as the host OS, the creation and deployment process will be a little different. For more details, see [Deploy a Windows 7 virtual machine on Azure Virtual Desktop](./virtual-desktop-fall-2019/deploy-windows-7-virtual-machine.md).
-
After you've created your session host virtual machines, [apply a Windows license to a session host VM](apply-windows-license.md#manually-apply-a-windows-license-to-a-windows-client-session-host-vm) to run your Windows or Windows Server virtual machines without paying for another license.

## Prepare the virtual machines for Azure Virtual Desktop agent installations
virtual-desktop Deploy Windows 7 Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-windows-7-virtual-machine.md
- Title: Deploy Windows 7 session host virtual machine Azure Virtual Desktop - Azure
-description: How to configure and deploy a Windows 7 session host virtual machine on Azure Virtual Desktop.
-- Previously updated : 08/08/2022---
-# Deploy a Windows 7 session host virtual machine on Azure Virtual Desktop
-
-The process to deploy a Windows 7 session host virtual machine (VM) for Azure Virtual Desktop is slightly different than for other versions of Windows. This guide will tell you how to deploy Windows 7 session host VMs.
-
-> [!IMPORTANT]
-> Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](prerequisites.md#operating-systems-and-licenses).
-
-## Prerequisites
-
-Before you start, follow the instructions in [Create a host pool with PowerShell](create-host-pools-powershell.md) to create a host pool. If you're using the portal, follow the instructions in steps 1 through 9 of [Create a host pool using the Azure portal](create-host-pools-azure-marketplace.md). After that, select **Review + Create** to create an empty host pool.
-
-## Configure a Windows 7 virtual machine
-
-Once you've done the prerequisites, you're ready to configure your Windows 7 VM for deployment on Azure Virtual Desktop.
-
-To set up a Windows 7 VM on Azure Virtual Desktop:
-
-1. Sign in to the Azure portal and either search for the Windows 7 Enterprise image or upload your own customized Windows 7 Enterprise (x64) image.
-2. Deploy one or multiple virtual machines with Windows 7 Enterprise as its host operating system. Make sure the virtual machines allow Remote Desktop Protocol (RDP) (the TCP/3389 port).
-3. Connect to the Windows 7 Enterprise host using the RDP and authenticate with the credentials you defined while configuring your deployment.
-4. Add the account you used while connecting to the host with RDP to the "Remote Desktop User" group. If you don't add the account, you might not be able to connect to the VM after you join it to your Active Directory domain.
-5. Go to Windows Update on your VM.
-6. Install all Windows Updates in the Important category.
-7. Install all Windows Updates in the Optional category (excluding language packs). This process installs the Remote Desktop Protocol 8.0 update ([KB2592687](https://www.microsoft.com/download/details.aspx?id=35387)) that you need to complete these instructions.
-8. Open the Local Group Policy Editor and navigate to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**.
-9. Enable the Remote Desktop Protocol 8.0 policy.
-10. Join this VM to your Active Directory domain.
-11. Restart the virtual machine by running the following command:
-
- ```cmd
- shutdown /r /t 0
- ```
-
-12. Follow the instructions [here](/powershell/module/az.desktopvirtualization/new-azwvdregistrationinfo) to get a registration token.
-
- - If you'd rather use the Azure portal, you can also go to the Overview page of the host pool you want to add the VM to and create a token there.
-
-13. [Download the Azure Virtual Desktop Agent for Windows 7](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3JZCm).
-14. [Download the Azure Virtual Desktop Agent Manager for Windows 7](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3K2e3).
-15. Open the Azure Virtual Desktop Agent installer and follow the instructions. When prompted, give the registration key you created in step 12.
-16. Open the Azure Virtual Desktop Agent Manager and follow the instructions.
-17. Optionally, block the TCP/3389 port to remove direct Remote Desktop Protocol access to the VM.
-18. Optionally, confirm that your .NET framework is at least version 4.7.2. Updating your framework is especially important if you're creating a custom image.
-
-## Next steps
-
-Your Azure Virtual Desktop deployment is now ready to use. [Download the latest version of the Azure Virtual Desktop client](https://aka.ms/wvd/clients/windows) to get started.
-
-For a list of known issues and troubleshooting instructions for Windows 7 on Azure Virtual Desktop, see our troubleshooting article at [Troubleshoot Windows 7 virtual machines in Azure Virtual Desktop](./virtual-desktop-fall-2019/troubleshoot-windows-7-vm.md).
virtual-desktop Deploy Windows Server Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-windows-server-virtual-machine.md
# Deploy Windows Server-based virtual machines on Azure Virtual Desktop

>[!IMPORTANT]
->This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/deploy-windows-7-virtual-machine.md).
+>This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects.
The process for deploying Windows Server-based virtual machines (VMs) on Azure Virtual Desktop is slightly different than the one for VMs running other versions of Windows, such as Windows 10 or Windows 11. This guide will walk you through the process.
virtual-desktop Insights Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-glossary.md
Available sessions shows the number of available sessions in the host pool. The
The client operating system (OS) shows which version of the OS end-users accessing Azure Virtual Desktop resources are currently using. The client OS also shows which version of the web (HTML) client and the full Remote Desktop client the users have. For a full list of Windows OS versions, see [Operating System Version](/windows/win32/sysinfo/operating-system-version).
->[!IMPORTANT]
->Windows 7 support will end on January 10, 2023. The client OS version for Windows 7 is Windows 6.1.
-
## Connection success

This item shows connection health. "Connection success" means that the connection could reach the host, as confirmed by the stack on that virtual machine. A failed connection means that the connection couldn't reach the host.
virtual-desktop Key Distribution Center Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/key-distribution-center-proxy.md
This article will show you how to configure the feed in the Azure Virtual Deskto
To configure an Azure Virtual Desktop session host with a KDC proxy, you'll need the following things:

- Access to the Azure portal and an Azure administrator account.
-- The remote client machines must be running either Windows 10 or Windows 7 and have the [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop) installed. Currently, the web client is not supported.
+- The remote client machines must be running at least Windows 10 and have the [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop) installed. The web client isn't currently supported.
- You must have a KDC proxy already installed on your machine. To learn how to do that, see [Set up the RD Gateway role for Azure Virtual Desktop](/windows-server/remote/remote-desktop-services/remote-desktop-gateway-role).
- The machine's OS must be Windows Server 2016 or later.
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/overview.md
Here's what you can do when you run Azure Virtual Desktop on Azure:
- Set up a multi-session Windows 11 or Windows 10 deployment that delivers a full Windows experience with scalability
- Present Microsoft 365 Apps for enterprise and optimize it to run in multi-user virtual scenarios
-- Provide Windows 7 virtual desktops with free Extended Security Updates
- Bring your existing Remote Desktop Services (RDS) and Windows Server desktops and apps to any computer
- Virtualize both desktops and apps
- Manage desktops and apps from different Windows and Windows Server operating systems with a unified management experience
-> [!IMPORTANT]
-> Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](prerequisites.md#operating-systems-and-licenses).
-
## Introductory video

Learn about Azure Virtual Desktop (formerly Windows Virtual Desktop), why it's unique, and what's new in this video:
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
You have a choice of operating systems that you can use for session hosts to pro
|<ul><li>[Windows Server 2022](/lifecycle/products/windows-server-2022)</li><li>[Windows Server 2019](/lifecycle/products/windows-server-2019)</li><li>[Windows Server 2016](/lifecycle/products/windows-server-2016)</li><li>[Windows Server 2012 R2](/lifecycle/products/windows-server-2012-r2)</li></ul>|License entitlement:<ul><li>Remote Desktop Services (RDS) Client Access License (CAL) with Software Assurance (per-user or per-device), or RDS User Subscription Licenses.</li></ul>Per-user access pricing is not available for Windows Server operating systems.| > [!IMPORTANT]
-> - Azure Virtual Desktop doesn't support 32-bit operating systems or SKUs not listed in the previous table. In addition, Windows 7 doesn't support any VHD or VHDX-based profile solutions hosted on managed Azure Storage due to a sector size limitation.
->
-> - Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](prerequisites.md#operating-systems-and-licenses).
->
+> - Azure Virtual Desktop doesn't support 32-bit operating systems or SKUs not listed in the previous table.
> - [Ephemeral OS disks for Azure VMs](../virtual-machines/ephemeral-os-disks.md) are not supported. You can use operating system images provided by Microsoft in the [Azure Marketplace](https://azuremarketplace.microsoft.com), or your own custom images stored in an Azure Compute Gallery, as a managed image, or storage blob. To learn more about how to create custom images, see:
There are different automation and deployment options available depending on whi
|Windows 11 Enterprise|Yes|Yes|No|No|
|Windows 10 Enterprise multi-session|Yes|Yes|Yes|Yes|
|Windows 10 Enterprise|Yes|Yes|No|No|
-|Windows 7 Enterprise|Yes|Yes|No|No|
|Windows Server 2022|Yes|Yes|No|No|
|Windows Server 2019|Yes|Yes|Yes|Yes|
|Windows Server 2016|Yes|Yes|No|No|
virtual-desktop Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-overview.md
The public preview of using Private Link with Azure Virtual Desktop has the foll
- You'll need to [re-register your resource provider](private-link-setup.md#re-register-your-resource-provider) in order to use Private Link. -- You can't use the [manual connection approval method](/azure/private-link/private-endpoint-overview#access-to-a-private-link-resource-using-approval-workflow) when using Private Link with Azure Virtual Desktop. We're aware of this issue and are working on fixing it.
+- You can't use the [manual connection approval method](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow) when using Private Link with Azure Virtual Desktop. We're aware of this issue and are working on fixing it.
- All Azure Virtual Desktop clients are compatible with Private Link, but we currently only offer troubleshooting support for the web client version of Private Link.
The public preview of using Private Link with Azure Virtual Desktop has the foll
- Learn how to configure Azure Private Endpoint DNS at [Private Link DNS integration](../private-link/private-endpoint-dns.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder). - For general troubleshooting guides for Private Link, see [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md). - Understand how connectivity for the Azure Virtual Desktop service works at [Azure Virtual Desktop network connectivity](network-connectivity.md).-- See the [Required URL list](safe-url-list.md) for the list of URLs you'll need to unblock to ensure network access to the Azure Virtual Desktop service.
+- See the [Required URL list](safe-url-list.md) for the list of URLs you'll need to unblock to ensure network access to the Azure Virtual Desktop service.
virtual-desktop Proxy Server Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/proxy-server-support.md
Azure Virtual Desktop components on the session host run in the context of their
Proxy servers have capacity limits. Unlike regular HTTP traffic, RDP traffic has long-running, chatty connections that are bi-directional and consume lots of bandwidth. Before you set up a proxy server, talk to your proxy server vendor about how much throughput your server has. Also ask them how many proxy sessions you can run at one time. After you deploy the proxy server, carefully monitor its resource use for bottlenecks in Azure Virtual Desktop traffic.
-### Proxy servers for Windows 7 session hosts
-
-Session hosts running on Windows 7 don't support proxy server connections for reverse-connect RDP data. If the session host can't directly connect to the Azure Virtual Desktop gateways, the connection won't work.
-
-> [!IMPORTANT]
-> Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](prerequisites.md#operating-systems-and-licenses).
- ### Proxy servers and Teams optimization Azure Virtual Desktop doesn't support proxy servers for Teams optimization.
bitsadmin /util /setieproxy LOCALSYSTEM AUTOSCRIPT http://server/proxy.pac
The Azure Virtual Desktop client supports proxy servers configured with system settings or a [Network Proxy CSP](/windows/client-management/mdm/networkproxy-csp).
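For clients and services that take their proxy from the WinHTTP system settings, one common way to set and verify the configuration is `netsh winhttp`. A minimal sketch, assuming a hypothetical proxy at `proxy.contoso.com:8080` and an internal bypass list:

```cmd
:: Point WinHTTP (used by system services) at the proxy.
:: The proxy address and bypass list below are placeholders.
netsh winhttp set proxy proxy-server="proxy.contoso.com:8080" bypass-list="*.contoso.com;<local>"

:: Confirm the resulting configuration.
netsh winhttp show proxy
```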
-### Support for clients running on Windows 7
-
-Clients running on Windows 7 don't support proxy server connections for reverse-connect RDP data. If the client can't directly connect to the Azure Virtual Desktop gateways, the connection won't work.
- ### Azure Virtual Desktop client support The following table shows which Azure Virtual Desktop clients support proxy servers:
virtual-desktop Set Up Service Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-service-alerts.md
To configure service alerts:
In this tutorial, you learned how to set up and use Azure Service Health to monitor service issues and health advisories for Azure Virtual Desktop. To learn about how to sign in to Azure Virtual Desktop, continue to the Connect to Azure Virtual Desktop How-tos. > [!div class="nextstepaction"]
-> [Connect to the Remote Desktop client on Windows 7 and Windows 10](./users/connect-windows.md)
+> [Connect to the Remote Desktop client on Windows 10](./users/connect-windows.md)
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md
You must generate a new registration key that is used to re-register your sessio
By reinstalling the latest version of the agent and boot loader, the side-by-side stack and Geneva monitoring agent are automatically installed as well. To reinstall the agent and boot loader:
-1. Sign in to your session host VM as an administrator and use the correct version of the agent installer for the operating system of your session host VM:
- 1. For Windows 10 and Windows 11:
- 1. [Azure Virtual Desktop Agent](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWrmXv)
- 1. [Azure Virtual Desktop Agent Bootloader](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWrxrH)
- 1. For Windows 7:
- 1. [Azure Virtual Desktop Agent](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3JZCm)
- 1. [Azure Virtual Desktop Agent Bootloader](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3K2e3)
+1. Sign in to your session host VM as an administrator, then download and run the agent and boot loader installers:
+ - [Azure Virtual Desktop Agent](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWrmXv)
+ - [Azure Virtual Desktop Agent Bootloader](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWrxrH)
> [!TIP] > You may need to unblock each of the agent and boot loader installers you downloaded. Right-click each file and select **Properties**, then select **Unblock**, and finally select **OK**.
-1. Run the agent installer
1. When the installer asks you for the registration token, paste the registration key from your clipboard. > [!div class="mx-imgBorder"]
By reinstalling the most updated version of the agent and boot loader, the side-
1. Run the boot loader installer. 1. Restart your session VM. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+1. In the search bar, enter **Azure Virtual Desktop** and select the matching service entry.
1. Select **Host pools** and select the name of the host pool that your session host VM is in. 1. Select **Session Hosts** to see the list of all session hosts in that host pool. 1. You should now see the session host registered in the host pool with the status **Available**.
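The reinstall steps above assume you already have a registration key on your clipboard. As a hedged sketch of generating one with the Az PowerShell module (the resource group and host pool names are placeholders, and the `Az.DesktopVirtualization` module is assumed to be installed):

```powershell
# Generate a host pool registration key that stays valid for 24 hours.
# "myResourceGroup" and "myHostPool" are placeholder names.
New-AzWvdRegistrationInfo -ResourceGroupName "myResourceGroup" `
    -HostPoolName "myHostPool" `
    -ExpirationTime (Get-Date).AddHours(24)
```

The cmdlet returns the registration info, including the token you can paste into the agent installer when prompted.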
virtual-desktop Troubleshoot Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-windows.md
Authentication issues can happen because you're using an *N* SKU of Windows on y
### Authentication issues when TLS 1.2 not enabled
-Authentication issues can happen when your local Windows device doesn't have TLS 1.2 enabled. This is most likely with Windows 7 where TLS 1.2 isn't enabled by default. To enable TLS 1.2 on Windows 7, you need to set the following registry values:
+Authentication issues can happen when your local Windows device doesn't have TLS 1.2 enabled. To enable TLS 1.2, you need to set the following registry values:
- **Key**: `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client`
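As a sketch of applying this: the standard SCHANNEL settings that enable TLS 1.2 for client connections are `DisabledByDefault = 0` and `Enabled = 1` under the key named above. From an elevated command prompt (a restart may be required afterwards):

```cmd
:: Enable TLS 1.2 for client connections under the SCHANNEL key named above.
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v Enabled /t REG_DWORD /d 1 /f
```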
virtual-desktop Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows.md
Before you can access your resources, you'll need to meet the prerequisites:
- Windows 11 IoT Enterprise - Windows 10 - Windows 10 IoT Enterprise
- - Windows 7
- Windows Server 2019 - Windows Server 2016 - Windows Server 2012 R2
Before you can access your resources, you'll need to meet the prerequisites:
- [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2098960) - [Windows on Arm](https://go.microsoft.com/fwlink/?linkid=2098961) -- .NET Framework 4.6.2 or later. You may need to install this on Windows 7, Windows Server 2012 R2, Windows Server 2016, and some versions of Windows 10. To download the latest version, see [Download .NET Framework](https://dotnet.microsoft.com/download/dotnet-framework).-
-> [!IMPORTANT]
-> Extended support for using Windows 7 to connect to Azure Virtual Desktop ends on January 10, 2023.
+- .NET Framework 4.6.2 or later. You may need to install this on Windows Server 2012 R2, Windows Server 2016, and some versions of Windows 10. To download the latest version, see [Download .NET Framework](https://dotnet.microsoft.com/download/dotnet-framework).
## Install the Remote Desktop client
virtual-desktop Configure Host Pool Personal Desktop Assignment Type 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/configure-host-pool-personal-desktop-assignment-type-2019.md
If you need to add the session host back into the personal desktop host pool, un
Now that you've configured the personal desktop assignment type, you can sign in to an Azure Virtual Desktop client to test it as part of a user session. These next two How-tos will tell you how to connect to a session using the client of your choice: -- [Connect with the Windows Desktop client](connect-windows-7-10-2019.md)
+- [Connect with the Windows Desktop client](connect-windows-2019.md)
- [Connect with the web client](connect-web-2019.md)
virtual-desktop Connect Windows 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-windows-2019.md
+
+ Title: Connect to Azure Virtual Desktop (classic) Windows 10 - Azure
+description: How to connect to Azure Virtual Desktop (classic) using the Windows Desktop client.
+ Last updated : 08/08/2022
+# Connect with the Windows Desktop (classic) client
+
+> Applies to: Windows 10 and Windows 10 IoT Enterprise
+
+>[!IMPORTANT]
+>This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../users/connect-windows.md).
+
+You can access Azure Virtual Desktop resources on devices running Windows 10 or Windows 10 IoT Enterprise by using the Windows Desktop client. The client doesn't support Windows 8 or Windows 8.1.
+
+>[!NOTE]
+>The Windows client automatically defaults to Azure Virtual Desktop (classic). However, if the client detects that the user also has Azure Resource Manager resources, it automatically adds the resources or notifies the user that they are available.
+
+> [!IMPORTANT]
+> - Azure Virtual Desktop doesn't support the RemoteApp and Desktop Connections (RADC) client or the Remote Desktop Connection (MSTSC) client.
+>
+> - Azure Virtual Desktop doesn't currently support the Remote Desktop client from the Windows Store.
+
+## Install the Windows Desktop client
+
+Choose the client that matches your version of Windows:
+
+- [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2068602)
+- [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2098960)
+- [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2098961)
+
+You can install the client for the current user, which doesn't require admin rights, or your admin can install and configure the client so that all users on the device can access it.
+
+Once installed, the client can be launched from the Start menu by searching for **Remote Desktop**.
+
+## Subscribe to a Workspace
+
+There are two ways to subscribe to a Workspace: the client can try to discover the resources available to your work or school account, or you can directly specify the URL where your resources are if the client can't find them. Once you've subscribed to a Workspace, you can launch resources with one of the following methods:
+
+- Go to the Connection Center and double-click a resource to launch it.
+- You can also go to the Start menu and look for a folder with the Workspace name or enter the resource name in the search bar.
+
+### Subscribe with a user account
+
+1. From the main page of the client, select **Subscribe**.
+2. Sign in with your user account when prompted.
+3. The resources appear in the Connection Center, grouped by workspace.
+
+### Subscribe with a URL
+
+1. From the main page of the client, select **Subscribe with URL**.
+2. Enter the Workspace URL or your email address:
+ - If you use the **Workspace URL**, use the one your admin gave you. If accessing resources from Azure Virtual Desktop, you can use one of the following URLs:
+ - Azure Virtual Desktop (classic): `https://rdweb.wvd.microsoft.com/api/feeddiscovery/webfeeddiscovery.aspx`
+ - Azure Virtual Desktop: `https://rdweb.wvd.microsoft.com/api/arm/feeddiscovery`
+ - If you're using the **Email** field instead, enter your email address. This tells the client to search for a URL associated with your email address if your admin has set up [email discovery](/windows-server/remote/remote-desktop-services/rds-email-discovery).
+3. Select **Next**.
+4. Sign in with your user account when prompted.
+5. The resources should appear in the Connection Center, grouped by workspace.
+
+## Next steps
+
+To learn more about how to use the Windows Desktop client, check out [Get started with the Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop/).
virtual-desktop Create Host Pools Azure Marketplace 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-azure-marketplace-2019.md
Users you add to the desktop application group can sign in to Azure Virtual Desk
Here are the current supported clients:
-* [Remote Desktop client for Windows 7 and Windows 10](connect-windows-7-10-2019.md)
-* [Azure Virtual Desktop web client](connect-web-2019.md)
+- [Remote Desktop client for Windows 10](connect-windows-2019.md)
+- [Azure Virtual Desktop web client](connect-web-2019.md)
## Next steps
virtual-desktop Create Host Pools Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-powershell-2019.md
You can create a virtual machine in multiple ways:
- [Create a virtual machine from a managed image](../../virtual-machines/windows/create-vm-generalized-managed.md) - [Create a virtual machine from an unmanaged image](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-from-user-image)
->[!NOTE]
->If you're deploying a virtual machine using Windows 7 as the host OS, the creation and deployment process will be a little different. For more details, see [Deploy a Windows 7 virtual machine on Azure Virtual Desktop](deploy-windows-7-virtual-machine.md).
->
-> Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](../prerequisites.md#operating-systems-and-licenses).
- After you've created your session host virtual machines, [apply a Windows license to a session host VM](../apply-windows-license.md#manually-apply-a-windows-license-to-a-windows-client-session-host-vm) to run your Windows or Windows Server virtual machines without paying for another license. ## Prepare the virtual machines for Azure Virtual Desktop agent installations
virtual-desktop Customize Feed Virtual Desktop Users 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/customize-feed-virtual-desktop-users-2019.md
Set-RdsRemoteDesktop -TenantName <tenantname> -HostPoolName <hostpoolname> -AppG
Now that you've customized the feed for users, you can sign in to an Azure Virtual Desktop client to test it out. To do so, continue to the Connect to Azure Virtual Desktop How-tos:
- * [Connect from Windows 10 or Windows 7](connect-windows-7-10-2019.md)
- * [Connect from a web browser](connect-web-2019.md)
+ - [Connect from the Windows Desktop client](connect-windows-2019.md)
+ - [Connect from a web browser](connect-web-2019.md)
virtual-desktop Customize Rdp Properties 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/customize-rdp-properties-2019.md
Set-RdsHostPool -TenantName <tenantname> -Name <hostpoolname> -CustomRdpProperty
Now that you've customized the RDP properties for a given host pool, you can sign in to an Azure Virtual Desktop client to test them as part of a user session. These next two How-tos will tell you how to connect to a session using the client of your choice: -- [Connect with the Windows Desktop client](connect-windows-7-10-2019.md)
+- [Connect with the Windows Desktop client](connect-windows-2019.md)
- [Connect with the web client](connect-web-2019.md)
virtual-desktop Deploy Windows 7 Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/deploy-windows-7-virtual-machine.md
- Title: Deploy Windows 7 virtual machine Azure Virtual Desktop (classic) - Azure
-description: How to configure and deploy a Windows 7 virtual machine on Azure Virtual Desktop Azure Virtual Desktop (classic).
- Previously updated : 08/08/2022
-# Deploy a Windows 7 virtual machine on Azure Virtual Desktop (classic)
-
-The process to deploy a Windows 7 virtual machine (VM) on Azure Virtual Desktop is slightly different than for VMs running later versions of Windows. This guide will tell you how to deploy Windows 7.
-
-> [!IMPORTANT]
-> Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](../prerequisites.md#operating-systems-and-licenses).
-
-## Prerequisites
-
-Before you start, follow the instructions in [Create a host pool with PowerShell](create-host-pools-powershell-2019.md) to create a host pool. After that, follow the instructions in [Create host pools in Azure Marketplace](create-host-pools-azure-marketplace-2019.md#optional-assign-additional-users-to-the-desktop-application-group) to assign one or more users to the desktop application group.
-
-## Configure a Windows 7 virtual machine
-
-Once you've done the prerequisites, you're ready to configure your Windows 7 VM for deployment on Azure Virtual Desktop.
-
-To set up a Windows 7 VM on Azure Virtual Desktop:
-
-1. Sign in to the Azure portal and either search for the Windows 7 Enterprise image or upload your own customized Windows 7 Enterprise (x64) image.
-2. Deploy one or multiple virtual machines with Windows 7 Enterprise as its host operating system. Make sure the virtual machines allow Remote Desktop Protocol (RDP) (the TCP/3389 port).
-3. Connect to the Windows 7 Enterprise host using the RDP and authenticate with the credentials you defined while configuring your deployment.
-4. Add the account you used while connecting to the host with RDP to the "Remote Desktop User" group. If you don't do this, you might not be able to connect to the VM after you join it to your Active Directory domain.
-5. Go to Windows Update on your VM.
-6. Install all Windows Updates in the Important category.
-7. Install all Windows Updates in the Optional category (excluding language packs). This installs the Remote Desktop Protocol 8.0 update ([KB2592687](https://www.microsoft.com/download/details.aspx?id=35387)) that you need to complete these instructions.
-8. Open the Local Group Policy Editor and navigate to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**.
-9. Enable the Remote Desktop Protocol 8.0 policy.
-10. Join this VM to your Active Directory domain.
-11. Restart the virtual machine by running the following command:
-
- ```cmd
- shutdown /r /t 0
- ```
-
-12. Follow the instructions [here](/powershell/module/windowsvirtualdesktop/export-rdsregistrationinfo/) to get a registration token.
-13. [Download the Azure Virtual Desktop Agent for Windows 7](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3JZCm).
-14. [Download the Azure Virtual Desktop Agent Manager for Windows 7](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3K2e3).
-15. Open the Azure Virtual Desktop Agent installer and follow the instructions. When prompted, give the registration key you created in step 12.
-16. Open the Azure Virtual Desktop Agent Manager and follow the instructions.
-17. Optionally, block the TCP/3389 port to remove direct Remote Desktop Protocol access to the VM.
-18. Optionally, confirm that your .NET framework is at least version 4.7.2. This is especially important if you're creating a custom image.
-
-## Next steps
-
-Your Azure Virtual Desktop deployment is now ready to use. [Download the latest version of the Azure Virtual Desktop client](https://aka.ms/wvd/clients/windows) to get started.
-
-For a list of known issues and troubleshooting instructions for Windows 7 on Azure Virtual Desktop, see our troubleshooting article at [Troubleshoot Windows 7 virtual machines in Azure Virtual Desktop](troubleshoot-windows-7-vm.md).
virtual-desktop Environment Setup 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/environment-setup-2019.md
To learn how to set up your Azure Virtual Desktop tenant, see [Create a tenant i
To learn how to connect to Azure Virtual Desktop, see one of the following articles: -- [Connect from Windows 10 or Windows 7](connect-windows-7-10-2019.md)
+- [Connect from the Windows Desktop client](connect-windows-2019.md)
- [Connect from a web browser](connect-web-2019.md)
virtual-desktop Expand Existing Host Pool 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/expand-existing-host-pool-2019.md
Follow the instructions in [Run the Azure Resource Manager template for provisio
Now that you've expanded your existing host pool, you can sign in to an Azure Virtual Desktop client to test it as part of a user session. You can connect to a session with any of the following clients: -- [Connect with the Windows Desktop client](connect-windows-7-10-2019.md)
+- [Connect with the Windows Desktop client](connect-windows-2019.md)
- [Connect with the web client](connect-web-2019.md) - [Connect with the Android client](connect-android-2019.md) - [Connect with the macOS client](connect-macos-2019.md)
virtual-desktop Set Up Service Alerts 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/set-up-service-alerts-2019.md
To configure service alerts:
In this tutorial, you learned how to set up and use Azure Service Health to monitor service issues and health advisories for Azure Virtual Desktop. To learn about how to sign in to Azure Virtual Desktop, continue to the Connect to Azure Virtual Desktop How-tos. > [!div class="nextstepaction"]
-> [Connect to the Remote Desktop client on Windows 7 and Windows 10](connect-windows-7-10-2019.md)
+> [Connect to the Remote Desktop client on Windows](connect-windows-2019.md)
virtual-desktop Troubleshoot Windows 7 Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-windows-7-vm.md
- Title: Windows 7 virtual machines Azure Virtual Desktop (classic) - Azure
-description: How to resolve issues for Windows 7 virtual machines (VMs) in a Azure Virtual Desktop (classic) environment.
- Previously updated : 08/08/2022
-# Troubleshoot Windows 7 virtual machines in Azure Virtual Desktop (classic)
-
-Use this article to troubleshoot issues you're having when configuring the Azure Virtual Desktop session host virtual machines (VMs).
-
-> [!IMPORTANT]
-> Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](../prerequisites.md#operating-systems-and-licenses).
-
-## Known issues
-
-Windows 7 on Azure Virtual Desktops doesn't support the following features:
--- Virtualized applications (RemoteApps)-- Time zone redirection-- Automatic DPI scaling-
-Azure Virtual Desktop can only virtualize full desktops for Windows 7.
-
-While Automatic DPI scaling isn't supported, you can manually change the resolution on your virtual machine by right-clicking the icon in the Remote Desktop client and selecting **Resolution**.
-
-## Error: Can't access the Remote Desktop User group
-
-If Azure Virtual Desktop can't find you or your users' credentials in the Remote Desktop User group, you may see one of the following error messages:
--- "This user is not a member of the Remote Desktop User group"-- "You must be granted permissions to sign in through Remote Desktop Services"-
-To fix this error, add the user to the Remote Desktop User group:
-
-1. Open the Azure portal.
-2. Select the virtual machine you saw the error message on.
-3. Select **Run a command**.
-4. Run the following command with `<username>` replaced by the name of the user you want to add:
-
- ```cmd
- net localgroup "Remote Desktop Users" <username> /add
- ```
virtual-machines Copy Files To Vm Using Scp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/copy-files-to-vm-using-scp.md
The `-r` flag instructs SCP to recursively copy the files and directories from t
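For illustration, a minimal recursive copy with the `-r` flag might look like the following; the IP address, username, and paths are placeholders:

```bash
# Recursively copy the local ./logs directory to the VM's home directory.
# 10.1.0.4 and azureuser are placeholder values.
scp -r ./logs azureuser@10.1.0.4:/home/azureuser/logs
```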
## Next steps
-* [Manage users, SSH, and check or repair disks on Azure Linux VMs using the 'VMAccess' Extension](/azure/virtual-machines/extensions/vmaccess)
+* [Manage users, SSH, and check or repair disks on Azure Linux VMs using the 'VMAccess' Extension](./extensions/vmaccess.md)
virtual-machines Hbv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series.md
HBv2-series VMs feature 200 Gb/sec Mellanox HDR InfiniBand. These VMs are connec
| Size | vCPU | Processor | Memory (GiB) | Memory bandwidth GB/s | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max Ethernet vNICs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_HB120rs_v2 | 120 | AMD EPYC 7V12 | 456 | 350 | 2.45 | 3.1 | 3.3 | 200 | All | 480 + 960 | 8 | 8 |
+| Standard_HB120-96rs_v2 | 96 | AMD EPYC 7V12 | 456 | 350 | 2.45 | 3.1 | 3.3 | 200 | All | 480 + 960 | 8 | 8 |
+| Standard_HB120-64rs_v2 | 64 | AMD EPYC 7V12 | 456 | 350 | 2.45 | 3.1 | 3.3 | 200 | All | 480 + 960 | 8 | 8 |
+| Standard_HB120-32rs_v2 | 32 | AMD EPYC 7V12 | 456 | 350 | 2.45 | 3.1 | 3.3 | 200 | All | 480 + 960 | 8 | 8 |
+| Standard_HB120-16rs_v2 | 16 | AMD EPYC 7V12 | 456 | 350 | 2.45 | 3.1 | 3.3 | 200 | All | 480 + 960 | 8 | 8 |
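As an illustration of picking one of these sizes at deployment time, a hedged Azure CLI sketch (the resource group, VM name, and image are placeholders; the size value comes from the table above):

```azurecli
# Create a constrained-core HBv2 VM; all names and the image URN are placeholders.
az vm create \
    --resource-group myHpcResourceGroup \
    --name myHBv2vm \
    --size Standard_HB120-64rs_v2 \
    --image <marketplace-image-urn> \
    --admin-username azureuser \
    --generate-ssh-keys
```

The constrained-core sizes keep the memory, storage, and InfiniBand bandwidth of Standard_HB120rs_v2 while exposing fewer vCPUs, which can help with per-core database licensing.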
Learn more about the: - [Architecture and VM topology](./workloads/hpc/hbv2-series-overview.md)
virtual-machines Suse Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/suse-create-upload-vhd.md
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
exit ```
- For more information on the waagent.conf configuration options, see the [Linux agent configuration](/azure/virtual-machines/extensions/agent-linux#configuration) documentation.
+ For more information on the waagent.conf configuration options, see the [Linux agent configuration](../extensions/agent-linux.md#configuration) documentation.
If you want to mount, format, and create a swap partition, you can either: * Pass this configuration in as a cloud-init config every time you create a VM (a sketch of the agent-based alternative follows after this list).
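A hedged sketch of the agent-based route: the Azure Linux agent can format the resource disk and create swap on it through waagent.conf. The settings involved look like this; the swap size is a placeholder:

```
# Excerpt from /etc/waagent.conf: let the agent format the resource disk
# and create a swap file on it. The 2048 MB size is a placeholder.
ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048
```

Restart the agent (for example, `systemctl restart waagent`) for the change to take effect.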
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
## Next steps
-You're now ready to use your SUSE Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
+You're now ready to use your SUSE Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
virtual-machines Automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-plan-deployment.md
For generic SAP on Azure design considerations, visit [Introduction to an SAP ad
## Control Plane planning
-You can perform the deployment and configuration activities from either Azure Pipelines or by using the provided shell scripts directly from Azure hosted Linux virtual machines. This environment is referred to as the control plane. For setting up Azure DevOps for the deployment framework, see [Set up Azure DevOps for SDAF](/azure/virtual-machines/workloads/sap/automation-configure-control-plane.md).
+You can perform the deployment and configuration activities from either Azure Pipelines or by using the provided shell scripts directly from Azure hosted Linux virtual machines. This environment is referred to as the control plane. For setting up Azure DevOps for the deployment framework, see [Set up Azure DevOps for SDAF](./automation-configure-control-plane.md).
Before you design your control plane, consider the following questions:
If you want to [configure custom disk sizes](automation-configure-extra-disks.md
## Next steps > [!div class="nextstepaction"]
-> [About manual deployments of automation framework](automation-manual-deployment.md)
+> [About manual deployments of automation framework](automation-manual-deployment.md)
virtual-machines Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/deployment-checklist.md
This document should contain:
- The current inventory of SAP components and applications, and a target application inventory for Azure. - A responsibility assignment matrix (RACI) that defines the responsibilities and assignments of the parties involved. Start at a high level, and work to more granular levels throughout planning and the first deployments. - A high-level solution architecture. Best practices and example architectures from [Azure Architecture Center](/azure/architecture/reference-architectures/sap/sap-overview) should be consulted.-- A decision about which Azure regions to deploy to. See the [list of Azure regions](https://azure.microsoft.com/global-infrastructure/regions/), and list of [regions with availability zone support](/azure/reliability/availability-zones-service-support). To learn which services are available in each region, see [products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+- A decision about which Azure regions to deploy to. See the [list of Azure regions](https://azure.microsoft.com/global-infrastructure/regions/), and list of [regions with availability zone support](../../../reliability/availability-zones-service-support.md). To learn which services are available in each region, see [products available by region](https://azure.microsoft.com/global-infrastructure/services/).
- A networking architecture to connect from on-premises to Azure. Start to familiarize yourself with the [Azure enterprise scale landing zone](/azure/cloud-adoption-framework/ready/enterprise-scale/) concept. - Security principles for running high-impact business data in Azure. To learn about data security, start with the Azure security documentation. - Storage strategy to cover block devices (Managed Disk) and shared filesystems (such as Azure Files or Azure NetApp Files) that should be further refined to file-system sizes and layouts in the technical design document.
Further included in same technical document(s) should be:
- [IBM Db2 HADR](./high-availability-guide-rhel-ibm-db2-luw.md) - For disaster recovery across Azure regions, review the solutions offered by different DBMS vendors. Most of them support asynchronous replication or log shipping. - For the SAP application layer, determine whether you'll run your business regression test systems, which ideally are replicas of your production deployments, in the same Azure region or in your DR region. In the second case, you can target that business regression system as the DR target for your production deployments.
- - Look into Azure Site Recovery as a method for replicating the SAP application layer into the Azure DR region. For more information, see a [set-up disaster recovery for a multi-tier SAP NetWeaver app deployment](/azure/site-recovery/site-recovery-sap).
+ - Look into Azure Site Recovery as a method for replicating the SAP application layer into the Azure DR region. For more information, see [Set up disaster recovery for a multi-tier SAP NetWeaver app deployment](../../../site-recovery/site-recovery-sap.md).
- For projects required to remain in a single region for compliance reasons, consider a combined HADR configuration by using [Azure Availability Zones](./sap-ha-availability-zones.md#combined-high-availability-and-disaster-recovery-configuration). - An inventory of all SAP interfaces and the connected systems (SAP and non-SAP) - Design of foundation services. This design should include the following items, many of which are covered by the [landing zone accelerator for SAP](/azure/cloud-adoption-framework/scenarios/sap/):
Further included in same technical document(s) should be:
- Data reduction and data migration plan for migrating SAP data into Azure. For SAP NetWeaver systems, SAP has guidelines on how to limit the volume of large amounts of data. See [this SAP guide](https://wiki.scn.sap.com/wiki/download/attachments/247399467/DVM_%20Guide_7.2.pdf?version=1&modificationDate=1549365516000&api=v2) about data management in SAP ERP systems. Some of the content also applies to NetWeaver and S/4HANA systems in general. - An automated deployment approach. Many customers start with scripts, using a combination of PowerShell, CLI, Ansible and Terraform. Microsoft developed solutions for SAP deployment automation are:
- - [Azure Center for SAP solutions](/azure/center-sap-solutions/) ΓÇô Azure service to deploy and operate a SAP systemΓÇÖs infrastructure
+ - [Azure Center for SAP solutions](../../../center-sap-solutions/index.yml) – an Azure service to deploy and operate an SAP system's infrastructure
- [SAP on Azure Deployment Automation](./automation-deployment-framework.md), an open-source orchestration tool for deploying and maintaining SAP environments > [!NOTE]
We recommend that you set up and validate a full HADR solution and security desi
- **Compute / VM types** - Review the resources in SAP support notes, in the SAP HANA hardware directory, and in the SAP PAM again. Make sure to match supported VMs for Azure, supported OS releases for those VM types, and supported SAP and DBMS releases. - Validate again the sizing of your application and the infrastructure you deploy on Azure. If you're moving existing applications, you can often derive the necessary SAPS from the infrastructure you use and the [SAP benchmark webpage](https://www.sap.com/dmc/exp/2018-benchmark-directory/#/sd) and compare it to the SAPS numbers listed in [SAP note 1928533](https://launchpad.support.sap.com/#/notes/1928533). Also keep [this article on SAPS ratings](https://techcommunity.microsoft.com/t5/Running-SAP-Applications-on-the/SAPS-ratings-on-Azure-VMs-8211-where-to-look-and-where-you-can/ba-p/368208) in mind.
- - Evaluate and test the sizing of your Azure VMs for maximum storage and network throughput of the VM types you chose during the planning phase. Details of [VM performance limits](/azure/virtual-machines/sizes) are available, for storage itΓÇÖs important to consider the limit of max uncached disk throughput for sizing. Carefully consider sizing and temporary effects of [disk and VM level bursting](/azure/virtual-machines/disk-bursting).
+ - Evaluate and test the sizing of your Azure VMs for maximum storage and network throughput of the VM types you chose during the planning phase. Details of [VM performance limits](../../sizes.md) are available; for storage, it's important to consider the limit of max uncached disk throughput for sizing. Carefully consider sizing and temporary effects of [disk and VM level bursting](../../disk-bursting.md).
- Test and determine whether you want to create your own OS images for your VMs in Azure or whether you want to use an image from the Azure compute gallery (formerly known as shared image gallery). If you're using an image from the Azure compute gallery, make sure to use an image that reflects the support contract with your OS vendor. For some OS vendors, Azure Compute Gallery lets you bring your own license images. For other OS images, support is included in the price quoted by Azure. - Using your own OS images allows you to store required enterprise dependencies, such as security agents, hardening, and customizations, directly in the image. This way, they're deployed immediately with every VM. If you decide to create your own OS images, you can find documentation in these articles:
- - [Build a generalized image of a Windows VM deployed in Azure](/azure/virtual-machines/windows/capture-image-resource)
- - [Build a generalized image of a Linux VM deployed in Azure](/azure/virtual-machines/linux/capture-image)
+ - [Build a generalized image of a Windows VM deployed in Azure](../../windows/capture-image-resource.md)
+ - [Build a generalized image of a Linux VM deployed in Azure](../../linux/capture-image.md)
- If you use Linux images from the Azure compute gallery and add hardening as part of your deployment pipeline, you need to use the images suitable for SAP provided by the Linux vendors. - [Red Hat Enterprise Linux for SAP Offerings on Microsoft Azure FAQ](https://access.redhat.com/articles/5456301) - [SUSE public cloud information tracker - OS Images for SAP](https://pint.suse.com/?resource=images&csp=microsoft&search=sap) - [Oracle Linux](https://www.oracle.com/cloud/azure/interconnect/faq/)
- - Choosing an OS image determines the type of Azure VMΓÇÖs generation. Azure supports both [Hyper-V generation 1 and 2 VMs](/azure/virtual-machines/generation-2). Some VM families are available as [generation 2 only](/azure/virtual-machines/generation-2#generation-2-vm-sizes), some VM families are certified for SAP use as generation 2 only ([SAP note 1928533](https://launchpad.support.sap.com/#/notes/1928533)) even if Azure allows both generations. **It is recommended to use generation 2 VM for every VM of SAP landscape.**
+ - Choosing an OS image determines the Azure VM's generation type. Azure supports both [Hyper-V generation 1 and 2 VMs](../../generation-2.md). Some VM families are available as [generation 2 only](../../generation-2.md#generation-2-vm-sizes), and some VM families are certified for SAP use as generation 2 only ([SAP note 1928533](https://launchpad.support.sap.com/#/notes/1928533)) even if Azure allows both generations. **It's recommended to use generation 2 VMs for every VM in the SAP landscape.**
- **Storage** - Read the document [Azure storage types for SAP workload](./planning-guide-storage.md)
- - Use [Azure premium storage](/azure/virtual-machines/disks-types#premium-ssds), [premium storage v2](/azure/virtual-machines/disks-types#premium-ssd-v2) for all production grade SAP environments and when ensuring high SLA. For some DBMS, Azure NetApp Files can be used for [large parts of the overall storage requirements](./planning-guide-storage.md#azure-netapp-files-anf).
- - At a minimum, use [Azure standard SSD](/azure/virtual-machines/disks-types#standard-ssds) storage for VMs that represent SAP application layers and for deployment of DBMSs that aren't performance sensitive. Keep in mind different Azure storage types influence the [single VM availability SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines).
+ - Use [Azure premium storage](../../disks-types.md#premium-ssds) or [premium storage v2](../../disks-types.md#premium-ssd-v2) for all production-grade SAP environments and whenever a high SLA must be ensured. For some DBMSs, Azure NetApp Files can be used for [large parts of the overall storage requirements](./planning-guide-storage.md#azure-netapp-files-anf).
+ - At a minimum, use [Azure standard SSD](../../disks-types.md#standard-ssds) storage for VMs that represent SAP application layers and for deployment of DBMSs that aren't performance sensitive. Keep in mind different Azure storage types influence the [single VM availability SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines).
- In general, we don't recommend the use of [Azure standard HDD](./planning-guide-storage.md#azure-standard-hdd-storage) disks for SAP. - For the different DBMS types, check the [generic SAP-related DBMS documentation](./dbms_guide_general.md) and DBMS-specific documentation that the first document points to. Use disk striping over multiple disks with premium storage (v1 or v2) for database data and log area. Verify that LVM disk striping is active with the correct stripe size by using the command 'lvs -a -o+lv_layout,lv_role,stripes,stripe_size,devices' on Linux; on Windows, check the Storage Spaces properties. - For optimal storage configuration with SAP HANA, see [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md).
We recommend that you set up and validate a full HADR solution and security desi
- **Networking** - Test and evaluate your virtual network infrastructure and the distribution of your SAP applications across or within the different Azure virtual networks. - Evaluate the hub-and-spoke or virtual WAN virtual network architecture approach with discrete virtual network(s) spokes for SAP workload. For smaller scale, micro-segmentation approach within a single Azure virtual network. Base this evaluation on:
- - Costs of data exchange [between peered Azure virtual networks](/azure/virtual-network/virtual-network-peering-overview)
+ - Costs of data exchange [between peered Azure virtual networks](../../../virtual-network/virtual-network-peering-overview.md)
- Advantages of a fast disconnection of the peering between Azure virtual networks as opposed to changing the network security group to isolate a subnet within a virtual network. This evaluation is for cases when applications or VMs hosted in a subnet of the virtual network became a security risk. - Central logging and auditing of network traffic between on-premises, the outside world, and the virtual datacenter you built in Azure. - Evaluate and test the data path between the SAP application layer and the SAP DBMS layer. - Placement of [Azure network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) in the communication path between the SAP application and the DBMS layer of SAP systems running the SAP kernel isn't supported. - Placement of the SAP application layer and SAP DBMS in different Azure virtual networks that aren't peered isn't supported.
- - You can use [application security group and network security group rules](/azure//virtual-network/network-security-groups-overview) to secure communication paths to and between the SAP application layer and the SAP DBMS layer.
- - Make sure that [accelerated networking](/azure/virtual-network/accelerated-networking-overview) is enabled on every VM used for SAP.
+ - You can use [application security group and network security group rules](../../../virtual-network/network-security-groups-overview.md) to secure communication paths to and between the SAP application layer and the SAP DBMS layer.
+ - Make sure that [accelerated networking](../../../virtual-network/accelerated-networking-overview.md) is enabled on every VM used for SAP.
- Test and evaluate the network latency between the SAP application layer VMs and DBMS VMs according to SAP notes [500235](https://launchpad.support.sap.com/#/notes/500235) and [1100926](https://launchpad.support.sap.com/#/notes/1100926). In addition to SAP's niping, you can use tools such as [sockperf](https://github.com/Mellanox/sockperf) or [ethr](https://github.com/microsoft/ethr) for TCP latency measurement. Evaluate the results against the network latency guidance in [SAP note 1100926](https://launchpad.support.sap.com/#/notes/1100926). The network latency should be in the moderate or good range. - Optimize network throughput on high vCPU VMs, typically used for database servers. This is particularly important for HANA scale-out and any large SAP system. Follow the recommendations in [this article](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/optimizing-network-throughput-on-azure-m-series-vms/ba-p/3581129) for optimization. - If deploying with availability sets and latency measurement values are not meeting SAP requirements in [SAP note 1100926](https://launchpad.support.sap.com/#/notes/1100926), consider the guidance in the article [proximity placement groups](./sap-proximity-placement-scenarios.md) to get optimal network latency. Don't use proximity placement groups for zonal or cross-zonal deployment patterns. - Verify correct availability, routing and secure access from the SAP landscape to any needed Internet endpoint, such as OS patch repositories, deployment tooling or service endpoint. Similarly, if your SAP environment provides a publicly accessible service such as SAP Fiori or SAProuter, verify it is reachable and secured. - **High availability and disaster recovery deployments**
- - Always use standard load balancer for clustered environments. Basic load balancer will be [retired](/azure/load-balancer/skus).
- - If you deploy the SAP application layer without defining a specific availability zone, make sure that all VMs that run SAP application instances of a single SAP system are deployed in an [availability set](/azure/virtual-machines/availability-set-overview).
+ - Always use standard load balancer for clustered environments. Basic load balancer will be [retired](../../../load-balancer/skus.md).
+ - If you deploy the SAP application layer without defining a specific availability zone, make sure that all VMs that run SAP application instances of a single SAP system are deployed in an [availability set](../../availability-set-overview.md).
- When you protect SAP Central Services and the DBMS layer for high availability by using passive replication, place the two nodes for SAP Central Services in one separate availability set and the two DBMS nodes in another availability set. Do not mix application VM roles inside an availability set. - If you deploy into [availability zones](./sap-ha-availability-zones.md), you can't combine them with availability sets. But you do need to make sure you deploy the active and passive central services nodes into two different availability zones. Use two availability zones that have the lowest latency between them. - If you're using Azure Load Balancer together with Linux guest operating systems, check that the Linux network parameter net.ipv4.tcp_timestamps is set to 0. This recommendation conflicts with recommendations in older versions of [SAP note 2382421](https://launchpad.support.sap.com/#/notes/2382421). The SAP note is now updated to state that this parameter needs to be set to 0 to work with Azure load balancers.
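As a sketch of checking and persisting the `net.ipv4.tcp_timestamps` setting on a Linux VM (the drop-in file name under `/etc/sysctl.d` is an assumption):

```bash
# Check the current value (must be 0 behind Azure Load Balancer).
sysctl net.ipv4.tcp_timestamps

# Persist the recommended value; the file name 95-sap-lb.conf is a placeholder.
echo 'net.ipv4.tcp_timestamps = 0' | sudo tee /etc/sysctl.d/95-sap-lb.conf
sudo sysctl --system
```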
We recommend that you set up and validate a full HADR solution and security desi
- **Security checks** - Test the validity of your Azure role-based access control (Azure RBAC) architecture. Segregation of duties requires separating and limiting the access and permissions of different teams. For example, SAP Basis team members should be able to deploy VMs and assign disks from Azure Storage into a given Azure virtual machine. But the SAP Basis team shouldn't be able to create its own virtual networks or change the settings of existing virtual networks. Members of the network team shouldn't be able to deploy VMs into virtual networks in which SAP application and DBMS VMs are running. Nor should members of this team be able to change attributes of VMs or even delete VMs or disks.
- - Verify that [network security group and ASG rules](/azure/virtual-network/network-security-groups-overview) work as expected and shield the protected resources.
+ - Verify that [network security group and ASG rules](../../../virtual-network/network-security-groups-overview.md) work as expected and shield the protected resources.
- Make sure that all resources that need to be encrypted are encrypted. Define and implement processes to back up certificates, store and access those certificates, and restore the encrypted entities.
- - For storage encryption, server-side encryption with platform managed key (SSE-PMK) is enabled for every storage service used for SAP in Azure by default, including managed disks, Azure Files and Azure NetApp Files. [Key management](/azure/virtual-machines/disk-encryption) with customer managed keys can be considered, if required for customer owned key rotation.
- - [Host based server-side encryption](/azure/virtual-machines/disk-encryption#encryption-at-hostend-to-end-encryption-for-your-vm-data) should not be enabled for performance reasons on M-series family Linux VMs.
- - Do not use Azure Disk Encryption on Linux with SAP as [OS images ΓÇÿfor SAPΓÇÖ](/azure/virtual-machines/linux/disk-encryption-overview#supported-operating-systems) are not supported.
+ - For storage encryption, server-side encryption with platform managed key (SSE-PMK) is enabled for every storage service used for SAP in Azure by default, including managed disks, Azure Files and Azure NetApp Files. [Key management](../../disk-encryption.md) with customer managed keys can be considered, if required for customer owned key rotation.
+ - [Host-based server-side encryption](../../disk-encryption.md#encryption-at-host---end-to-end-encryption-for-your-vm-data) shouldn't be enabled on M-series family Linux VMs, for performance reasons.
+ - Do not use Azure Disk Encryption on Linux with SAP, as [OS images 'for SAP'](../../linux/disk-encryption-overview.md#supported-operating-systems) are not supported.
- Database native encryption is deployed by most SAP on Azure customers to protect DBMS data and backups. Transparent Data Encryption (TDE) typically has no noticeable performance overhead, greatly increases security, and should be considered. Encryption key management and location must be secured. Database encryption occurs inside the VM and is independent of any storage encryption such as SSE. - **Performance testing**
Here are some common methods:
During the go-live phase, be sure to follow the playbooks you developed during earlier phases. Execute the steps that you tested and practiced. Don't accept last-minute changes in configurations and processes. Also complete these steps: -- Verify that Azure portal monitoring and other monitoring tools are working. Use Azure tools such as [Azure Monitor](/azure/azure-monitor/overview) for infrastructure monitoring. [Azure Monitor for SAP](/azure/virtual-machines/workloads/sap/monitor-sap-on-azure) for a combination of OS and application KPIs, allowing you to integrate all in one dashboard for visibility during and after go-live.
+- Verify that Azure portal monitoring and other monitoring tools are working. Use Azure tools such as [Azure Monitor](../../../azure-monitor/overview.md) for infrastructure monitoring, and [Azure Monitor for SAP](./monitor-sap-on-azure.md) for a combination of OS and application KPIs, letting you integrate everything in one dashboard for visibility during and after go-live.
For operating system key performance indicators: - [SAP note 1286256 - How-to: Using Windows LogMan tool to collect performance data on Windows Platforms](https://launchpad.support.sap.com/#/notes/1286256) - On Linux ensure sysstat tool is installed and capturing details at regular intervals
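For the Linux sysstat point above, a minimal sketch (the package manager varies by distribution; SLES is shown):

```bash
# Install and enable sysstat on SLES; use yum/dnf or apt on other distros.
sudo zypper -n install sysstat
sudo systemctl enable --now sysstat

# Spot-check CPU utilization: 5 samples at 1-second intervals.
sar -u 1 5
```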
After deploying infrastructure and applications and before each migration starts
- Make sure that only disks holding DBMS online logs are cached with None + Write Accelerator. - Other disks with premium storage use cache settings None or ReadOnly, depending on use. - Check the [configuration of LVM on Linux VMs in Azure](/azure/virtual-machines/linux/configure-lvm).
-10. [Azure managed disks](https://azure.microsoft.com/services/managed-disks/) or [Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-solution-architectures#sap-on-azure-solutions) NFS volumes are used exclusively for DBMS VMs.
-11. For Azure NetApp Files, [correct mount options are used](/azure/azure-netapp-files/performance-linux-mount-options) and volumes are sized appropriately on correct storage tier.
+10. [Azure managed disks](https://azure.microsoft.com/services/managed-disks/) or [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-solution-architectures.md#sap-on-azure-solutions) NFS volumes are used exclusively for DBMS VMs.
+11. For Azure NetApp Files, [correct mount options are used](../../../azure-netapp-files/performance-linux-mount-options.md) and volumes are sized appropriately on correct storage tier.
12. Using Azure services – Azure Files or Azure NetApp Files – for any SMB or NFS volumes or shares. NFS volumes or SMB shares are reachable by the respective SAP environment or individual SAP system(s). Network routing to the NFS/SMB server goes through private network address space, using private endpoint if needed.
-13. [Azure accelerated networking](/azure/virtual-network/accelerated-networking-overview) is enabled on every network interface for all SAP VMs.
+13. [Azure accelerated networking](../../../virtual-network/accelerated-networking-overview.md) is enabled on every network interface for all SAP VMs.
14. No [network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) are in the communication path between the SAP application and the DBMS layer of SAP systems based on SAP NetWeaver or ABAP Platform. 15. All load balancers for SAP highly available components use a standard load balancer with floating IP and HA ports enabled (a sketch follows below). 16. SAP application and DBMS VM(s) are placed in the same or different subnets of one virtual network, or in virtual networks that are directly peered.
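For checklist item 15, a hedged Azure CLI sketch of a standard internal load balancer rule with floating IP and HA ports enabled (all names are placeholders):

```azurecli
# HA ports = protocol All with front-end and back-end port 0; floating IP on.
az network lb rule create \
    --resource-group myResourceGroup \
    --lb-name mySapIlb \
    --name myHaPortsRule \
    --protocol All \
    --frontend-port 0 \
    --backend-port 0 \
    --frontend-ip-name myFrontendIp \
    --backend-pool-name myBackendPool \
    --floating-ip true \
    --idle-timeout 30
```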
After deploying infrastructure and applications and before each migration starts
Several of the checks above can be run in an automated way with the [SAP on Azure Quality Check Tool](https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/main/QualityCheck), an open-source project. While the tool performs no automatic remediation of the issues it finds, it warns about configurations that go against Microsoft recommendations. > [!TIP]
-> Same [quality checks and additional insights](/azure/center-sap-solutions/get-quality-checks-insights) are executed regularly when SAP systems are deployed or registered with [Azure Center for SAP solution](/azure/center-sap-solutions/) as well and are part of the service.
+> The same [quality checks and additional insights](../../../center-sap-solutions/get-quality-checks-insights.md) are executed regularly when SAP systems are deployed or registered with [Azure Center for SAP solutions](../../../center-sap-solutions/index.yml), and are part of the service.
Further tools that allow easier deployment checks, document findings, plan next remediation steps, and generally optimize your SAP on Azure landscape are:

- [Azure Well-Architected Framework review](/assessments/?id=azure-architecture-review&mode=pre-assessment) - An assessment of your workload focusing on the five main pillars of reliability, security, cost optimization, operational excellence, and performance efficiency. It supports SAP workloads, and running a review at the start and after every project phase is recommended.
See these articles:
> [!div class="checklist"]
> * [Azure planning and implementation for SAP NetWeaver](./planning-guide.md)
> * [Considerations for Azure Virtual Machines DBMS deployment for SAP workloads](./dbms_guide_general.md)
-> * [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)
+> * [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)
virtual-machines Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/rise-integration.md
SAP RISE/ECS exposes the communication ports for these applications to use but h
## Single Sign-On for SAP

Single Sign-On (SSO) is configured for many SAP environments. With SAP workloads running in ECS/RISE, identical setup steps can be followed for SSO against Azure Active Directory (AAD). The integration steps with AAD-based SSO are available for typical ECS/RISE managed workloads:
-- [Tutorial: Azure Active Directory Single sign-on (SSO) integration with SAP NetWeaver](/azure/active-directory/saas-apps/sap-netweaver-tutorial)
-- [Tutorial: Azure Active Directory single sign-on (SSO) integration with SAP Fiori](/azure/active-directory/saas-apps/sap-fiori-tutorial)
-- [Tutorial: Azure Active Directory integration with SAP HANA](/azure/active-directory/saas-apps/saphana-tutorial)
+- [Tutorial: Azure Active Directory Single sign-on (SSO) integration with SAP NetWeaver](../../../active-directory/saas-apps/sap-netweaver-tutorial.md)
+- [Tutorial: Azure Active Directory single sign-on (SSO) integration with SAP Fiori](../../../active-directory/saas-apps/sap-fiori-tutorial.md)
+- [Tutorial: Azure Active Directory integration with SAP HANA](../../../active-directory/saas-apps/saphana-tutorial.md)
| SSO method | Identity Provider | Typical use case | Implementation |
| :--- | :---: | :--- | :--- |
Check out the documentation:
- [SAP workloads on Azure: planning and deployment checklist](./sap-deployment-checklist.md)
- [Virtual network peering](../../../virtual-network/virtual-network-peering-overview.md)
-- [SAP Data Integration Using Azure Data Factory](https://github.com/Azure/Azure-DataFactory/blob/main/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf)
+- [SAP Data Integration Using Azure Data Factory](https://github.com/Azure/Azure-DataFactory/blob/main/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf)
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
The peering is successfully established after you see **Connected** in the **Pee
For more information about using your own DNS for name resolution, see [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
-For more information about Azure DNS, see [What is Azure DNS?](/azure/dns/dns-overview).
+For more information about Azure DNS, see [What is Azure DNS?](../dns/dns-overview.md).
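For reference, a minimal sketch of the cross-subscription peering step this article walks through; subscription IDs, resource groups, and virtual network names are placeholders, and an equivalent peering must also be created from the second virtual network back to the first:

```bash
# Placeholders: <subscription-id-1>, <subscription-id-2>, rg1/rg2, vnet1/vnet2.
az account set --subscription "<subscription-id-1>"

az network vnet peering create \
  --name vnet1-to-vnet2 \
  --resource-group rg1 \
  --vnet-name vnet1 \
  --remote-vnet "/subscriptions/<subscription-id-2>/resourceGroups/rg2/providers/Microsoft.Network/virtualNetworks/vnet2" \
  --allow-vnet-access
```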
## Next steps

<!-- Add a context sentence for the following links -->
virtual-network Deploy Container Networking Docker Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking-docker-linux.md
In this article, you learned how to install the Azure CNI plugin and create a te
For more information about Azure container networking and Azure Kubernetes service, see:

-- [What is Azure Kubernetes Service?](/azure/aks/intro-kubernetes)
+- [What is Azure Kubernetes Service?](../aks/intro-kubernetes.md)
- [Microsoft Azure Container Networking](https://github.com/Azure/azure-container-networking)
- [Azure CNI plugin releases](https://github.com/Azure/azure-container-networking/releases)
-- [Deploy the Azure Virtual Network container network interface plug-in](deploy-container-networking.md)
+- [Deploy the Azure Virtual Network container network interface plug-in](deploy-container-networking.md)
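A hedged install sketch for the plugin on a Linux VM. The release version and asset file name are assumptions, so confirm them on the releases page linked above; the binary and config locations follow standard CNI conventions:

```bash
# VERSION and the asset name are assumptions; check the releases page.
VERSION="v1.4.39"
curl -LO "https://github.com/Azure/azure-container-networking/releases/download/${VERSION}/azure-vnet-cni-linux-amd64-${VERSION}.tgz"

# Standard CNI locations: binaries in /opt/cni/bin, network config in /etc/cni/net.d.
sudo mkdir -p /opt/cni/bin /etc/cni/net.d
sudo tar -xzf "azure-vnet-cni-linux-amd64-${VERSION}.tgz" -C /opt/cni/bin
```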
virtual-network Deploy Container Networking Docker Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking-docker-windows.md
In this article, you learned how to install the Azure CNI plugin and create a te
For more information about Azure container networking and Azure Kubernetes service, see:

-- [What is Azure Kubernetes Service?](/azure/aks/intro-kubernetes)
+- [What is Azure Kubernetes Service?](../aks/intro-kubernetes.md)
- [Microsoft Azure Container Networking](https://github.com/Azure/azure-container-networking)
- [Azure CNI plugin releases](https://github.com/Azure/azure-container-networking/releases)
-- [Deploy the Azure Virtual Network container network interface plug-in](deploy-container-networking.md)
+- [Deploy the Azure Virtual Network container network interface plug-in](deploy-container-networking.md)
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
NAT gateway selects a port at random out of the available inventory of ports to
| Flow | Source tuple | Source tuple after SNAT | Destination tuple |
|:--:|:--:|:--:|:--:|
-| 4 | 10.0.0.1: 4285 | 65.52.1.1: **1234** | 23.53.254.142: 80 |
+| 4 | 10.0.0.1: 4285 | 65.52.1.1: **1234** | 23.53.254.143: 80 |
A NAT gateway will translate flow 4 to a SNAT port that may already be in use for other destinations as well (see flow 1 from the previous table). See [Scale NAT gateway](#scalability) for more discussion on correctly sizing your IP address provisioning.
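As a sizing sketch, attaching additional public IP addresses grows the SNAT port inventory, since each public IP contributes 64,512 SNAT ports; all names below are placeholders:

```bash
# Placeholder names. Each Standard public IP adds 64,512 SNAT ports to the inventory.
az network public-ip create -g myRG -n natPip1 --sku Standard
az network public-ip create -g myRG -n natPip2 --sku Standard

az network nat gateway create \
  --resource-group myRG \
  --name myNatGateway \
  --public-ip-addresses natPip1 natPip2 \
  --idle-timeout 4
```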
The following table provides information about when a TCP port becomes available
| Timer | Description | Value |
|---|---|---|
| TCP FIN | After a connection is closed by a TCP FIN packet, a 65-second timer is activated that holds down the SNAT port. The SNAT port will be available for reuse after the timer ends. | 65 seconds |
-| TCP RST | After a connection is closed by a TCP RST packet (reset), a 20-second timer is activated that holds down the SNAT port. When the timer ends, the port is available for reuse. | 20 seconds |
+| TCP RST | After a connection is closed by a TCP RST packet (reset), a 16-second timer is activated that holds down the SNAT port. When the timer ends, the port is available for reuse. | 16 seconds |
| TCP half open | During connection establishment where one connection endpoint is waiting for acknowledgment from the other endpoint, a 25-second timer is activated. If no traffic is detected, the connection will close. Once the connection has closed, the source port is available for reuse to the same destination endpoint. | 25 seconds |

For UDP traffic, after a connection has closed, the port will be in hold down for 65 seconds before it's available for reuse.
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP
* NAT gateway can't be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet, but will only be able to direct outbound traffic with an IPv4 address.
-* NAT gateway can be used to provide outbound connectivity in a hub and spoke model when associated with Azure Firewall. NAT gateway can be associated to an Azure Firewall subnet in a hub virtual network and provide outbound connectivity from spoke virtual networks peered to the hub. To learn more, see [Azure Firewall integration with NAT gateway](/azure/firewall/integrate-with-nat-gateway).
+* NAT gateway can be used to provide outbound connectivity in a hub and spoke model when associated with Azure Firewall. NAT gateway can be associated to an Azure Firewall subnet in a hub virtual network and provide outbound connectivity from spoke virtual networks peered to the hub. To learn more, see [Azure Firewall integration with NAT gateway](../../firewall/integrate-with-nat-gateway.md).
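A minimal sketch of that association, assuming an existing hub virtual network with an AzureFirewallSubnet and an existing NAT gateway; all names are placeholders:

```bash
# Placeholders: hubRG, hubVnet, myNatGateway.
az network vnet subnet update \
  --resource-group hubRG \
  --vnet-name hubVnet \
  --name AzureFirewallSubnet \
  --nat-gateway myNatGateway
```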
### Availability zones
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
* [Learn module: Introduction to Azure Virtual Network NAT](/training/modules/intro-to-azure-virtual-network-nat).
-* To learn more about architecture options for Azure Virtual Network NAT, see [Azure Well-Architected Framework review of an Azure NAT gateway](/azure/architecture/networking/guide/well-architected-network-address-translation-gateway).
+* To learn more about architecture options for Azure Virtual Network NAT, see [Azure Well-Architected Framework review of an Azure NAT gateway](/azure/architecture/networking/guide/well-architected-network-address-translation-gateway).
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
You may experience outbound connectivity failure if your NAT gateway resource is
### Can't delete NAT gateway
-NAT gateway must be detached from all subnets within a virtual network before the resource can be removed or deleted. See [Remove NAT gateway from an existing subnet and delete the resource](/azure/virtual-network/nat-gateway/manage-nat-gateway?tabs=manage-nat-portal#remove-a-nat-gateway-from-an-existing-subnet-and-delete-the-resource) for step by step guidance.
+NAT gateway must be detached from all subnets within a virtual network before the resource can be removed or deleted. See [Remove NAT gateway from an existing subnet and delete the resource](./manage-nat-gateway.md?tabs=manage-nat-portal#remove-a-nat-gateway-from-an-existing-subnet-and-delete-the-resource) for step by step guidance.
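A hedged CLI sketch of the detach-then-delete sequence, with placeholder names; repeat the subnet update for every subnet the NAT gateway is attached to:

```bash
# Placeholders throughout. Detach from every subnet first...
az network vnet subnet update \
  --resource-group myRG \
  --vnet-name myVnet \
  --name mySubnet \
  --remove natGateway

# ...then delete the NAT gateway resource.
az network nat gateway delete --resource-group myRG --name myNatGateway
```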
## Add or remove subnet
NAT gateway is a standard SKU resource and can't be used with basic SKU resource
### Can't mismatch zones of public IP addresses and NAT gateway
-NAT gateway is a [zonal resource](/azure/virtual-network/nat-gateway/nat-availability-zones) and can either be designated to a specific zone or to 'no zone'. When NAT gateway is placed in 'no zone', Azure places the NAT gateway into a zone for you, but you don't have visibility into which zone the NAT gateway is located.
+NAT gateway is a [zonal resource](./nat-availability-zones.md) and can either be designated to a specific zone or to 'no zone'. When NAT gateway is placed in 'no zone', Azure places the NAT gateway into a zone for you, but you don't have visibility into which zone the NAT gateway is located.
NAT gateway can be used with public IP addresses designated to a specific zone, no zone, or all zones (zone-redundant), depending on its own availability zone configuration. Follow the guidance below:
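For example, one way to keep zones aligned is to pin both resources to the same zone at creation time; this is a sketch with placeholder names and an arbitrary zone:

```bash
# Placeholder names and zone; both resources are pinned to zone 1.
az network public-ip create -g myRG -n natPipZone1 --sku Standard --zone 1

az network nat gateway create \
  --resource-group myRG \
  --name myZonalNatGateway \
  --zone 1 \
  --public-ip-addresses natPipZone1
```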
To learn more about NAT gateway, see:
* [NAT gateway resource](nat-gateway-resource.md)
-* [Manage NAT gateway](/azure/virtual-network/nat-gateway/manage-nat-gateway)
+* [Manage NAT gateway](./manage-nat-gateway.md)
-* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
+* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
virtual-network Virtual Network For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-for-azure-services.md
Deploying services within a virtual network provides the following capabilities:
|-|-|-|
| Compute | Virtual machines: [Linux](/previous-versions/azure/virtual-machines/linux/infrastructure-example?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Windows](/previous-versions/azure/virtual-machines/windows/infrastructure-example?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-existing-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Cloud Service](/previous-versions/azure/reference/jj156091(v=azure.100)): Virtual network (classic) only<br/> [Azure Batch](../batch/nodes-and-pools.md?toc=%2fazure%2fvirtual-network%2ftoc.json#virtual-network-vnet-and-firewall-configuration)| No <br/> No <br/> No <br/> No<sup>2</sup> |
| Network | [Application Gateway - WAF](../application-gateway/application-gateway-ilb-arm.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Bastion](../bastion/bastion-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Firewall](../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Azure Route Server](../route-server/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[ExpressRoute Gateway](../expressroute/expressroute-about-virtual-network-gateways.md)<br/>[Network Virtual Appliances](/windows-server/networking/sdn/manage/use-network-virtual-appliances-on-a-vn)<br/>[VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json) | Yes <br/> Yes <br/> Yes <br/> Yes <br/> Yes <br/> No <br/> Yes |
-|Data|[RedisCache](../azure-cache-for-redis/cache-how-to-premium-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure SQL Managed Instance](/azure/azure-sql/managed-instance/connectivity-architecture-overview?toc=%2fazure%2fvirtual-network%2ftoc.json) </br> [Azure Database for MySQL - Flexible Server](/azure/mysql/flexible-server/concepts-networking-vnet ) </br> [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/concepts-networking#private-access-vnet-integration)| Yes <br/> Yes <br/> Yes </br> Yes |
+|Data|[RedisCache](../azure-cache-for-redis/cache-how-to-premium-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure SQL Managed Instance](/azure/azure-sql/managed-instance/connectivity-architecture-overview?toc=%2fazure%2fvirtual-network%2ftoc.json) </br> [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-networking-vnet.md) </br> [Azure Database for PostgreSQL - Flexible Server](../postgresql/flexible-server/concepts-networking.md#private-access-vnet-integration)| Yes <br/> Yes <br/> Yes </br> Yes |
|Analytics | [Azure HDInsight](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks?toc=%2fazure%2fvirtual-network%2ftoc.json) |No<sup>2</sup> <br/> No<sup>2</sup> <br/> |
| Identity | [Azure Active Directory Domain Services](../active-directory-domain-services/tutorial-create-instance.md?toc=%2fazure%2fvirtual-network%2ftoc.json) |No <br/> |
| Containers | [Azure Kubernetes Service (AKS)](../aks/concepts-network.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Container Instance (ACI)](https://www.aka.ms/acivnet)<br/>[Azure Container Service Engine](https://github.com/Azure/acs-engine) with Azure Virtual Network CNI [plug-in](https://github.com/Azure/acs-engine/tree/master/examples/vnet)<br/>[Azure Functions](../azure-functions/functions-networking-options.md#virtual-network-integration) |No<sup>2</sup><br/> Yes <br/> No <br/> Yes |
Deploying services within a virtual network provides the following capabilities:
| Virtual desktop infrastructure| [Azure Lab Services](../lab-services/how-to-connect-vnet-injection.md)<br/>| Yes <br/> |

<sup>1</sup> 'Dedicated' implies that only service specific resources can be deployed in this subnet and can't be combined with customer VM/VMSSs <br/>
-<sup>2</sup> It's recommended as a best practice to have these services in a dedicated subnet, but not a mandatory requirement imposed by the service.
+<sup>2</sup> It's recommended as a best practice to have these services in a dedicated subnet, but not a mandatory requirement imposed by the service.
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
description: Learn how to configure VPN clients for P2S configurations that use
Previously updated : 12/01/2022 Last updated : 01/10/2023
Before beginning, verify that you are on the correct article. The following tabl
>[!IMPORTANT]
>[!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
-## Generate VPN client configuration files
+## 1. Generate VPN client configuration files
All of the necessary configuration settings for the VPN clients are contained in a VPN client profile configuration zip file. You can generate client profile configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file.
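For example, a hedged CLI sketch of generating the package; gateway and resource group names are placeholders, and the command returns a URL from which you download the zip file:

```bash
# Placeholders: myRG, myVpnGateway. EAPTLS corresponds to certificate authentication.
az network vnet-gateway vpn-client generate \
  --resource-group myRG \
  --name myVpnGateway \
  --authentication-method EAPTLS
```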
The VPN client profile configuration files that you generate are specific to the
[!INCLUDE [Generate profile configuration files - Azure portal](../../includes/vpn-gateway-generate-profile-portal.md)]
-## Generate client certificates
+## 2. Generate client certificates
For certificate authentication, a client certificate must be installed on each client computer. The client certificate you want to use must be exported with the private key, and must contain all certificates in the certification path. Additionally, for some configurations, you'll also need to install root certificate information.
In many cases, you can install the client certificate directly on the client com
* For information about working with certificates, see [Point-to site: Generate certificates](vpn-gateway-certificates-point-to-site.md).
* To view an installed client certificate, open **Manage User Certificates**. The client certificate is installed in **Current User\Personal\Certificates**.
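As an alternative to the PowerShell flow in the linked article, a hedged openssl sketch of a self-signed root and client certificate chain; all file and subject names are illustrative, and you should check your gateway's requirements before relying on it:

```bash
# Illustrative names throughout; not the article's canonical PowerShell method.
# 1. Self-signed root certificate and key.
openssl req -x509 -new -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=P2SRootCert" -keyout root.key -out root.crt

# 2. Client key and certificate signing request.
openssl req -new -nodes -newkey rsa:2048 \
  -subj "/CN=P2SChildCert" -keyout client.key -out client.csr

# 3. Sign the client cert with the root; the client-auth EKU is commonly required.
openssl x509 -req -in client.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -days 365 -extfile <(printf "extendedKeyUsage=clientAuth") -out client.crt

# 4. Export as PKCS#12 with the private key and full chain, as the article requires.
openssl pkcs12 -export -in client.crt -inkey client.key -certfile root.crt -out client.pfx
```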
+## 3. Configure the VPN client
+ Next, configure the VPN client. Select from the following instructions:

* [IKEv2 and SSTP - native VPN client steps](#ike)