Updates from: 01/27/2022 02:08:32
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/synchronization.md
Objects and credentials in an Azure Active Directory Domain Services (Azure AD D
In a hybrid environment, objects and credentials from an on-premises AD DS domain can be synchronized to Azure AD using Azure AD Connect. Once those objects are successfully synchronized to Azure AD, the automatic background sync then makes those objects and credentials available to applications using the managed domain.
-If on-prem AD DS and Azure AD are configured for federated authentication using ADFS then there is no (current/valid) password hash available in Azure DS. Azure AD user accounts created before fed auth was implemented might have an old password hash but this likely doesn't match a hash of their on-prem password. Hence Azure AD DS won't be able to validate the users credentials.
+If on-premises AD DS and Azure AD are configured for federated authentication using ADFS without password hash sync, or if third-party identity protection products and Azure AD are configured for federated authentication without password hash sync, no (current/valid) password hash is available in Azure AD DS. Azure AD user accounts created before federated authentication was implemented might have an old password hash, but this likely doesn't match a hash of their on-premises password. Hence, Azure AD DS won't be able to validate a user's credentials.
The following diagram illustrates how synchronization works between Azure AD DS, Azure AD, and an optional on-premises AD DS environment:
## Synchronization from Azure AD to Azure AD DS

User accounts, group memberships, and credential hashes are synchronized one way from Azure AD to Azure AD DS. This synchronization process is automatic. You don't need to configure, monitor, or manage this synchronization process. The initial synchronization may take a few hours to a couple of days, depending on the number of objects in the Azure AD directory. After the initial synchronization is complete, changes that are made in Azure AD, such as password or attribute changes, are then automatically synchronized to Azure AD DS. When a user is created in Azure AD, they're not synchronized to Azure AD DS until they change their password in Azure AD. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Azure AD. The password hashes are needed to successfully authenticate a user in Azure AD DS.
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-sspr.md
In this tutorial you learn how to:
> * Set up authentication methods and registration options
> * Test the SSPR process as a user
+## Video tutorial
+
+You can also follow along in a related video: [How to enable and configure SSPR in Azure AD](https://www.youtube.com/embed/rA8TvhNcCvQ?azure-portal=true).
## Prerequisites

To finish this tutorial, you need the following resources and privileges:
active-directory Id Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/id-tokens.md
Previously updated : 12/28/2021 Last updated : 01/25/2022
The table below shows the claims that are in most ID tokens by default (except w
| `c_hash`| String |The code hash is included in ID tokens only when the ID token is issued with an OAuth 2.0 authorization code. It can be used to validate the authenticity of an authorization code. To understand how to do this validation, see the [OpenID Connect specification](https://openid.net/specs/openid-connect-core-1_0.html#HybridIDToken). |
|`at_hash`| String |The access token hash is included in ID tokens only when the ID token is issued from the `/authorize` endpoint with an OAuth 2.0 access token. It can be used to validate the authenticity of an access token. To understand how to do this validation, see the [OpenID Connect specification](https://openid.net/specs/openid-connect-core-1_0.html#HybridIDToken). This is not returned on ID tokens from the `/token` endpoint. |
|`aio` | Opaque String | An internal claim used by Azure AD to record data for token reuse. Should be ignored.|
-|`preferred_username` | String | The primary username that represents the user. It could be an email address, phone number, or a generic username without a specified format. Its value is mutable and might change over time. Since this value can be changed, it must not be used to make authorization decisions. The `profile` scope is required to receive this claim.|
-|`email` | String | The `email` claim is present by default for guest accounts that have an email address. Your app can request the email claim for managed users (those from the same tenant as the resource) using the `email` [optional claim](active-directory-optional-claims.md). On the v2.0 endpoint, your app can also request the `email` OpenID Connect scope - you don't need to request both the optional claim and the scope to get the claim. The email claim only supports addressable mail from the user's profile information. |
+|`preferred_username` | String |The primary username that represents the user. It could be an email address, phone number, or a generic username without a specified format. Its value is mutable and might change over time. Since it is mutable, this value must not be used to make authorization decisions. It can be used for username hints, however, and in human-readable UI as a username. The `profile` scope is required in order to receive this claim. Present only in v2.0 tokens.|
+|`email` | String | The `email` claim is present by default for guest accounts that have an email address. Your app can request the email claim for managed users (those from the same tenant as the resource) using the `email` [optional claim](active-directory-optional-claims.md). On the v2.0 endpoint, your app can also request the `email` OpenID Connect scope - you don't need to request both the optional claim and the scope to get the claim.|
|`name` | String | The `name` claim provides a human-readable value that identifies the subject of the token. The value isn't guaranteed to be unique, it can be changed, and it's designed to be used only for display purposes. The `profile` scope is required to receive this claim. |
|`nonce`| String | The nonce matches the parameter included in the original /authorize request to the IDP. If it does not match, your application should reject the token. |
|`oid` | String, a GUID | The immutable identifier for an object in the Microsoft identity system, in this case, a user account. This ID uniquely identifies the user across applications - two different applications signing in the same user will receive the same value in the `oid` claim. The Microsoft Graph will return this ID as the `id` property for a given user account. Because the `oid` allows multiple apps to correlate users, the `profile` scope is required to receive this claim. Note that if a single user exists in multiple tenants, the user will contain a different object ID in each tenant - they're considered different accounts, even though the user logs into each account with the same credentials. The `oid` claim is a GUID and cannot be reused. |
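The claims above are plain JSON fields in the token's payload segment. As a rough sketch (not from the article; standard library only, and deliberately skipping signature validation, which a real app must perform first), here is how one might inspect those claims and compute the expected `at_hash` per the OpenID Connect rule of base64url-encoding the left half of the token's hash:

```python
import base64
import hashlib
import json

def decode_claims(jwt: str) -> dict:
    """Decode a JWT's payload segment. Illustration only: a real app must
    verify the token's signature before trusting any claim."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def expected_at_hash(access_token: str) -> str:
    """Compute at_hash for an RS256-signed ID token, per OpenID Connect Core:
    SHA-256 the access token, keep the left half, base64url-encode, drop padding."""
    digest = hashlib.sha256(access_token.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest[: len(digest) // 2]).rstrip(b"=").decode("ascii")

# Example with a fabricated, unsigned token -- never build tokens like this in practice.
claims_json = json.dumps({"oid": "00000000-aaaa-bbbb-cccc-dddddddddddd",
                          "preferred_username": "user@contoso.com"})
payload = base64.urlsafe_b64encode(claims_json.encode()).rstrip(b"=").decode()
claims = decode_claims("fake-header." + payload + ".fake-signature")
print(claims["preferred_username"])
```

Per the table, `preferred_username` is suitable only for display and username hints, while `oid` is the immutable value to key authorization decisions on.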
active-directory Quickstart V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-android.md
Last updated 01/14/2022+ #Customer intent: As an application developer, I want to learn how Android native apps can call protected APIs that require login and access tokens using the Microsoft identity platform.
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
Last updated 01/11/2022+ #Customer intent: As an application developer, I want to know how to write an ASP.NET Core web API that uses the Microsoft identity platform to authorize API requests from clients.
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
Last updated 11/22/2021+ #Customer intent: As an application developer, I want to download and run a demo ASP.NET Core web app that can sign in users with personal Microsoft accounts (MSA) and work/school accounts from any Azure Active Directory instance, then access their data in Microsoft Graph on their behalf.
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
Last updated 11/22/2021+ #Customer intent: As an application developer, I want to know how to write an ASP.NET Core web app that can sign in personal accounts, as well as work and school accounts, from any Azure Active Directory instance.
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
Last updated 11/22/2021+ #Customer intent: As an application developer, I want to see a sample ASP.NET web app that can sign in Azure AD users.
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
Last updated 01/11/2022+ #Customer intent: As an application developer, I want to know how to set up OpenId Connect authentication in a web application that's built by using Node.js with Express.
active-directory Quickstart V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-ios.md
Last updated 01/14/2022+
active-directory Quickstart V2 Java Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-daemon.md
Last updated 01/10/2022+ #Customer intent: As an application developer, I want to learn how my Java app can get an access token and call an API that's protected by Microsoft identity platform endpoint using client credentials flow.
active-directory Quickstart V2 Java Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-webapp.md
Last updated 11/22/2021+
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
Last updated 01/10/2022+
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-console.md
Last updated 01/10/2022+ #Customer intent: As an application developer, I want to learn how my Node.js app can get an access token and call an API that is protected by a Microsoft identity platform endpoint using client credentials flow.
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
Last updated 01/14/2022+ #Customer intent: As an application developer, I want to learn how my Node.js Electron desktop application can get an access token and call an API that's protected by a Microsoft identity platform endpoint.
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
Last updated 11/22/2021+ #Customer intent: As an application developer, I want to know how to set up authentication in a web application built using Node.js and MSAL Node.
active-directory Quickstart V2 Nodejs Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp.md
Last updated 11/22/2021+ #Customer intent: As an application developer, I want to know how to set up OpenID Connect authentication in a web application built using Node.js with Express.
active-directory Quickstart V2 Python Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-daemon.md
Last updated 01/10/2022+ #Customer intent: As an application developer, I want to learn how my Python app can get an access token and call an API that's protected by the Microsoft identity platform using client credentials flow.
active-directory Quickstart V2 Python Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-webapp.md
Last updated 11/22/2021+
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-uwp.md
Last updated 01/14/2022-+ #Customer intent: As an application developer, I want to learn how my Universal Windows Platform (XAML) application can get an access token and call an API that's protected by the Microsoft identity platform.
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-windows-desktop.md
Title: "Quickstart: Sign in users and call Microsoft Graph in a Windows desktop app | Azure"
-description: In this quickstart, learn how a Windows desktop .NET (XAML) application can get an access token and call an API protected by the Microsoft identity platform.
+description: In this quickstart, learn how a Windows Presentation Foundation (WPF) application can get an access token and call an API protected by the Microsoft identity platform.
Last updated 01/14/2022
-#Customer intent: As an application developer, I want to learn how my Windows desktop .NET application can get an access token and call an API that's protected by the Microsoft identity platform.
+#Customer intent: As an application developer, I want to learn how my Windows Presentation Foundation (WPF) application can get an access token and call an API that's protected by the Microsoft identity platform.
# Quickstart: Acquire a token and call Microsoft Graph API from a Windows desktop app
-In this quickstart, you download and run a code sample that demonstrates how a Windows desktop .NET (WPF) application can sign in users and get an access token to call the Microsoft Graph API.
+In this quickstart, you download and run a code sample that demonstrates how a Windows Presentation Foundation (WPF) application can sign in users and get an access token to call the Microsoft Graph API.
See [How the sample works](#how-the-sample-works) for an illustration.
-> [!div renderon="docs"]
-> ## Prerequisites
->
-> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) with the [Universal Windows Platform development](/windows/uwp/get-started/get-set-up) workload installed
->
-> ## Register and download your quickstart app
-> You have two options to start your quickstart application:
-> * [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-your-code-sample)
-> * [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
->
-> ### Option 1: Register and auto configure your app and then download your code sample
->
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application with just one click.
->
-> ### Option 2: Register and manually configure your application and code sample
->
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `Win-App-calling-MsGraph`. Users of your app might see this name, and you can change it later.
-> 1. In the **Supported account types** section, select **Accounts in any organizational directory and personal Microsoft accounts (for example, Skype, Xbox, Outlook.com)**.
-> 1. Select **Register** to create the application.
-> 1. Under **Manage**, select **Authentication**.
-> 1. Select **Add a platform** > **Mobile and desktop applications**.
-> 1. In the **Redirect URIs** section, select `https://login.microsoftonline.com/common/oauth2/nativeclient` and in **Custom redirect URIs** add `ms-appx-web://microsoft.aad.brokerplugin/{client_id}` where `{client_id}` is the application (client) ID of your application (the same GUID that appears in the `msal{client_id}://auth` checkbox).
-> 1. Select **Configure**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 1: Configure your application in Azure portal
-> For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient` and `ms-appx-web://microsoft.aad.brokerplugin/{client_id}`.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
-#### Step 2: Download your Visual Studio project
+#### Step 1: Configure your application in Azure portal
+For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient` and `ms-appx-web://microsoft.aad.brokerplugin/{client_id}`.
+> [!div class="nextstepaction"]
+> [Make this change for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
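For reference, the same two redirect URIs can be expressed as the `publicClient` fragment of a Microsoft Graph application object. This is only a sketch of the request body one could PATCH to `/v1.0/applications/{object-id}` (the client ID below is a placeholder, not a value from the quickstart):

```python
import json

client_id = "00000000-0000-0000-0000-000000000000"  # placeholder application (client) ID

# Request body registering the two redirect URIs named in the step above.
body = {
    "publicClient": {
        "redirectUris": [
            "https://login.microsoftonline.com/common/oauth2/nativeclient",
            f"ms-appx-web://microsoft.aad.brokerplugin/{client_id}",
        ]
    }
}
print(json.dumps(body, indent=2))
```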
-> [!div renderon="docs"]
-> [Download the Visual Studio project](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2/archive/msal3x.zip)
+#### Step 2: Download your Visual Studio project
-> [!div class="sxs-lookup" renderon="portal"]
-> Run the project using Visual Studio 2019.
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+Run the project using Visual Studio 2019.
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2/archive/msal3x.zip)

[!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Your app is configured and ready to run
-> We have configured your project with values of your app's properties and it's ready to run.
+#### Step 3: Your app is configured and ready to run
+We have configured your project with values of your app's properties and it's ready to run.
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure your Visual Studio project
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, **C:\Azure-Samples**.
-> 1. Open the project in Visual Studio.
-> 1. Edit **App.Xaml.cs** and replace the values of the fields `ClientId` and `Tenant` with the following code:
->
-> ```csharp
-> private static string ClientId = "Enter_the_Application_Id_here";
-> private static string Tenant = "Enter_the_Tenant_Info_Here";
-> ```
->
-> Where:
-> - `Enter_the_Application_Id_here` - is the **Application (client) ID** for the application you registered.
->
-> To find the value of **Application (client) ID**, go to the app's **Overview** page in the Azure portal.
-> - `Enter_the_Tenant_Info_Here` - is set to one of the following options:
-> - If your application supports **Accounts in this organizational directory**, replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
-> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`
-> - If your application supports **Accounts in any organizational directory and personal Microsoft accounts**, replace this value with `common`.
->
-> To find the values of **Directory (tenant) ID** and **Supported account types**, go to the app's **Overview** page in the Azure portal.
->
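The `Enter_the_Tenant_Info_Here` options described in the steps above all end up in the authority URL the app signs in against. A minimal sketch of that mapping (the helper name is ours for illustration, not code from the sample):

```python
def authority_for(tenant_info: str) -> str:
    """Build the Azure AD authority URL from the Tenant value: a tenant ID or
    name (single tenant), 'organizations' (accounts in any organizational
    directory), or 'common' (organizational plus personal Microsoft accounts)."""
    return f"https://login.microsoftonline.com/{tenant_info}"

print(authority_for("common"))
print(authority_for("contoso.microsoft.com"))
```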
## More information

### How the sample works
active-directory Scenario Protected Web Api Verification Scope App Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles.md
public class TodoListController : Controller
/// The web API will accept only tokens 1) for users, 2) that have the `access_as_user` scope for
/// this API.
/// </summary>
- const string[] scopeRequiredByApi = new string[] { "access_as_user" };
+ static readonly string[] scopeRequiredByApi = new string[] { "access_as_user" };
// GET: api/values
[HttpGet]
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
Previously updated : 07/06/2021 Last updated : 01/14/2022
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to header-based applications using F5's BIG-IP Easy Button Guided Configuration.
-Configuring a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
+Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
* Improved Zero Trust governance through Azure AD pre-authentication and authorization
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD i
For this scenario, we have a legacy application using HTTP authorization headers to control access to protected content.
-Ideally, application access should be managed directly by Azure AD but being legacy it lacks any form of modern authentication protocol. Modernization would take considerable effort and time, introducing inevitable costs and risk of potential downtime. Instead, a BIG-IP deployed between the public internet and the internal application will be used to gate inbound access to the application.
+Being legacy, the application lacks any form of modern protocols to support a direct integration with Azure AD. Modernizing the app is also costly, requires careful planning, and introduces risk of potential impact.
-Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
+One option would be to consider [Azure AD Application Proxy](/azure/active-directory/app-proxy/application-proxy), to gate remote access to the application.
+
+Another approach is to use an F5 BIG-IP Application Delivery Controller, as it too provides the protocol transitioning required to bridge legacy applications to the modern ID control plane.
+
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application for both remote and local access.
## Scenario architecture
The SHA solution for this scenario is made up of:
SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
-![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-ldap/sp-initiated-flow.png)
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-header/sp-initiated-flow.png)
| Steps| Description |
| - |-|
With the **Easy Button**, admins no longer go back and forth between Azure AD an
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform. Registering with Azure AD establishes a trust relationship between your application and the IdP. BIG-IP must also be registered as a client in Azure AD, before the Easy Button wizard is trusted to access Microsoft Graph.
+Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform.
+
+The Easy Button client must also be registered as a client in Azure AD before it's allowed to establish a trust relationship between each SAML SP instance of a BIG-IP published application and the IdP.
1. Sign-in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrative rights
2. From the left navigation pane, select the **Azure Active Directory** service
Before a client or service can access Microsoft Graph, it must be trusted by the
## Configure Easy Button
-Next, step through the Easy Button configurations, and complete the trust to start publishing the internal application. Start by provisioning your BIG-IP with an X509 certificate that Azure AD can use to sign SAML tokens and claims issued for SHA enabled services.
+Next, step through the Easy Button configurations to federate and publish the internal application. Start by provisioning your BIG-IP with an X509 certificate that Azure AD can use to sign SAML tokens and claims issued for SHA enabled services.
1. From a browser, sign-in to the F5 BIG-IP management console
2. Navigate to **System > Certificate Management > Traffic Certificate Management SSL Certificate List > Import**
3. Select **PKCS 12 (IIS)** and import your certificate along with its private key

Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
+
![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-easy-button-ldap/configure-easy-button.png)

4. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**
The **Easy Button** template will display the sequence of steps required to publ
![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png)
-Configuration steps flow
### Configuration Properties
Enabling SSO allows users to access BIG-IP published services without having to
![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-header/sso-http-headers.png)

>[!NOTE]
-> The APM session variables defined within curly brackets are CASE sensitive. If you enter EmployeeID when the Azure AD attribute name is being sent as employeeid, it will cause an attribute mapping failure. In case of any issues, troubleshoot using the session analysis steps to check how the APM has variables defined.
+> APM session variables defined within curly brackets are CASE sensitive. If you enter EmployeeID when the Azure AD attribute name is being defined as employeeid, it will cause an attribute mapping failure.
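The pitfall in this note is ordinary case-sensitive matching. A hypothetical sketch (attribute names invented for illustration) of why `EmployeeID` misses when Azure AD sends `employeeid`:

```python
# Attribute name exactly as the IdP sent it in the SAML assertion.
session_attributes = {"employeeid": "E12345"}

def map_header(attrs: dict, name: str):
    """Look up an attribute value for a header mapping; the match is case
    sensitive, mirroring APM session variable behavior."""
    return attrs.get(name)

assert map_header(session_attributes, "employeeid") == "E12345"  # exact case: match
assert map_header(session_attributes, "EmployeeID") is None      # wrong case: mapping failure
```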
### Session Management
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to header-based applications that also require session augmentation through Lightweight Directory Access Protocol (LDAP) sourced attributes using F5's BIG-IP Easy Button guided configuration.
-Configuring BIG-IP published applications with Azure AD provides many benefits, including:
+Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
* Improved Zero Trust governance through Azure AD pre-authentication and authorization
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD i
## Scenario description
-For this scenario, we have a legacy application using HTTP authorization headers to control access to protected content. Azure AD pre-authentication provides the user identifier, while other attributes fetched from an LDAP connected Human Resource (HR) system provide fine grained application permissions.
+For this scenario, we have a legacy application using HTTP authorization headers to control access to protected content.
-Ideally, application access should be managed directly by Azure AD but being legacy it lacks any form of modern authentication protocol. Modernization would take considerable effort and time, introducing inevitable costs and risk of potential downtime.
+Being legacy, the application lacks any form of modern protocols to support a direct integration with Azure AD. Modernizing the app is also costly, requires careful planning, and introduces risk of potential impact.
-Instead, a BIG-IP deployed between the public internet and the internal application will be used to gate inbound access to the application.
+One option would be to consider [Azure AD Application Proxy](/azure/active-directory/app-proxy/application-proxy), to gate remote access to the application.
-Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
+Another approach is to use an F5 BIG-IP Application Delivery Controller, as it too provides the protocol transitioning required to bridge legacy applications to the modern ID control plane.
+
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application for both remote and local access.
## Scenario architecture The secure hybrid access solution for this scenario is made up of:
-**Application:** BIG-IP published service to be protected by and Azure AD SHA.
+**Application:** BIG-IP published service to be protected by Azure AD SHA.
-**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP APM. Trough SSO, Azure AD provides the BIG-IP with any required session attributes.
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP APM. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
**HR system:** Legacy employee database acting as source of truth for fine grained application permissions.
Prior BIG-IP experience isn't necessary, but you'll need:
## BIG-IP configuration methods
-There are many methods to deploy BIG-IP for this scenario including a template-driven Guided Configuration wizard, or the manual advanced configuration. This tutorial covers the latest Guided Configuration 16.1 offering an Easy Button template.
+There are many methods to deploy BIG-IP for this scenario, including a template-driven Guided Configuration wizard or the manual advanced configuration. This tutorial covers the Easy Button templates offered by Guided Configuration 16.1 and later.
-With the **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for secure hybrid access. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly, easily support identity federation, SSO, and Azure AD Multi-Factor Authentication (MFA), without management overhead of having to do on a per app basis.
+With the **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for secure hybrid access. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
For scenarios where the Guided Configuration lacks the flexibility to achieve a particular set of requirements, see the [Advanced deployment](#advanced-deployment) at the end of this tutorial.
For scenarios where the Guided Configuration lacks the flexibility to achieve a
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform by being registered with Azure AD. A BIG-IP must also be registered as a client in Azure AD, before the Easy Button wizard is trusted to access Microsoft Graph.
+Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform.
+
+The Easy Button client must also be registered as a client in Azure AD before it's allowed to establish a trust relationship between each SAML SP instance of a BIG-IP published application and the IdP.
1. Sign in to the [Azure AD portal](https://portal.azure.com) using an account with Application Administrator rights
Before a client or service can access Microsoft Graph, it must be trusted by the
## Configure Easy Button
-Next, step through the Easy Button configurations, and complete the trust to start publishing the internal application. Start by provisioning your BIG-IP with an X509 certificate that Azure AD can use to sign SAML tokens and claims issued for SHA enabled services.
+Next, step through the Easy Button configurations to federate and publish the EBS application. Start by provisioning your BIG-IP with an X509 certificate that Azure AD can use to sign SAML tokens and claims issued for SHA enabled services.
1. From a browser, sign in to the F5 BIG-IP management console
2. Navigate to **System > Certificate Management > Traffic Certificate Management SSL Certificate List > Import**
Enabling SSO allows users to access BIG-IP published services without having to
![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-ldap/sso-headers.png)

>[!NOTE]
->The APM session variables defined within curly brackets are CASE sensitive. For example, if our queried LDAP attribute was returned as eventroles, then the above variable definition would fail to populate the eventrole header value. In case of any issues, troubleshoot using the session analysis steps to check how the APM has variables defined.
+>APM session variables defined within curly brackets are CASE sensitive. If you enter EventRoles when the Azure AD attribute name is defined as eventroles, the attribute mapping will fail.
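The case-sensitive matching can be illustrated with a small sketch (the session variable name and its value below are hypothetical, mirroring the note's `eventroles` example):

```python
# Hypothetical APM-style session store; keys are case sensitive.
session_vars = {"session.saml.last.attr.name.eventroles": "admin,editor"}

def resolve(name):
    # Returns None when the name doesn't match exactly, just as a miscased
    # variable reference would fail to populate a header value.
    return session_vars.get(name)

print(resolve("session.saml.last.attr.name.eventroles"))  # admin,editor
print(resolve("session.saml.last.attr.name.EventRoles"))  # None
```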
### Session Management
active-directory Facebook Work Accounts Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/facebook-work-accounts-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://work.facebook.com`

> [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Facebook Work Accounts Client support team](mailto:WorkplaceSupportPartnerships@fb.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Engage the [Work Accounts team](https://www.workplace.com/help/work) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create Facebook Work Accounts test user
-In this section, you create a user called Britta Simon in Facebook Work Accounts. Work with [Facebook Work Accounts support team](mailto:WorkplaceSupportPartnerships@fb.com) to add the users in the Facebook Work Accounts platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Facebook Work Accounts. Work with the [Work Accounts team](https://www.workplace.com/help/work) to add the users in the Facebook Work Accounts platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Github Enterprise Managed User Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-enterprise-managed-user-tutorial.md
To configure the integration of GitHub Enterprise Managed User into Azure AD, yo
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. Navigate to **Enterprise Applications**.
1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **GitHub Enterprise Managed User** in the search box.
-1. Select **GitHub Enterprise Managed User** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. Type **GitHub Enterprise Managed User** in the search box.
+1. Select **GitHub Enterprise Managed User** from results panel and then click on the **Create** button. Wait a few seconds while the app is added to your tenant.
## Configure and test Azure AD SSO for GitHub Enterprise Managed User
In this section, you'll take the information provided from AAD above and enter t
## Next steps
-GitHub Enterprise Managed User **requires** all accounts to be created through automatic user provisioning, you can find more details [here](./github-enterprise-managed-user-provisioning-tutorial.md) on how to configure automatic user provisioning.
+GitHub Enterprise Managed User **requires** all accounts to be created through automatic user provisioning. You can find more details [here](./github-enterprise-managed-user-provisioning-tutorial.md) on how to configure automatic user provisioning.
active-directory Salesforce Sandbox Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/salesforce-sandbox-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you need to decide whi
* When assigning a user to Salesforce Sandbox, you must select a valid user role. The "Default Access" role does not work for provisioning.
+> [!NOTE]
+> By default, the Salesforce Sandbox app appends a string to the username and email of the users it provisions. Usernames and emails must be unique across all of Salesforce, so this prevents creating real user data in the sandbox, which would block these users from being provisioned to the production Salesforce environment.
+
> [!NOTE]
> This app imports custom roles from Salesforce Sandbox as part of the provisioning process, which the customer may want to select when assigning users.
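As a rough sketch of the suffixing behavior described in the note above (the actual suffix is chosen by the app; `uat` here is only a placeholder):

```python
def sandbox_identity(value, suffix="uat"):
    # Illustration only: append a sandbox suffix so the provisioned
    # username/email can't collide with the production Salesforce org.
    return f"{value}.{suffix}"

print(sandbox_identity("britta.simon@contoso.com"))  # britta.simon@contoso.com.uat
```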
For more information on how to read the Azure AD provisioning logs, see [Reporti
* [Managing user account provisioning for Enterprise Apps](tutorial-list.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [Configure Single Sign-on](./salesforce-sandbox-tutorial.md)
+* [Configure Single Sign-on](./salesforce-sandbox-tutorial.md)
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/credential-design.md
During the issuance flow, the user can be asked to input some self-attested info
```json
{
  "attestations": {
- "selfIssued": {
+ "selfIssued" :
+ {
"mapping": {
- "alias": {
- "claim": "name"
- }
- },
- },
- "validityInterval": 25920000,
- "vc": {
- "type": [
- "ProofOfNinjaNinja"
- ],
+ "firstName": { "claim": "firstName" },
+ "lastName": { "claim": "lastName" }
+ }
}
+ },
+ "validityInterval": 2592001,
+ "vc": {
+ "type": [ "VerifiedCredentialExpert" ]
}
+}
```
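To make the `mapping` section concrete, here's a sketch of how self-attested input could be projected into credential claims (assumed semantics for illustration, not the service's actual implementation):

```python
# The selfIssued mapping from the rules definition above.
mapping = {
    "firstName": {"claim": "firstName"},
    "lastName": {"claim": "lastName"},
}
user_input = {"firstName": "Britta", "lastName": "Simon"}

# Each output claim takes its value from the input claim named in "claim".
vc_claims = {name: user_input[spec["claim"]] for name, spec in mapping.items()}
print(vc_claims)

# validityInterval is expressed in seconds; 2592001 is just over 30 days.
print(2592001 // 86400)  # 30
```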
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Create a service principal for the Request Service API. The service API is the M
To create the service principal:
-1. Run the following PowerShell commands. These commands install and import the `AzureAD` module. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps#installation).
+1. Run the following PowerShell commands. These commands install and import the `Az` module. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps#installation).
```powershell
- if ((Get-Module -ListAvailable -Name "AzureAD") -eq $null) { Install-Module "AzureAD" -Scope CurrentUser } Import-Module AzureAD
+ if ((Get-Module -ListAvailable -Name "Az.Accounts") -eq $null) { Install-Module -Name "Az.Accounts" -Scope CurrentUser }
+ if ((Get-Module -ListAvailable -Name "Az.Resources") -eq $null) { Install-Module "Az.Resources" -Scope CurrentUser }
```

1. Run the following PowerShell command to connect to your Azure AD tenant. Replace \<*your-tenant-ID*> with your [Azure AD tenant ID](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).

```powershell
- Connect-AzureAD -TenantId <your-tenant-ID>
+ Connect-AzAccount -TenantId <your-tenant-ID>
```

1. Run the following command in the same PowerShell session. The `AppId` `bbb94529-53a3-4be5-a069-7eaf2712b826` refers to the Verifiable Credentials Microsoft service.

```powershell
- New-AzureADServicePrincipal -AppId "bbb94529-53a3-4be5-a069-7eaf2712b826" -DisplayName "Verifiable Credential Request Service"
+ New-AzADServicePrincipal -ApplicationId "bbb94529-53a3-4be5-a069-7eaf2712b826" -DisplayName "Verifiable Credential Request Service"
```

## Create a key vault
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/dapr.md
The AKS Dapr extension uses the Azure CLI to provision the Dapr control plane on
- **dapr-placement**: Used for actors only. Creates mapping tables that map actor instances to pods.
- **dapr-sentry**: Manages mTLS between services and acts as a certificate authority. For more information read the [security overview][dapr-security].
-Once Dapr is installed on your AKS cluster, your application services now have the Dapr sidecar running alongside them. This enables you to immediately start using the Dapr building block APIs. For a more in-depth overview of the building block APIs and how to best use them, please see the [Dapr building blocks overview][building-blocks-concepts].
+Once Dapr is installed on your AKS cluster, you can begin to develop using the Dapr building block APIs by [adding a few annotations][dapr-deployment-annotations] to your deployments. For a more in-depth overview of the building block APIs and how to best use them, please see the [Dapr building blocks overview][building-blocks-concepts].
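Concretely, "adding a few annotations" means annotating the deployment's pod template with the standard Dapr sidecar annotations. A minimal sketch that builds such a patch (the app id and port values are placeholders):

```python
import json

# Standard Dapr sidecar annotations on the pod template; values are illustrative.
annotations = {
    "dapr.io/enabled": "true",
    "dapr.io/app-id": "myapp",   # hypothetical app id
    "dapr.io/app-port": "8080",  # hypothetical port your app listens on
}
patch = {"spec": {"template": {"metadata": {"annotations": annotations}}}}
print(json.dumps(patch, indent=2))
```

A patch like this could be applied with `kubectl patch deployment <name> --patch "$(cat patch.json)"`, or the annotations added directly to the deployment manifest.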
> [!WARNING] > If you install Dapr through the AKS extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[dapr-configuration-options]: https://github.com/dapr/dapr/blob/master/charts/dapr/README.md#configuration
[sample-application]: https://github.com/dapr/quickstarts/tree/master/hello-kubernetes#step-2create-and-configure-a-state-store
[dapr-security]: https://docs.dapr.io/concepts/security-concept/
+[dapr-deployment-annotations]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-overview/#adding-dapr-to-a-kubernetes-deployment
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/upgrade-cluster.md
Last updated 12/17/2020
# Upgrade an Azure Kubernetes Service (AKS) cluster
-Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It is important you apply the latest security releases, or upgrade to get the latest features. This article shows you how to upgrade the master components or a single, default node pool in an AKS cluster.
+Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It's important you apply the latest security releases, or upgrade to get the latest features. This article shows you how to check for, configure, and apply upgrades to your AKS cluster.
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade]. ## Before you begin
-This article requires that you are running the Azure CLI version 2.0.65 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+This article requires Azure CLI version 2.0.65 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
> [!WARNING] > An AKS cluster upgrade triggers a cordon and drain of your nodes. If you have a low compute quota available, the upgrade may fail. For more information, see [increase quotas](../azure-portal/supportability/regional-quota-requests.md)
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --outpu
```

> [!NOTE]
-> When you upgrade a supported AKS cluster, Kubernetes minor versions cannot be skipped. All upgrades must be performed sequentially by major version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* is not allowed.
-> > Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* --> a supported *1.15.x* can be completed.
+> When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. All upgrades must be performed sequentially by minor version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* is not allowed.
+>
+> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available.
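The no-skipping rule can be sketched as a simple version check (a simplification that ignores the unsupported-version exception described above):

```python
def can_upgrade(current: str, target: str) -> bool:
    # Allowed: same minor version (patch upgrade) or the next minor only.
    cur = tuple(int(x) for x in current.split("."))
    tgt = tuple(int(x) for x in target.split("."))
    same_major = cur[0] == tgt[0]
    return same_major and cur[1] <= tgt[1] <= cur[1] + 1

print(can_upgrade("1.14.8", "1.15.10"))  # True
print(can_upgrade("1.14.8", "1.16.0"))   # False (skips a minor version)
```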
The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
```console
Name     ResourceGroup    MasterVersion    Upgrades
-------  ---------------  ---------------  ----------------
default  myResourceGroup  1.18.10          1.19.1, 1.19.3
```
-If no upgrade is available, you will get the message:
+The following output shows that no upgrades are available:
```console
ERROR: Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
```
+> [!IMPORTANT]
+> If no upgrade is available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. Attempting to upgrade a cluster to a newer Kubernetes version when `az aks get-upgrades` shows no upgrades available is not supported.
+ ## Customize node surge upgrade > [!Important]
ERROR: Table output unavailable. Use the --query option to specify an appropriat
> > If using Azure CNI, validate there are available IPs in the subnet as well to [satisfy IP requirements of Azure CNI](configure-azure-cni.md).
-By default, AKS configures upgrades to surge with one additional node. A default value of one for the max surge settings will enable AKS to minimize workload disruption by creating an additional node before the cordon/drain of existing applications to replace an older versioned node. The max surge value may be customized per node pool to enable a trade-off between upgrade speed and upgrade disruption. By increasing the max surge value, the upgrade process completes faster, but setting a large value for max surge may cause disruptions during the upgrade process.
+By default, AKS configures upgrades to surge with one extra node. A default value of one for the max surge settings will enable AKS to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older versioned node. The max surge value may be customized per node pool to enable a trade-off between upgrade speed and upgrade disruption. By increasing the max surge value, the upgrade process completes faster, but setting a large value for max surge may cause disruptions during the upgrade process.
For example, a max surge value of 100% provides the fastest possible upgrade process (doubling the node count) but also causes all nodes in the node pool to be drained simultaneously. You may wish to use a higher value such as this for testing environments. For production node pools, we recommend a max_surge setting of 33%.
-AKS accepts both integer values and a percentage value for max surge. An integer such as "5" indicates five additional nodes to surge. A value of "50%" indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of 1% and a maximum of 100%. A percent value is rounded up to the nearest node count. If the max surge value is lower than the current node count at the time of upgrade, the current node count is used for the max surge value.
+AKS accepts both integer values and a percentage value for max surge. An integer such as "5" indicates five extra nodes to surge. A value of "50%" indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of 1% and a maximum of 100%. A percent value is rounded up to the nearest node count. If the max surge value is higher than the current node count at the time of upgrade, the current node count is used for the max surge value.
During an upgrade, the max surge value can be a minimum of 1 and a maximum value equal to the number of nodes in your node pool. You can set larger values, but the maximum number of nodes used for max surge won't be higher than the number of nodes in the pool at the time of upgrade.
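The surge sizing rules above can be sketched as follows (assumed semantics, for illustration only):

```python
import math

def surge_nodes(max_surge, node_count):
    # Integer values mean that many extra nodes; "N%" is a share of the
    # pool, rounded up. The effective value is capped at the pool size.
    if isinstance(max_surge, str) and max_surge.endswith("%"):
        nodes = math.ceil(node_count * int(max_surge[:-1]) / 100)
    else:
        nodes = int(max_surge)
    return min(nodes, node_count)

print(surge_nodes("50%", 5))   # 3  (2.5 rounds up)
print(surge_nodes("100%", 4))  # 4  (entire pool surged at once)
print(surge_nodes(10, 3))      # 3  (capped at pool size)
```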
myAKSCluster eastus myResourceGroup 1.18.10 Succeeded
When you upgrade your cluster, the following Kubernetes events may occur on each node:

- Surge – Create surge node.
-- Drain – Pods are being evicted from the node. Each pod has a 30 minute timeout to complete the eviction.
+- Drain – Pods are being evicted from the node. Each pod has a 30-minute timeout to complete the eviction.
- Update – Update of a node has succeeded or failed.
- Delete – Deleted a surge node.
In addition to manually upgrading a cluster, you can set an auto-upgrade channel
| `none`| disables auto-upgrades and keeps the cluster at its current version of Kubernetes| Default setting if left unchanged|
| `patch`| automatically upgrade the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.17.9*|
| `stable`| automatically upgrade the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.18.6*.
-| `rapid`| automatically upgrade the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster is at a version of Kubernetes that is at an *N-2* minor version where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster first is upgraded to *1.18.6*, then is upgraded to *1.19.1*.
+| `rapid`| automatically upgrade the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster is at a version of Kubernetes that is at an *N-2* minor version where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster first is upgraded to *1.18.6*, then is upgraded to *1.19.1*.
| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes won't get the new images unless you do a node image upgrade. Turning on the node-image channel will automatically update your node images whenever a new version is available. |

> [!NOTE]
> Cluster auto-upgrade only updates to GA versions of Kubernetes and will not update to preview versions.
-Automatically upgrading a cluster follows the same process as manually upgrading a cluster. For more details, see [Upgrade an AKS cluster][upgrade-cluster].
+Automatically upgrading a cluster follows the same process as manually upgrading a cluster. For more information, see [Upgrade an AKS cluster][upgrade-cluster].
To set the auto-upgrade channel when creating a cluster, use the *auto-upgrade-channel* parameter, similar to the following example.
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrad
## Using Cluster Auto-Upgrade with Planned Maintenance
-If you are using Planned Maintenance as well as Auto-Upgrade, your upgrade will start during your specified maintenance window. For more details on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster (preview)][planned-maintenance].
+If you're using Planned Maintenance and Auto-Upgrade, your upgrade will start during your specified maintenance window. For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster (preview)][planned-maintenance].
## Special considerations for node pools that span multiple Availability Zones
-AKS uses best-effort zone balancing in node groups. During an Upgrade surge, zone(s) for the surge node(s) in virtual machine scale sets is unknown ahead of time. This can temporarily cause an unbalanced zone configuration during an upgrade. However, AKS deletes the surge node(s) once the upgrade has been completed and preserves the original zone balance. If you desire to keep your zones balanced during upgrade, increase the surge to a multiple of 3 nodes. Virtual machine scale sets will then balance your nodes across Availability Zones with best-effort zone balancing.
+AKS uses best-effort zone balancing in node groups. During an Upgrade surge, zone(s) for the surge node(s) in virtual machine scale sets is unknown ahead of time. This can temporarily cause an unbalanced zone configuration during an upgrade. However, AKS deletes the surge node(s) once the upgrade has been completed and preserves the original zone balance. If you desire to keep your zones balanced during upgrade, increase the surge to a multiple of three nodes. Virtual machine scale sets will then balance your nodes across Availability Zones with best-effort zone balancing.
-If you have PVCs backed by Azure LRS Disks, they will be bound to a particular zone and may fail to recover immediately if the surge node does not match the zone of the PVC. This could cause downtime on your application when the Upgrade operation continues to drain nodes but the PVs are bound to a zone. To handle this case and maintain high availability, configure a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) on your application. This allows Kubernetes to respect your availability requirements during Upgrade's drain operation.
+If you have PVCs backed by Azure LRS Disks, they'll be bound to a particular zone and may fail to recover immediately if the surge node doesn't match the zone of the PVC. This could cause downtime on your application when the Upgrade operation continues to drain nodes but the PVs are bound to a zone. To handle this case and maintain high availability, configure a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) on your application. This allows Kubernetes to respect your availability requirements during Upgrade's drain operation.
## Next steps
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[az-provider-register]: /cli/azure/provider#az_provider_register
[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
[upgrade-cluster]: #upgrade-an-aks-cluster
-[planned-maintenance]: planned-maintenance.md
+[planned-maintenance]: planned-maintenance.md
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
FIPS-enabled node pools are currently in preview.
You will need the *aks-preview* Azure CLI extension version *0.5.11* or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.

```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
You need the Azure CLI version 2.32.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].

FIPS-enabled node pools have the following limitations:
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/overview.md
Title: App Service Environment overview
description: Overview on the App Service Environment
Previously updated : 11/15/2021
Last updated : 01/26/2022
The App Service Environment has many use cases including:
- Network isolated application hosting - Multi-tier applications
-There are many networking features that enable apps in the multi-tenant App Service to reach network isolated resources or become network isolated themselves. These features are enabled at the application level. With an App Service Environment, there's no added configuration required for the apps to be in the virtual network. The apps are deployed into a network isolated environment that is already in a virtual network. If you really need a complete isolation story, you can also get your App Service Environment deployed onto dedicated hardware.
+There are many networking features that enable apps in the multi-tenant App Service to reach network isolated resources or become network isolated themselves. These features are enabled at the application level. With an App Service Environment, there's no added configuration required for the apps to be in the virtual network. The apps are deployed into a network-isolated environment that is already in a virtual network. If you really need a complete isolation story, you can also get your App Service Environment deployed onto dedicated hardware.
## Dedicated environment
The App Service Environment is a single tenant deployment of the Azure App Servi
Applications are hosted in App Service plans, which are created in an App Service Environment. The App Service plan is essentially a provisioning profile for an application host. As you scale your App Service plan out, you create more application hosts with all of the apps in that App Service plan on each host. A single App Service Environment v3 can have up to 200 total App Service plan instances across all of the App Service plans combined. A single Isolated v2 App Service plan can have up to 100 instances by itself.
+When you're deploying on dedicated hardware (hosts), you're limited in scaling across all App Service plans to the number of cores in this type of environment. An App Service Environment deployed on dedicated hosts has 132 vCores available. I1v2 uses 2 vCores, I2v2 uses 4 vCores, and I3v2 uses 8 vCores per instance.
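The paragraph above implies a hard scaling ceiling; a quick back-of-the-envelope calculation:

```python
# 132 vCores total on dedicated hosts; per-instance cost by Isolated v2 SKU.
vcores_per_instance = {"I1v2": 2, "I2v2": 4, "I3v2": 8}
total_vcores = 132

# Upper bound on instances if a single SKU were used exclusively.
max_instances = {sku: total_vcores // v for sku, v in vcores_per_instance.items()}
print(max_instances)  # {'I1v2': 66, 'I2v2': 33, 'I3v2': 16}
```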
+
## Virtual network support
-The App Service Environment feature is a deployment of the Azure App Service into a single subnet in a customer's virtual network. When you deploy an app into an App Service Environment, the app will be exposed on the inbound address assigned to the App Service Environment. If your App Service Environment is deployed with an internal virtual IP (VIP), then the inbound address for all of the apps will be an address in the App Service Environment subnet. If your App Service Environment is deployed with an external VIP, then the inbound address will be an internet addressable address and your apps will be in public DNS.
+The App Service Environment feature is a deployment of the Azure App Service into a single subnet in a customer's virtual network. When you deploy an app into an App Service Environment, the app will be exposed on the inbound address assigned to the App Service Environment. If your App Service Environment is deployed with an internal virtual IP (VIP), then the inbound address for all of the apps will be an address in the App Service Environment subnet. If your App Service Environment is deployed with an external VIP, then the inbound address will be an internet-addressable address and your apps will be in public DNS.
The number of addresses used by an App Service Environment v3 in its subnet will vary based on how many instances you have along with how much traffic. There are infrastructure roles that are automatically scaled depending on the number of App Service plans and the load. The recommended size for your App Service Environment v3 subnet is a `/24` CIDR block with 256 addresses in it as that can host an App Service Environment v3 scaled out to its limit.
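The `/24` recommendation follows from plain CIDR arithmetic:

```python
# A /24 CIDR block leaves 2^(32-24) = 256 addresses for the subnet,
# enough to host an App Service Environment v3 scaled out to its limit.
prefix = 24
addresses = 2 ** (32 - prefix)
print(addresses)  # 256
```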
-The apps in an App Service Environment do not need any features enabled to access resources in the same virtual network that the App Service Environment is in. If the App Service Environment virtual network is connected to another network, then the apps in the App Service Environment can access resources in those extended networks. Traffic can be blocked by user configuration on the network.
+The apps in an App Service Environment don't need any features enabled to access resources in the same virtual network that the App Service Environment is in. If the App Service Environment virtual network is connected to another network, then the apps in the App Service Environment can access resources in those extended networks. Traffic can be blocked by user configuration on the network.
-The multi-tenant version of Azure App Service contains numerous features to enable your apps to connect to your various networks. Those networking features enable your apps to act as if they were deployed in a virtual network. The apps in an App Service Environment v3 do not need any configuration to be in the virtual network. A benefit of using an App Service Environment over the multi-tenant service is that any network access controls to the App Service Environment hosted apps is external to the application configuration. With the apps in the multi-tenant service, you must enable the features on an app by app basis and use RBAC or policy to prevent any configuration changes.
+The multi-tenant version of Azure App Service contains numerous features to enable your apps to connect to your various networks. Those networking features enable your apps to act as if they were deployed in a virtual network. The apps in an App Service Environment v3 don't need any configuration to be in the virtual network. A benefit of using an App Service Environment over the multi-tenant service is that any network access controls to the App Service Environment hosted apps are external to the application configuration. With the apps in the multi-tenant service, you must enable the features on an app-by-app basis and use Role-based access control or policy to prevent any configuration changes.
## Feature differences
Compared to earlier versions of the App Service Environment, there are some diff
- There are no networking dependencies in the customer virtual network. You can secure all inbound and outbound traffic as desired. Outbound traffic can also be routed as desired.
- You can deploy it enabled for zone redundancy. Zone redundancy can only be set during creation and only in regions where all App Service Environment v3 dependencies are zone redundant.
-- You can deploy it on a dedicated host group. Host group deployments are not zone redundant.
-- Scaling is much faster than with App Service Environment v2. While scaling still is not immediate as in the multi-tenant service, it is a lot faster.
+- You can deploy it on a dedicated host group. Host group deployments aren't zone redundant.
+- Scaling is much faster than with App Service Environment v2. While scaling still isn't immediate as in the multi-tenant service, it's a lot faster.
- Front end scaling adjustments are no longer required. The App Service Environment v3 front ends automatically scale to meet needs and are deployed on better hosts.
- Scaling no longer blocks other scale operations within the App Service Environment v3 instance. Only one scale operation can be in effect for a combination of OS and size. For example, while your Windows small App Service plan was scaling, you could kick off a scale operation to run at the same time on a Windows medium or anything else other than Windows small.
- Apps in an internal VIP App Service Environment v3 can be reached across global peering. Access across global peering was not possible with previous versions.
There are a few features that are not available in App Service Environment v3 th
With App Service Environment v3, there is a different pricing model depending on the type of App Service Environment deployment you have. The three pricing models are:
-- **App Service Environment v3**: If App Service Environment is empty, there is a charge as if you had one instance of Windows I1v2. The one instance charge is not an additive charge but is only applied if the App Service Environment is empty.
-- **Zone redundant App Service Environment v3**: There is a minimum charge of nine instances. There is no added charge for availability zone support if you have nine or more App Service plan instances. If you have less than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, the difference between nine and the running instance count is charged as additional Windows I1v2 instances.
-- **Dedicated host App Service Environment v3**: With a dedicated host deployment, you are charged for two dedicated hosts per our pricing at App Service Environment v3 creation then a small percentage of the Isolated v2 rate per core charge as you scale.
+- **App Service Environment v3**: If App Service Environment is empty, there is a charge as if you had one instance of Windows I1v2. The one instance charge isn't an additive charge but is only applied if the App Service Environment is empty.
+- **Zone redundant App Service Environment v3**: There's a minimum charge of nine instances. There's no added charge for availability zone support if you have nine or more App Service plan instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, the difference between nine and the running instance count is charged as additional Windows I1v2 instances.
+- **Dedicated host App Service Environment v3**: With a dedicated host deployment, you're charged for two dedicated hosts per our pricing at App Service Environment v3 creation, then a small percentage of the Isolated v2 per-core rate as you scale.
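The zone redundant minimum charge described above amounts to a simple shortfall calculation. The following sketch is only an illustration of that rule (the function name is mine, not an Azure API):

```python
def zone_redundant_extra_instances(running_instances: int) -> int:
    """Windows I1v2 instances billed on top of your running instances.

    A zone redundant App Service Environment v3 has a nine-instance
    minimum charge: below nine running instances, the shortfall is
    billed as additional Windows I1v2 instances.
    """
    MINIMUM_CHARGED_INSTANCES = 9
    return max(0, MINIMUM_CHARGED_INSTANCES - running_instances)

# Six running instances -> three extra Windows I1v2 instances billed.
print(zone_redundant_extra_instances(6))   # 3
# Nine or more running instances -> no extra charge.
print(zone_redundant_extra_instances(12))  # 0
```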
Reserved Instance pricing for Isolated v2 is available and is described in [How reservation discounts apply to Azure App Service](../../cost-management-billing/reservations/reservation-discount-app-service.md). The pricing, along with reserved instance pricing, is available at [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/) under **Isolated v2 plan**.
app-service Using An Ase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using-an-ase.md
description: Learn how to create, publish, and scale apps in an App Service Envi
ms.assetid: a22450c4-9b8b-41d4-9568-c4646f4cf66b
Previously updated : 8/5/2021
Last updated : 01/26/2022
An ASE has 1 TB of storage for all the apps in the ASE. An App Service plan in t
## Monitoring
-As a customer, you should monitor the App Service plans and the individual apps running and take appropriate actions. For App Service Environment v2, you should also pay attention to the metrics around the platform infrastructure. These metrics will give you insights into how the platform infrastructure and frontend servers are doing, and you can take action if they are heavily utilized and you are not getting maximum throughput.
+As a customer, you should monitor the App Service plans and the individual apps running and take appropriate actions. For App Service Environment v2, you should also pay attention to the metrics around the platform infrastructure. These metrics will give you insights into how the platform infrastructure and frontend servers (multiRole) are doing, and you can take action if they're heavily utilized and you aren't getting maximum throughput.
-Through CLI you can configure the scale ratio of your frontend servers between 5 and 15 (default 15) App Service plan instances per frontend server. An App Service Environment will always have a minimum of two frontend servers. You can also increase the size of the frontend servers through CLI.
+Through Azure portal and CLI, you can configure the scale ratio of your frontend servers between 5 and 15 (default 15) App Service plan instances per frontend server. An App Service Environment will always have a minimum of two frontend servers. You can also increase the size of the frontend servers.
-You will see some metrics called Small/Medium/Large App Service Plan Workers and a sub-scope called multiRolePools/default. These are applicable to App Service Environment v1 only.
+The [metrics scope](../../azure-monitor/essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools) used to monitor the platform infrastructure is called `Microsoft.Web/hostingEnvironments/multiRolePools`.
+
+You'll see a scope called `Microsoft.Web/hostingEnvironments/workerPools`. The metrics here are only applicable to App Service Environment v1.
## Logging
You can integrate your ASE with Azure Monitor to send logs about the ASE to Azur
| Situation | Message |
|-|-|
| ASE is unhealthy | The specified ASE is unhealthy due to an invalid virtual network configuration. The ASE will be suspended if the unhealthy state continues. Ensure the guidelines defined here are followed: [Networking considerations for an App Service Environment](network-info.md). |
-| ASE subnet is almost out of space | The specified ASE is in a subnet that is almost out of space. There are {0} remaining addresses. Once these addresses are exhausted, the ASE will not be able to scale. |
+| ASE subnet is almost out of space | The specified ASE is in a subnet that is almost out of space. There are {0} remaining addresses. Once these addresses are exhausted, the ASE won't be able to scale. |
| ASE is approaching total instance limit | The specified ASE is approaching the total instance limit of the ASE. It currently contains {0} App Service Plan instances of a maximum 201 instances. |
-| ASE is unable to reach a dependency | The specified ASE is not able to reach {0}. Ensure the guidelines defined here are followed: [Networking considerations for an App Service Environment](network-info.md). |
+| ASE is unable to reach a dependency | The specified ASE isn't able to reach {0}. Ensure the guidelines defined here are followed: [Networking considerations for an App Service Environment](network-info.md). |
| ASE is suspended | The specified ASE is suspended. The ASE suspension may be due to an account shortfall or an invalid virtual network configuration. Resolve the root cause and resume the ASE to continue serving traffic. |
| ASE upgrade has started | A platform upgrade to the specified ASE has begun. Expect delays in scaling operations. |
| ASE upgrade has completed | A platform upgrade to the specified ASE has finished. |
To enable logging on your ASE:
![ASE diagnostic log settings][4]
-If you integrate with Log Analytics, you can see the logs by selecting **Logs** from the ASE portal and creating a query against **AppServiceEnvironmentPlatformLogs**. Logs are only emitted when your ASE has an event that will trigger it. If your ASE does not have such an event, there will not be any logs. To quickly see an example of logs in your Log Analytics workspace, perform a scale operation with one of the App Service plans in your ASE. You can then run a query against **AppServiceEnvironmentPlatformLogs** to see those logs.
+If you integrate with Log Analytics, you can see the logs by selecting **Logs** from the ASE portal and creating a query against **AppServiceEnvironmentPlatformLogs**. Logs are only emitted when your ASE has an event that will trigger it. If your ASE doesn't have such an event, there won't be any logs. To quickly see an example of logs in your Log Analytics workspace, perform a scale operation with one of the App Service plans in your ASE. You can then run a query against **AppServiceEnvironmentPlatformLogs** to see those logs.
**Creating an alert**
The pricing SKU called *Isolated* is for use only with ASEs. All App Service pla
In addition to the price of your App Service plans, there's a flat rate for the ASE itself. The flat rate doesn't change with the size of your ASE. It pays for the ASE infrastructure at a default scale rate of one additional front end for every 15 App Service plan instances.
-If the default scale rate of one front end for every 15 App Service plan instances is not fast enough, you can adjust the ratio at which front ends are added or the size of the front ends. When you adjust the ratio or size, you pay for the front-end cores that would not be added by default.
+If the default scale rate of one front end for every 15 App Service plan instances isn't fast enough, you can adjust the ratio at which front ends are added or the size of the front ends. When you adjust the ratio or size, you pay for the front-end cores that wouldn't be added by default.
For example, if you adjust the scale ratio to 10, a front end is added for every 10 instances in your App Service plans. The flat fee covers a scale rate of one front end for every 15 instances. With a scale ratio of 10, you pay a fee for the third front end that's added for the 10 App Service plan instances. You don't need to pay for it when you reach 15 instances because it was added automatically.
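The front-end example above can be modeled in a few lines. This sketch is an illustration only; the function names are mine, and the formula is inferred from the two-front-end minimum and the one-front-end-per-ratio rule described in this article, not from Azure's actual billing engine:

```python
def front_ends(plan_instances: int, scale_ratio: int = 15) -> int:
    """Front ends provisioned: a minimum of two, plus one more for
    every `scale_ratio` App Service plan instances (default ratio 15)."""
    return 2 + plan_instances // scale_ratio

def billable_front_ends(plan_instances: int, scale_ratio: int) -> int:
    """Front ends you pay extra for: those a custom ratio adds beyond
    what the default ratio of 15 would add (covered by the flat fee)."""
    return max(0, front_ends(plan_instances, scale_ratio)
                  - front_ends(plan_instances, 15))

# Ratio 10 with 10 plan instances: a third front end is added, and billed.
print(front_ends(10, 10), billable_front_ends(10, 10))  # 3 1
# At 15 instances the default ratio would have added it anyway: no charge.
print(billable_front_ends(15, 10))                      # 0
```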
To delete an ASE:
## ASE CLI
-There are command line capabilities to administer to an ASE. The az cli commands are noted below.
+There are command-line capabilities to administer an ASE. The Azure CLI commands are noted below.
```azurecli
C:\>az appservice ase --help
```
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-vnet-integration.md
Title: Integrate your app with an Azure virtual network
description: Integrate your app in Azure App Service with Azure virtual networks.
Previously updated : 10/20/2021
Last updated : 01/26/2022
When you scale up or down in size, the required address space is doubled for a s
Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When creating subnets in the Azure portal as part of integrating with the virtual network, a minimum size of `/27` is required.
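The subnet sizes mentioned here are plain CIDR arithmetic, which can be checked with Python's standard `ipaddress` module. The five reserved addresses per subnet are standard Azure virtual network behavior (the network address, the broadcast address, and three Azure-internal addresses):

```python
import ipaddress

# The recommended /26 subnet for regional VNet integration:
integration_subnet = ipaddress.ip_network("10.0.0.0/26")
print(integration_subnet.num_addresses)  # 64

# Azure reserves five addresses in every subnet, so fewer are
# actually assignable to scaled-out instances.
AZURE_RESERVED_PER_SUBNET = 5
print(integration_subnet.num_addresses - AZURE_RESERVED_PER_SUBNET)  # 59

# The portal's minimum of /27 leaves a much smaller pool:
print(ipaddress.ip_network("10.0.0.0/27").num_addresses)  # 32
```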
-When you want your apps in your plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the preexisting virtual network integration.
+When you want your apps in your plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing virtual network integration.
-You must have at least the following RBAC permissions on the subnet or at a higher level to configure regional virtual network integration through Azure portal, CLI or when setting the `virtualNetworkSubnetId` site property directly:
+You must have at least the following Role-based access control permissions on the subnet or at a higher level to configure regional virtual network integration through the Azure portal, the CLI, or when setting the `virtualNetworkSubnetId` site property directly:
| Action | Description |
|-|-|
You must have at least the following RBAC permissions on the subnet or at a high
| Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition |
| Microsoft.Network/virtualNetworks/subnets/join/action | Joins a virtual network |
+If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the Microsoft.Web resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it will also automatically be registered when creating the first web app in a subscription.
+ ### Routes

There are two types of routing to consider when you configure regional virtual network integration. Application routing defines what traffic is routed from your application and into the virtual network. Network routing is the ability to control how traffic is routed from your virtual network and out.
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
docker-compose down
The Form Recognizer containers send billing information to Azure by using a Form Recognizer resource on your Azure account.
-Queries to the container are billed at the pricing tier of the Azure resource that's used for the `ApiKey`. You will be billed for each container instance used to process your documents and images. Thus, If you use the business card feature, you will be billed for the Form Recognizer `BusinessCard` and `Compuer Vision Read` container instances. For the invoice feature, you will be billed for the Form Recognizer `Invoice` and `Layout` container instances. *See*, [Form Recognizer](https://azure.microsoft.com/pricing/details/form-recognizer/) and Computer Vision [Read feature](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) container pricing.
+Queries to the container are billed at the pricing tier of the Azure resource that's used for the `ApiKey`. You will be billed for each container instance used to process your documents and images. Thus, if you use the business card feature, you will be billed for the Form Recognizer `BusinessCard` and `Computer Vision Read` container instances. For the invoice feature, you will be billed for the Form Recognizer `Invoice` and `Layout` container instances. *See* [Form Recognizer](https://azure.microsoft.com/pricing/details/form-recognizer/) and Computer Vision [Read feature](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) container pricing.
Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. Containers must be enabled to communicate billing information with the billing endpoint at all times. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
azure-arc Migrate To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/migrate-to-managed-instance.md
In this step, we will connect to the source SQL Server and create the backup fil
1. Similarly, prepare the **BACKUP DATABASE** command as follows to create a backup file in the blob container. Once you have substituted the values, run the query.

    ```sql
- BACKUP DATABASE <database name> TO URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>'
+ BACKUP DATABASE <database name> TO URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/<file name>.bak'
    ```

1. Open Azure Storage Explorer and validate that the backup file created in the previous step is visible in the blob container.
+Learn more about backup to URL here:
+
+- [SQL Server Backup and Restore with Azure Blob Storage](/sql/relational-databases/backup-restore/sql-server-backup-and-restore-with-microsoft-azure-blob-storage-service)
+
+- [Back up to URL docs](/sql/relational-databases/backup-restore/sql-server-backup-to-url)
+
+- [Back up to URL using SQL Server Management Studio (SSMS)](/sql/relational-databases/tutorial-sql-server-backup-and-restore-to-azure-blob-storage-service)
### Step 4: Restore the database from Azure blob storage to SQL Managed Instance - Azure Arc

1. From Azure Data Studio, log in and connect to the SQL Managed Instance - Azure Arc.
In this step, we will connect to the source SQL Server and create the backup fil
1. Prepare and run the **RESTORE DATABASE** command as follows to restore the backup file to a database on SQL Managed Instance - Azure Arc.

    ```sql
- RESTORE DATABASE <database name> FROM URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/<file name>'
+ RESTORE DATABASE <database name> FROM URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/<file name>.bak'
    WITH MOVE 'Test' TO '/var/opt/mssql/data/<file name>.mdf'
    ,MOVE 'Test_log' TO '/var/opt/mssql/data/<file name>.ldf'
    ,RECOVERY
In this step, we will connect to the source SQL Server and create the backup fil
    GO
    ```
-Learn more about backup to URL here:
-
-[Backup to URL docs](/sql/relational-databases/backup-restore/sql-server-backup-to-url)
-
-[Backup to URL using SQL Server Management Studio (SSMS)](/sql/relational-databases/tutorial-sql-server-backup-and-restore-to-azure-blob-storage-service)
## Method 2: Copy the backup file into an Azure SQL Managed Instance - Azure Arc pod using kubectl
Backup the SQL Server database to your local file path like any typical SQL Serv
```sql
BACKUP DATABASE Test
-TO DISK = 'c:\tmp\test.bak'
+TO DISK = 'C:\Backupfiles\test.bak'
WITH FORMAT, MEDIANAME = 'Test';
GO
```
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/azure-rbac.md
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
1. SSH into every master node of the cluster and take the following steps:
+ **If your `kube-apiserver` is a [static pod](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/):**
+
+ 1. The `azure-arc-guard-manifests` secret in the `kube-system` namespace contains two files `guard-authn-webhook.yaml` and `guard-authz-webhook.yaml`. Copy these files to the `/etc/guard` directory of the node.
+ 1. Open the `apiserver` manifest in edit mode:

    ```console
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
    ```

1. Add the following specification under `volumes`:
-
+
```yml - name: azure-rbac
- secret:
- secretName: azure-arc-guard-manifests
+ hostPath:
+ path: /etc/guard
+ type: Directory
    ```

1. Add the following specification under `volumeMounts`:

    ```yml
    - mountPath: /etc/guard
- name: azure-rbac
- readOnly: true
+ name: azure-rbac
+ readOnly: true
```
- 1. Add the following `apiserver` arguments:
+ **If your `kube-apiserver` is not a static pod:**
- ```yml
- - --authentication-token-webhook-config-file=/etc/guard/guard-authn-webhook.yaml
- - --authentication-token-webhook-cache-ttl=5m0s
- - --authorization-webhook-cache-authorized-ttl=5m0s
- - --authorization-webhook-config-file=/etc/guard/guard-authz-webhook.yaml
- - --authorization-webhook-version=v1
- - --authorization-mode=Node,Webhook,RBAC
+ 1. Open the `apiserver` manifest in edit mode:
+
+ ```console
+ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
    ```
+
+ 1. Add the following specification under `volumes`:
- If the Kubernetes cluster is version 1.19.0 or later, you also need to set the following `apiserver` argument:
+ ```yml
+ - name: azure-rbac
+ secret:
+ secretName: azure-arc-guard-manifests
+ ```
+
+ 1. Add the following specification under `volumeMounts`:
```yml
- - --authentication-token-webhook-version=v1
+ - mountPath: /etc/guard
+ name: azure-rbac
+ readOnly: true
```
- 1. Save and close the editor to update the `apiserver` pod.
+1. Add the following `apiserver` arguments:
+
+ ```yml
+ - --authentication-token-webhook-config-file=/etc/guard/guard-authn-webhook.yaml
+ - --authentication-token-webhook-cache-ttl=5m0s
+ - --authorization-webhook-cache-authorized-ttl=5m0s
+ - --authorization-webhook-config-file=/etc/guard/guard-authz-webhook.yaml
+ - --authorization-webhook-version=v1
+ - --authorization-mode=Node,RBAC,Webhook
+ ```
+
+ If the Kubernetes cluster is version 1.19.0 or later, you also need to set the following `apiserver` argument:
+
+ ```yml
+ - --authentication-token-webhook-version=v1
+ ```
+
+1. Save and close the editor to update the `apiserver` pod.
### Cluster created by using Cluster API
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
    authentication-token-webhook-cache-ttl: 5m0s
    authentication-token-webhook-config-file: /etc/guard/guard-authn-webhook.yaml
    authentication-token-webhook-version: v1
- authorization-mode: Node,Webhook,RBAC
+ authorization-mode: Node,RBAC,Webhook
    authorization-webhook-cache-authorized-ttl: 5m0s
    authorization-webhook-config-file: /etc/guard/guard-authz-webhook.yaml
    authorization-webhook-version: v1
azure-cache-for-redis Cache How To Redis Cli Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-redis-cli-tool.md
Title: Use redis-cli with Azure Cache for Redis
description: Learn how to use *redis-cli.exe* as a command-line tool for interacting with an Azure Cache for Redis as a client
Previously updated : 02/08/2021
Last updated : 01/25/2022

# Use the Redis command-line tool with Azure Cache for Redis
The tool is available for Windows platforms by downloading the [Redis command-line tools for Windows](https://github.com/MSOpenTech/redis/releases/).
-If you want to run the command-line tool on another platform, download official Redis from [https://redis.io/download](https://redis.io/download).
+If you want to run the command-line tool on another platform, download open-source Redis from [https://redis.io/download](https://redis.io/download).
## Gather cache access information
You can gather the information needed to access the cache using three methods:
1. Azure CLI using [az redis list-keys](/cli/azure/redis#az_redis_list_keys)
2. Azure PowerShell using [Get-AzRedisCacheKey](/powershell/module/az.rediscache/Get-AzRedisCacheKey)
-3. Using the Azure portal.
+3. Using the Azure portal
-In this section, you will retrieve the keys from the Azure portal.
+In this section, you retrieve the keys from the Azure portal.
[!INCLUDE [redis-cache-create](includes/redis-cache-access-keys.md)]
In this section, you will retrieve the keys from the Azure portal.
With Azure Cache for Redis, only the TLS port (6380) is enabled by default. The `redis-cli.exe` command-line tool doesn't support TLS. You have two configuration choices to use it:
-1. [Enable the non-TLS port (6379)](cache-configure.md#access-ports) - **This configuration is not recommended** because in this configuration, the access keys are sent via TCP in clear text. This change can compromise access to your cache. The only scenario where you might consider this configuration is when you are just accessing a test cache.
+1. [Enable the non-TLS port (6379)](cache-configure.md#access-ports) - **This configuration is not recommended** because in this configuration, the access keys are sent via TCP in clear text. This change can compromise access to your cache. The only scenario where you might consider this configuration is when you're just accessing a test cache.
2. Download and install [stunnel](https://www.stunnel.org/downloads.html). Run **stunnel GUI Start** to start the server.
- Right-click the taskbar icon for the stunnel server and select **Show Log Window**.
+ Right-click the taskbar icon for the *stunnel* server and select **Show Log Window**.
- On the stunnel Log Window menu, select **Configuration** > **Edit Configuration** to open the current configuration file.
+ On the *stunnel* Log Window menu, select **Configuration** > **Edit Configuration** to open the current configuration file.
Add the following entry for *redis-cli.exe* under the **Service definitions** section. Insert your actual cache name in place of `yourcachename`.
With Azure Cache for Redis, only the TLS port (6380) is enabled by default. The
## Connect using the Redis command-line tool
-When using stunnel, run *redis-cli.exe*, and pass only your *port*, and *access key* (primary or secondary) to connect to the cache.
+When using *stunnel*, run *redis-cli.exe*, and pass only your *port*, and *access key* (primary or secondary) to connect to the cache.
```
redis-cli.exe -p 6380 -a YourAccessKey
```
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-python.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
![Choose Create a new project](./media/functions-create-first-function-vs-code/create-new-project.png)
-1. Choose a directory location for your project workspace and choose **Select**.
+1. Choose a directory location for your project workspace and choose **Select**. It is recommended that you create a new folder or choose an empty folder as the project workspace.
> [!NOTE]
> These steps were designed to be completed outside of a workspace. In this case, do not select a project folder that is part of a workspace.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
The configuration is specific to Python function apps. It defines the prioritiza
|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`0`| Prioritize loading the Python libraries from internal Python worker's dependencies. Third-party libraries defined in requirements.txt may be shadowed. |
|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`1`| Prioritize loading the Python libraries from application's package defined in requirements.txt. This prevents your libraries from colliding with internal Python worker's libraries. |
+## PYTHON_ENABLE_DEBUG_LOGGING
+Enables debug-level logging in a Python function app. A value of `1` enables debug-level logging. Without this setting, only information-level and higher logs are sent from the Python worker to the Functions host. Use this setting when debugging or tracing your Python function executions.
+
+|Key|Sample value|
+|||
+|PYTHON_ENABLE_DEBUG_LOGGING|`1`|
+
+When debugging Python functions, make sure to also set a debug or trace [logging level](functions-host-json.md#logging) in the host.json file, as needed. To learn more, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
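The level-based filtering this setting controls can be pictured with Python's standard `logging` module: the worker produces records at every level, and only records at or above a threshold are forwarded. This is an illustrative analogy using made-up names, not the Functions host's actual implementation:

```python
import io
import logging

def forwarded_messages(threshold: int) -> list[str]:
    """Emit one DEBUG and one INFO record; return what a handler
    with the given level threshold let through."""
    buf = io.StringIO()
    logger = logging.getLogger(f"sketch-{threshold}")
    logger.setLevel(logging.DEBUG)   # the worker produces both levels
    handler = logging.StreamHandler(buf)
    handler.setLevel(threshold)      # the forwarding threshold
    logger.addHandler(handler)
    logger.debug("debug detail")
    logger.info("info summary")
    logger.removeHandler(handler)
    return buf.getvalue().splitlines()

# Default behavior: only information-level and above get through.
print(forwarded_messages(logging.INFO))   # ['info summary']
# With debug logging enabled, debug records are forwarded too.
print(forwarded_messages(logging.DEBUG))  # ['debug detail', 'info summary']
```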
## PYTHON\_ENABLE\_WORKER\_EXTENSIONS

The configuration is specific to Python function apps. Setting this to `1` allows the worker to load in [Python worker extensions](functions-reference-python.md#python-worker-extensions) defined in requirements.txt. It enables your function app to access new features provided by third-party packages. It may also change the behavior of function load and invocation in your app. Ensure the extension you choose is trustworthy, as you bear the risk of using it. Azure Functions gives no express warranties to any extensions. For how to use an extension, visit the extension's manual page or readme doc. By default, this value is set to `0`.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false
Previously updated : 12/07/2021
Last updated : 01/19/2022

# Compare Azure Government and global Azure
The following Azure SQL Managed Instance **features are not currently available*
- Long-term retention
+## Developer tools
+
+This section outlines variations and considerations when using Developer tools in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=load-testing,app-configuration,devtest-lab,lab-services,azure-devops&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+
+### [Enterprise Dev/Test subscription offer](https://azure.microsoft.com/offers/ms-azr-0148p/)
+
+- Enterprise Dev/Test subscription offer in existing or separate tenant is currently available only in Azure public as documented in [Azure EA portal administration](../cost-management-billing/manage/ea-portal-administration.md#enterprise-devtest-offer).
## Identity

This section outlines variations and considerations when using Identity services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=information-protection,active-directory-ds,active-directory&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
description: This article tracks FedRAMP and DoD compliance scope for Azure, Dyn
Previously updated : 12/07/2021
+recommendations: false
Last updated : 01/25/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
Last updated 12/07/2021
Microsoft Azure cloud environments meet demanding US government compliance requirements that produce formal authorizations, including: - [Federal Risk and Authorization Management Program](https://www.fedramp.gov/) (FedRAMP)-- Department of Defense (DoD) Cloud Computing [Security Requirements Guide](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html) (SRG) Impact Level (IL) 2, 4, 5, and 6
+- Department of Defense (DoD) Cloud Computing [Security Requirements Guide](https://public.cyber.mil/dccs/dccs-documents/) (SRG) Impact Level (IL) 2, 4, 5, and 6
- [Intelligence Community Directive (ICD) 503](http://www.dni.gov/files/documents/ICD/ICD_503.pdf) - [Joint Special Access Program (SAP) Implementation Guide (JSIG)](https://www.dcsa.mil/portals/91/documents/ctp/nao/JSIG_2016April11_Final_(53Rev4).pdf)
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last updated: December 2021*
+*Last updated: January 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| Service | FedRAMP High | DoD IL2 | | - |::|:-:|
+| [AI Builder](/ai-builder/overview) | &#x2705; | &#x2705; |
| [API Management](https://azure.microsoft.com/services/api-management/) | &#x2705; | &#x2705; | | [App Configuration](https://azure.microsoft.com/services/app-configuration/) | &#x2705; | &#x2705; | | [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Arc-enabled Servers](../../azure-arc/servers/overview.md) | &#x2705; | &#x2705; | | [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; | | [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; |
-| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; |
| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | | [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | | [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | &#x2705; | &#x2705; | | [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) | &#x2705; | &#x2705; |
-| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) | &#x2705; | &#x2705; |
+| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) (formerly Azure Search) | &#x2705; | &#x2705; |
| [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) | &#x2705; | &#x2705; | | [Azure Cost Management and Billing](https://azure.microsoft.com/services/cost-management/) | &#x2705; | &#x2705; | | [Azure Data Box](https://azure.microsoft.com/services/databox/) **&ast;** | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) | &#x2705; | &#x2705; | | [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | &#x2705; | &#x2705; | | [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; |
-| [Azure Databricks](https://azure.microsoft.com/services/databricks/) **&ast;&ast;** | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Databricks](https://azure.microsoft.com/services/databricks/) **&ast;&ast;** | &#x2705; | &#x2705; |
| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | | [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; | | [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Health Bot](/healthbot/) | &#x2705; | &#x2705; | | [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/) | &#x2705; | &#x2705; | | [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; |
-| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; |
| [Azure Immersive Reader](https://azure.microsoft.com/services/immersive-reader/) | &#x2705; | &#x2705; | | [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | &#x2705; | &#x2705; | | [Azure Internet Analyzer](https://azure.microsoft.com/services/internet-analyzer/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Maps](https://azure.microsoft.com/services/azure-maps/) | &#x2705; | &#x2705; | | [Azure Media Services](https://azure.microsoft.com/services/media-services/) | &#x2705; | &#x2705; | | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; |
-| [Azure Monitor](https://azure.microsoft.com/services/monitor/) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Monitor](https://azure.microsoft.com/services/monitor/) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; |
| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | &#x2705; | &#x2705; | | [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/) | &#x2705; | &#x2705; | | [Azure Peering Service](../../peering-service/about.md) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | | [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; |
-| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; |
| [Azure Sphere](https://azure.microsoft.com/services/azure-sphere/) | &#x2705; | &#x2705; |
-| [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) (incl. [Azure SQL Managed Instance](https://azure.microsoft.com/products/azure-sql/managed-instance/)) | &#x2705; | &#x2705; |
+| [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) | &#x2705; | &#x2705; |
| [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | | [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | &#x2705; | &#x2705; | | [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | &#x2705; | &#x2705; | | [Azure Time Series Insights](https://azure.microsoft.com/services/time-series-insights/) | &#x2705; | &#x2705; |
-| [Azure Video Analyzer](https://azure.microsoft.com/products/video-analyzer/) | &#x2705; | &#x2705; |
| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | | [Azure VMware Solution](https://azure.microsoft.com/services/azure-vmware/) | &#x2705; | &#x2705; |
-| [Azure Web Application Firewall)](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; |
+| [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; |
| [Batch](https://azure.microsoft.com/services/batch/) | &#x2705; | &#x2705; | | [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | | [Cognitive
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | | [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; | | [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
-| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; |
+| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) (incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md)) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; |
-| [Power AI Builder](/ai-builder/overview) | &#x2705; | &#x2705; |
| [Power Apps](/powerapps/powerapps-overview) | &#x2705; | &#x2705; | | [Power Apps Portal](https://powerapps.microsoft.com/portals/) | &#x2705; | &#x2705; | | [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | &#x2705; | &#x2705; | | [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | | [Storage: Data Movement](../../storage/common/storage-use-data-movement-library.md) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Storage: Disks](https://azure.microsoft.com/services/storage/disks/) (incl. [managed disks](../../virtual-machines/managed-disks-overview.md)) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Storage: Files](https://azure.microsoft.com/services/storage/files/) (incl. [Azure File Sync](../../storage/file-sync/file-sync-introduction.md)) | &#x2705; | &#x2705; | | [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) | &#x2705; | &#x2705; | | [Storage: Tables](https://azure.microsoft.com/services/storage/tables/) | &#x2705; | &#x2705; | | [StorSimple](https://azure.microsoft.com/services/storsimple/) | &#x2705; | &#x2705; | | [Traffic Manager](https://azure.microsoft.com/services/traffic-manager/) | &#x2705; | &#x2705; |
+| [Video Analyzer for Media (formerly Video Indexer)](../../azure-video-analyzer/video-analyzer-for-media-docs/index.yml) | &#x2705; | &#x2705; |
| [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/) | &#x2705; | &#x2705; | | [Virtual Machines (incl. Reserved Instances)](https://azure.microsoft.com/services/virtual-machines/) | &#x2705; | &#x2705; | | [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last updated: December 2021*
+*Last updated: January 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| Service | FedRAMP High | DoD IL2 | DoD IL4 | DoD IL5 | DoD IL6 | | - |::|:-:|:-:|:-:|:-:|
+| [AI Builder](/ai-builder/overview) | &#x2705; | &#x2705; | | | |
| [API Management](https://azure.microsoft.com/services/api-management/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [App Configuration](https://azure.microsoft.com/services/app-configuration/) | &#x2705; | &#x2705; | | | |
+| [App Configuration](https://azure.microsoft.com/services/app-configuration/) | &#x2705; | &#x2705; | &#x2705; |&#x2705; | |
| [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Automation](https://azure.microsoft.com/services/automation/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Active Directory (Free and Basic)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) (formerly Azure Search) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Cost Management and Billing](https://azure.microsoft.com/services/cost-management/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure CXP Nomination Portal](https://cxp.azure.com/nominationportal/nominationform/fasttrack)| &#x2705; | &#x2705; | | | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Databricks](https://azure.microsoft.com/services/databricks/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure DNS](https://azure.microsoft.com/services/dns/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Firewall Manager](https://azure.microsoft.com/services/firewall-manager/#overview) | &#x2705; | &#x2705; | | | |
-| [Azure Form Recognizer](https://azure.microsoft.com/services/form-recognizer/) | &#x2705; | &#x2705; | | | |
+| [Azure Form Recognizer](https://azure.microsoft.com/services/form-recognizer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Functions](https://azure.microsoft.com/services/functions/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) **&ast;&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Peering Service](../../peering-service/about.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Policy's guest configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | | | |
+| [Azure Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Scheduler](../../scheduler/scheduler-intro.md) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; | | | | | [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure SQL Managed Instance](https://azure.microsoft.com/products/azure-sql/managed-instance/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | | | |
+| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
-| [Azure Web Application Firewall)](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Batch](https://azure.microsoft.com/services/batch/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Cognitive
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | | | |
+| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | | | |
+| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; | | [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; | | [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/) (formerly Azure Security Center) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security) (formerly Microsoft Cloud App Security) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) (formerly Microsoft Defender Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Microsoft Defender for Identity](/defender-for-identity/what-is) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Microsoft Defender for Identity](/defender-for-identity/what-is) (formerly Azure Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/) (formerly Azure Security for IoT) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Graph](/graph/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Microsoft Intune](/mem/intune/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Planned Maintenance for VMs](../../virtual-machines/maintenance-control-portal.md) | &#x2705; | &#x2705; | | | |
-| [Power AI Builder](/ai-builder/overview) | &#x2705; | &#x2705; | | | |
+| [Planned Maintenance for VMs](../../virtual-machines/maintenance-and-updates.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Power Apps](/powerapps/powerapps-overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power BI](https://powerbi.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Data Integrator](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Query Online](https://powerquery.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | &#x2705; | &#x2705; | | | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Private Link](https://azure.microsoft.com/services/private-link/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Service Bus](https://azure.microsoft.com/services/service-bus/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Virtual Machines](https://azure.microsoft.com/services/virtual-machines/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) | &#x2705; | &#x2705; | | | &#x2705; |
+| [Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [VPN Gateway](https://azure.microsoft.com/services/vpn-gateway/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Web Apps (App Service)](https://azure.microsoft.com/services/app-service/web/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | **&ast;** Authorizations for edge devices (such as Azure Data Box and Azure Stack Edge) apply only to Azure services that support on-premises, customer-managed devices. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative. **&ast;&ast;** Azure Information Protection (AIP) is part of the Microsoft Information Protection (MIP) solution - it extends the labeling and classification functionality provided by Microsoft 365. Before AIP can be used for DoD workloads at a given impact level (IL), the corresponding Microsoft 365 services must be authorized at the same IL.+
+## Next steps
+
+Learn more about Azure Government:
+
+- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)
+- [Azure Government overview](../documentation-government-welcome.md)
+- [Azure Government security](../documentation-government-plan-security.md)
+- [FedRAMP High](/azure/compliance/offerings/offering-fedramp)
+- [DoD Impact Level 2](/azure/compliance/offerings/offering-dod-il2)
+- [DoD Impact Level 4](/azure/compliance/offerings/offering-dod-il4)
+- [DoD Impact Level 5](/azure/compliance/offerings/offering-dod-il5)
+- [DoD Impact Level 6](/azure/compliance/offerings/offering-dod-il6)
+- [Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md)
+- [Azure guidance for secure isolation](../azure-secure-isolation-guidance.md)
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-csp-list.md
Title: Azure Government authorized reseller list
-description: Comprehensive list of Azure Government cloud solution providers, resellers, and distributors.
+description: Comprehensive list of Azure Government Cloud Solution Providers, resellers, and distributors.
cloud: gov Previously updated : 10/05/2021 Last updated : 01/19/2022 # Azure Government authorized reseller list
-Since the launch of [Azure Government services in the Cloud Solution Provider (CSP) program)](https://azure.microsoft.com/blog/announcing-microsoft-azure-government-services-in-the-cloud-solution-provider-program/), we have worked with the partner community to bring them the benefits of this channel, enable them to resell Azure Government, and help them grow their business while providing the cloud services their customers need.
+Since the launch of [Azure Government services in the Cloud Solution Provider (CSP) program](https://azure.microsoft.com/blog/announcing-microsoft-azure-government-services-in-the-cloud-solution-provider-program/), we've worked with the partner community to bring them the benefits of this channel. Our goal is to enable the partner community to resell Azure Government and help them grow their business while providing customers with the cloud services they need.
Below you can find a list of all the authorized Cloud Solution Providers (CSPs), Agreement for Online Services for Government (AOS-G) partners, and Licensing Solution Providers (LSPs) that can transact Azure Government. This list will be updated as new partners are onboarded.
|[Casserly Consulting](https://www.casserlyconsulting.com)| |[Carahsoft Technology Corporation](https://www.carahsoft.com/)| |[Castalia Systems](https://www.castaliasystems.com)|
+|[cb20](https://cb20.com/)|
|[CB5 Solutions](https://www.cbfive.com/)| |[cBEYONData](https://cbeyondata.com/)| |[CBTS](https://www.cbts.com/)|
|[CodeLynx, LLC](http://www.codelynx.com/)| |[Columbus US, Inc.](https://www.columbusglobal.com)| |[Competitive Innovations, LLC](https://www.cillc.com)|
-|[Computer Professionals International](https://cb20.com/)|
|[Computer Solutions Inc.](http://cs-inc.co/)| |[Computex Technology Solutions](http://www.computex-inc.com/)| |[ConvergeOne](https://www.convergeone.com)|
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-impact-level-5.md
Previously updated : 11/17/2021
+recommendations: false
Last updated : 01/20/2022 # Isolation guidelines for Impact Level 5 workloads
-Azure Government supports applications that use Impact Level 5 (IL5) data in all available regions. IL5 requirements are defined in the [US Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG)](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html#3INFORMATIONSECURITYOBJECTIVES/IMPACTLEVELS). IL5 workloads have a higher degree of impact to the DoD and must be secured to a higher standard. When you deploy these workloads on Azure Government, you can meet their isolation requirements in various ways. The guidance in this document addresses configurations and settings needed to meet the IL5 isolation requirements. We'll update this document as we enable new isolation options and the Defense Information Systems Agency (DISA) authorizes new services for IL5 data.
+Azure Government supports applications that use Impact Level 5 (IL5) data in all available regions. IL5 requirements are defined in the [US Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG)](https://public.cyber.mil/dccs/dccs-documents/). IL5 workloads have a higher degree of impact to the DoD and must be secured to a higher standard. When you deploy these workloads on Azure Government, you can meet their isolation requirements in various ways. The guidance in this document addresses configurations and settings needed to meet the IL5 isolation requirements. We'll update this document as we enable new isolation options and the Defense Information Systems Agency (DISA) authorizes new services for IL5 data.
## Background
-In January 2017, DISA awarded the IL5 Provisional Authorization (PA) to [Azure Government](https://azure.microsoft.com/global-infrastructure/government/get-started/), making it the first IL5 PA awarded to a hyperscale cloud provider. The PA covered two Azure Government regions (US DoD Central and US DoD East) that are [dedicated to the DoD](https://azure.microsoft.com/global-infrastructure/government/dod/). Based on DoD mission owner feedback and evolving security capabilities, Microsoft has partnered with DISA to expand the IL5 PA boundary in December 2018 to cover the remaining Azure Government regions: US Gov Arizona, US Gov Texas, and US Gov Virginia. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
+In January 2017, DISA awarded the [IL5 Provisional Authorization](/azure/compliance/offerings/offering-dod-il5) (PA) to [Azure Government](https://azure.microsoft.com/global-infrastructure/government/get-started/), making it the first IL5 PA awarded to a hyperscale cloud provider. The PA covered two Azure Government regions (US DoD Central and US DoD East) that are [dedicated to the DoD](https://azure.microsoft.com/global-infrastructure/government/dod/). Based on DoD mission owner feedback and evolving security capabilities, Microsoft has partnered with DISA to expand the IL5 PA boundary in December 2018 to cover the remaining Azure Government regions: US Gov Arizona, US Gov Texas, and US Gov Virginia. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
Azure Government is available to US federal, state, local, and tribal governments and their partners. The IL5 expansion to Azure Government honors the isolation requirements mandated by the DoD. Azure Government continues to provide more PaaS services suitable for DoD IL5 workloads than any other cloud services environment.
You need to address two key areas for Azure services in IL5 scope: compute isola
### Compute isolation
-IL5 separation requirements are stated in the SRG [Section 5.2.2.3](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html#5.2LegalConsiderations). The SRG focuses on compute separation during "processing" of IL5 data. This separation ensures that a virtual machine that could potentially compromise the physical host can't affect a DoD workload. To remove the risk of runtime attacks and ensure long running workloads aren't compromised from other workloads on the same host, **all IL5 virtual machines and virtual machine scale sets** should be isolated via [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/) or [isolated virtual machines](../virtual-machines/isolation.md). Doing so provides a dedicated physical server to host your Azure Virtual Machines (VMs) for Windows and Linux.
+IL5 separation requirements are stated in Section 5.2.2.3 (Page 51) of the [Cloud Computing SRG](https://public.cyber.mil/dccs/dccs-documents/). The SRG focuses on compute separation during "processing" of IL5 data. This separation ensures that a virtual machine that could potentially compromise the physical host can't affect a DoD workload. To remove the risk of runtime attacks and ensure long running workloads aren't compromised from other workloads on the same host, **all IL5 virtual machines and virtual machine scale sets** should be isolated via [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/) or [isolated virtual machines](../virtual-machines/isolation.md). Doing so provides a dedicated physical server to host your Azure Virtual Machines (VMs) for Windows and Linux.
For services where the compute processes are obfuscated from access by the owner and stateless in their processing of data, you should accomplish isolation by focusing on the data being processed and how it's stored and retained. This approach ensures the data is stored in protected mediums. It also ensures the data isn't present on these services for extended periods unless it's encrypted as needed. ### Storage isolation
+The DoD requirements for encrypting data at rest are provided in Section 5.11 (Page 122) of the [Cloud Computing SRG](https://public.cyber.mil/dccs/dccs-documents/). DoD emphasizes encrypting all data at rest stored in virtual machine virtual hard drives, mass storage facilities at the block or file level, and database records where the mission owner does not have sole control over the database service. For cloud applications where encrypting data at rest with DoD key control is not possible, mission owners must perform a risk analysis with relevant data owners before transmitting data into a cloud service offering.
+ In a recent PA for Azure Government, DISA approved logical separation of IL5 from other data via cryptographic means. In Azure, this approach involves data encryption via keys that are maintained in Azure Key Vault and stored in [FIPS 140 validated](/azure/compliance/offerings/offering-fips-140-2) Hardware Security Modules (HSMs). The keys are owned and managed by the IL5 system owner (also known as customer-managed keys).
Here's how this approach applies to
This approach ensures all key material for decrypting data is stored separately from the data itself using a hardware-based key management solution.
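The key-separation model described above (data encrypted with a key that is itself wrapped by a customer-controlled key held in an HSM) can be sketched conceptually. The following is a minimal illustration only: the XOR wrap and the SHA-256 counter keystream stand in for the real algorithms, and the variable names are hypothetical; Azure Key Vault's actual wrap/unwrap operations use hardened algorithms such as RSA-OAEP and never release the key-encryption key.

```python
import secrets
import hashlib

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def keystream(key: bytes, length: int) -> bytes:
    """Counter-mode keystream from SHA-256, for illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

# KEK: stands in for the customer-managed key held in a Key Vault HSM.
kek = secrets.token_bytes(32)

# DEK: generated by the service to encrypt the data itself.
dek = secrets.token_bytes(32)

plaintext = b"IL5 workload data"
ciphertext = xor_bytes(plaintext, keystream(dek, len(plaintext)))

# Only the wrapped DEK is stored alongside the data; the KEK never
# leaves the customer-controlled key store.
wrapped_dek = xor_bytes(dek, kek)

# Decryption requires unwrapping the DEK with the customer's KEK first.
recovered_dek = xor_bytes(wrapped_dek, kek)
recovered = xor_bytes(ciphertext, keystream(recovered_dek, len(ciphertext)))
assert recovered == plaintext
```

The point of the hierarchy is that whoever holds the data and the wrapped DEK still cannot decrypt anything without access to the KEK, which stays under the IL5 system owner's control.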
-The DoD requirements for encrypting data at rest are provided in the SRG [Section 5.11](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html#5.11EncryptionofData-at-RestinCommercialCloudStorage). DoD emphasizes encrypting all data at rest stored in virtual machine virtual hard drives, mass storage facilities at the block or file level, and database records where the mission owner does not have sole control over the database service. For cloud applications where encrypting data at rest with DoD key control is not possible, mission owners must perform a risk analysis with relevant data owners before transmitting data into a cloud service offering.
- ## Applying this guidance IL5 guidelines require workloads to be deployed with a high degree of security, isolation, and control. The following configurations are required *in addition* to any other configurations or controls needed to meet IL5 requirements. Network isolation, access controls, and other necessary security measures aren't necessarily addressed in this article. > [!NOTE]
-> This article tracks Azure services that have received DoD IL5 PA and that require extra configuration options to meet IL5 isolation requirmements. Services with IL5 PA that do not require any extra configuration options are not mentioned in this article. For a list of services in scope for DoD IL5 PA, see **[Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).**
+> This article tracks Azure services that have received DoD IL5 PA and that require extra configuration options to meet IL5 isolation requirements. Services with IL5 PA that do not require any extra configuration options are not mentioned in this article. For a list of services in scope for DoD IL5 PA, see **[Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).**
Be sure to review the entry for each service you're using and ensure that all isolation requirements are implemented.
For Containers services availability in Azure Government, see [Products availabl
For Databases services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sql,sql-server-stretch-database,redis-cache,database-migration,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
-### [Azure Cache for Redis](https://azure.microsoft.com/services/cache/)
-
-Azure Cache for Redis supports Impact Level 5 workloads in Azure Government with no extra configuration required.
- ### [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) - Data stored in your Azure Cosmos account is automatically and seamlessly encrypted with keys managed by Microsoft (service-managed keys). Optionally, you can choose to add a second layer of encryption with keys you manage (customer-managed keys). For more information, see [Configure customer-managed keys for your Azure Cosmos account with Azure Key Vault](../cosmos-db/how-to-setup-cmk.md).
For more information about how to enable this Azure Storage encryption feature,
### [StorSimple](https://azure.microsoft.com/services/storsimple/) - To help ensure the security and integrity of data moved to the cloud, StorSimple allows you to [define cloud storage encryption keys](../storsimple/storsimple-8000-security.md#storsimple-data-protection). You specify the cloud storage encryption key when you create a volume container. ++
+## Next steps
+
+Learn more about Azure Government:
+
+- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)
+- [Azure Government overview](./documentation-government-welcome.md)
+- [DoD Impact Level 5](/azure/compliance/offerings/offering-dod-il5)
+- [DoD in Azure Government](./documentation-government-overview-dod.md)
+- [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure guidance for secure isolation](./azure-secure-isolation-guidance.md)
+
+Start using Azure Government:
+
+- [Guidance for developers](./documentation-government-developer-guide.md)
+- [Connect with the Azure Government portal](./documentation-government-get-started-connect-with-portal.md)
+- [Deploy STIG-compliant Linux VMs](./documentation-government-stig-linux-vm.md)
+- [Deploy STIG-compliant Windows VMs](./documentation-government-stig-windows-vm.md)
azure-government Documentation Government Overview Dod https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-dod.md
Title: Azure Government DoD Overview | Microsoft Docs description: Features and guidance for using Azure Government DoD regions-
-cloud: gov
- Previously updated : 08/04/2021
+recommendations: false
Last updated : 01/25/2022 # Department of Defense (DoD) in Azure Government ## Overview
-Azure Government is used by the US Department of Defense (DoD) entities to deploy a broad range of workloads and solutions. Some of these workloads can be subject to the DoD Cloud Computing [Security Requirements Guide](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html) (SRG) Impact Level 4 (IL4) and Impact Level 5 (IL5) restrictions. Azure Government was the first hyperscale cloud services platform to be awarded a DoD IL5 Provisional Authorization (PA) by the Defense Information Systems Agency (DISA). For more information about DISA and DoD IL5, see [Department of Defense (DoD) Impact Level 5](/azure/compliance/offerings/offering-dod-il5) compliance documentation.
+Azure Government is used by the US Department of Defense (DoD) entities to deploy a broad range of workloads and solutions. Some of these workloads can be subject to the DoD Cloud Computing [Security Requirements Guide](https://public.cyber.mil/dccs/dccs-documents/) (SRG) Impact Level 4 (IL4) and Impact Level 5 (IL5) restrictions. Azure Government was the first hyperscale cloud services platform to be awarded a DoD IL5 Provisional Authorization (PA) by the Defense Information Systems Agency (DISA). For more information about DISA and DoD IL5, see [Department of Defense (DoD) Impact Level 5](/azure/compliance/offerings/offering-dod-il5) compliance documentation.
Azure Government offers the following regions to DoD mission owners and their partners: |Regions|Relevant authorizations|# of IL5 PA services| ||||
-|US Gov Arizona </br> US Gov Texas </br> US Gov Virginia|FedRAMP High, DoD IL4, DoD IL5|138|
-|US DoD Central </br> US DoD East|DoD IL5|64|
+|US Gov Arizona </br> US Gov Texas </br> US Gov Virginia|FedRAMP High, DoD IL4, DoD IL5|145|
+|US DoD Central </br> US DoD East|DoD IL5|60|
**Azure Government regions** (US Gov Arizona, US Gov Texas, and US Gov Virginia) are intended for US federal (including DoD), state, and local government agencies, and their partners. **Azure Government DoD regions** (US DoD Central and US DoD East) are reserved for exclusive DoD use. Separate DoD IL5 PAs are in place for Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) vs. Azure Government DoD regions (US DoD Central and US DoD East). The primary differences between DoD IL5 PAs that are in place for Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) vs. Azure Government DoD regions (US DoD Central and US DoD East) are: -- **IL5 compliance scope:** Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) have many more services authorized provisionally at DoD IL5, which in turn enables DoD mission owners and their partners to deploy more realistic applications in these regions. For a complete list of services in scope for DoD IL5 PA in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). For a complete list of Azure Government DoD regions (US DoD Central and US DoD East) services in scope for DoD IL5 PA, see [Azure Government DoD regions IL5 audit scope](#azure-government-dod-regions-il5-audit-scope).
+- **IL5 compliance scope:** Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) have many more services authorized provisionally at DoD IL5, which in turn enables DoD mission owners and their partners to deploy more realistic applications in these regions.
+ - For a complete list of services in scope for DoD IL5 PA in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
+ - For a complete list of services in scope for DoD IL5 in Azure Government DoD regions (US DoD Central and US DoD East), see [Azure Government DoD regions IL5 audit scope](#azure-government-dod-regions-il5-audit-scope) in this article.
- **IL5 configuration:** Azure Government DoD regions (US DoD Central and US DoD East) are physically isolated from the rest of Azure Government and reserved for exclusive DoD use. Therefore, no extra configuration is needed in DoD regions when deploying Azure services intended for IL5 workloads. In contrast, some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md). > [!NOTE]
The following services are in scope for DoD IL5 PA in Azure Government DoD regio
- [Azure Media Services](https://azure.microsoft.com/services/media-services/) - [Azure Monitor](https://azure.microsoft.com/services/monitor/) - [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/)-- [Azure Scheduler](../scheduler/index.yml)
+- [Azure Scheduler](../scheduler/index.yml) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/))
- [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) - [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) - [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/)-- [Azure SQL Database](https://azure.microsoft.com/products/azure-sql/database/) (incl. [Azure SQL Managed Instance](https://azure.microsoft.com/products/azure-sql/managed-instance/))
+- [Azure SQL Database](https://azure.microsoft.com/products/azure-sql/database/)
- [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) - [Batch](https://azure.microsoft.com/services/batch/)-- [Dataverse](/powerapps/maker/data-platform/data-platform-intro) (formerly Common Data Service)-- [Dynamics 365 Customer Service](/dynamics365/customer-service/overview)-- [Dynamics 365 Field Service](/dynamics365/field-service/overview)-- [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview)-- [Dynamics 365 Sales](/dynamics365/sales-enterprise/overview)
+- [Dynamics 365 Customer Engagement](/dynamics365/admin/admin-guide)
- [Event Grid](https://azure.microsoft.com/services/event-grid/) - [Event Hubs](https://azure.microsoft.com/services/event-hubs/) - [Import/Export](https://azure.microsoft.com/services/storage/import-export/) - [Key Vault](https://azure.microsoft.com/services/key-vault/) - [Load Balancer](https://azure.microsoft.com/services/load-balancer/)-- [Microsoft Azure porta](https://azure.microsoft.com/features/azure-portal/)
+- [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)
- [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) (formerly Microsoft Defender Advanced Threat Protection) - [Microsoft Graph](/graph/overview) - [Microsoft Stream](/stream/overview) - [Network Watcher](https://azure.microsoft.com/services/network-watcher/)-- [Network Watcher Traffic Analytics](../network-watcher/traffic-analytics.md) - [Power Apps](/powerapps/powerapps-overview) - [Power Apps portal](https://powerapps.microsoft.com/portals/) - [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow)
The following services are in scope for DoD IL5 PA in Azure Government DoD regio
- [Service Bus](https://azure.microsoft.com/services/service-bus/) - [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) - [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md))-- [Storage: Disks (incl. Managed Disks)](https://azure.microsoft.com/services/storage/disks/)
+- [Storage: Disks](https://azure.microsoft.com/services/storage/disks/) (incl. [managed disks](../virtual-machines/managed-disks-overview.md))
- [Storage: Files](https://azure.microsoft.com/services/storage/files/) - [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) - [Storage: Tables](https://azure.microsoft.com/services/storage/tables/)
The following services are in scope for DoD IL5 PA in Azure Government DoD regio
## Frequently asked questions
-### What are the Azure Government DoD regions? 
+### What are the Azure Government DoD regions?
Azure Government DoD regions (US DoD Central and US DoD East) are physically separated Azure Government regions reserved for exclusive use by the DoD.
-### What is the difference between Azure Government and the Azure Government DoD regions? 
+### What is the difference between Azure Government and Azure Government DoD regions?
Azure Government is a US government community cloud providing services for federal, state and local government customers, tribal entities, and other entities subject to various US government regulations such as CJIS, ITAR, and others. All Azure Government regions are designed to meet the security requirements for DoD IL5 workloads. Azure Government DoD regions (US DoD Central and US DoD East) achieve DoD IL5 tenant separation requirements by being dedicated exclusively to DoD. In Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), some services require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md). ### How do Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) support IL5 data?
-Azure provides [extensive support for tenant isolation](./azure-secure-isolation-guidance.md) across compute, storage, and networking services to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications. Moreover, some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
+Azure provides [extensive support for tenant isolation](./azure-secure-isolation-guidance.md) across compute, storage, and networking services to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications. Some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
-### What is IL5 data? 
+### What is IL5 data?
IL5 accommodates controlled unclassified information (CUI) that requires a higher level of protection than is afforded by IL4 as deemed necessary by the information owner, public law, or other government regulations. IL5 also supports unclassified National Security Systems (NSS). This impact level accommodates NSS and CUI categorizations based on CNSSI 1253 up to moderate confidentiality and moderate integrity (M-M-x). For more information on IL5 data, see [DoD IL5 overview](/azure/compliance/offerings/offering-dod-il5#dod-il5-overview).
-### What is the difference between IL4 and IL5 data?  
+### What is the difference between IL4 and IL5 data?
IL4 data is controlled unclassified information (CUI) that may include data subject to export control, protected health information, and other data requiring explicit CUI designation (for example, For Official Use Only, Law Enforcement Sensitive, and Sensitive Security Information). IL5 data includes CUI that requires a higher level of protection as deemed necessary by the information owner, public law, or government regulation. IL5 data is inclusive of unclassified National Security Systems.
-### Do Azure Government regions support classified data such as IL6? 
-No. Azure Government regions support only unclassified data up to and including IL5. In contrast, IL6 data is defined as classified information up to Secret, and can be accommodated in [Azure Government Secret](https://azure.microsoft.com/global-infrastructure/government/national-security/).
+### Do Azure Government regions support classified data such as IL6?
+No. Azure Government regions support only unclassified data up to and including IL5. In contrast, [IL6 data](/azure/compliance/offerings/offering-dod-il6) is defined as classified information up to Secret, and can be accommodated in [Azure Government Secret](https://azure.microsoft.com/global-infrastructure/government/national-security/).
-### What DoD organizations can use Azure Government? 
+### What DoD organizations can use Azure Government?
All Azure Government regions are built to support DoD customers, including: - The Office of the Secretary of Defense
All Azure Government regions are built to support DoD customers, including:
- The unified combatant commands - Other offices, agencies, activities, and commands under the control or supervision of any approved entity named above
-### What services are part of your IL5 authorization scope? 
-For a complete list of services in scope for DoD IL5 PA in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). For a complete list of services in scope for DoD IL5 PA in Azure Government DoD regions (US DoD Central and US DoD East), see [Azure Government DoD regions IL5 audit scope](#azure-government-dod-regions-il5-audit-scope).
+### What services are part of your IL5 authorization scope?
+For a complete list of services in scope for DoD IL5 PA in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). For a complete list of services in scope for DoD IL5 PA in Azure Government DoD regions (US DoD Central and US DoD East), see [Azure Government DoD regions IL5 audit scope](#azure-government-dod-regions-il5-audit-scope) in this article.
## Next steps
For a complete list of services in scope for DoD IL5 PA in Azure Government regi
- [How to buy Azure Government](https://azure.microsoft.com/global-infrastructure/government/how-to-buy/) - [Get started with Azure Government](./documentation-government-get-started-connect-with-portal.md) - [Azure Government Blog](https://devblogs.microsoft.com/azuregov/)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
+- [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
+- [DoD Impact Level 4](/azure/compliance/offerings/offering-dod-il4)
+- [DoD Impact Level 5](/azure/compliance/offerings/offering-dod-il5)
+- [DoD Impact Level 6](/azure/compliance/offerings/offering-dod-il6)
azure-government Documentation Government Welcome https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-welcome.md
Title: Azure Government Overview
-description: Overview of Azure Government capabilities, including security and compliance capabilities applicable to federal, state, and local government organizations and their partners
+description: Overview of Azure Government capabilities
Previously updated : 10/01/2021
+recommendations: false
Last updated : 01/25/2022 # What is Azure Government?
-US government agencies or their partners interested in cloud services that meet government security and compliance requirements, can be confident that [Microsoft Azure Government](https://azure.microsoft.com/global-infrastructure/government/) provides world-class [security, protection, and compliance services](../compliance/index.yml). Azure Government delivers a dedicated cloud enabling government agencies and their partners to transform mission-critical workloads to the cloud. Azure Government services handle data that is subject to various government regulations and requirements, such as FedRAMP, DoD IL4 and IL5, CJIS, IRS 1075, ITAR, CMMC, NIST 800-171, and others. To provide you with the highest level of security and compliance, Azure Government uses physically isolated datacenters and networks located in the US only.
+US government agencies or their partners interested in cloud services that meet government security and compliance requirements can be confident that [Microsoft Azure Government](https://azure.microsoft.com/global-infrastructure/government/) provides world-class [security and compliance](../compliance/index.yml). Azure Government delivers a dedicated cloud enabling government agencies and their partners to transform mission-critical workloads to the cloud. Azure Government services handle data that is subject to various [US government regulations and requirements](/azure/compliance/offerings/offering-cjis). To provide you with the highest level of security and compliance, Azure Government uses physically isolated datacenters and networks located in the US only.
-Azure Government customers (US federal, state, and local government or their partners) are subject to validation of eligibility. If there is a question about eligibility for Azure Government, you should consult your account team. To sign up for trial, request your [trial subscription](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=Trial).
+Azure Government customers (US federal, state, and local government or their partners) are subject to validation of eligibility. If there's a question about eligibility for Azure Government, you should consult your account team. To sign up for trial, request your [trial subscription](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=Trial).
The following video provides a good introduction to Azure Government:
The following video provides a good introduction to Azure Government:
## Compare Azure Government and global Azure
-Azure Government uses same underlying technologies as global Azure, which includes the core components of [Infrastructure-as-a-Service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas/), [Platform-as-a-Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/), and [Software-as-a-Service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/). Azure Government includes geo-synchronous data replication, auto scaling, network, storage, data management, identity management, and many other services. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). Services available in Azure Government are listed by category and whether they are Generally Available or available through Preview.
+Azure Government offers [Infrastructure-as-a-Service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas/), [Platform-as-a-Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/), and [Software-as-a-Service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/) cloud service models based on the same underlying technologies as global Azure. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). Services available in Azure Government are listed by category and whether they're Generally Available or available through Preview.
-There are some key differences that developers working on applications hosted in Azure Government must be aware of. For detailed information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government and once you connect you will mostly have the same experience as in global Azure. To see feature variations and usage limitations between Azure Government and global Azure, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md) and click on individual service.
+There are some key differences that developers working on applications hosted in Azure Government must be aware of. For detailed information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government; once you connect, you'll have mostly the same experience as in global Azure. To see feature variations and usage limitations between Azure Government and global Azure, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md) and select the individual service.
## Region pairing
To start using Azure Government, first check out [Guidance for developers](./doc
## Next steps - [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)-- [Get started with Azure Government](./documentation-government-get-started-connect-with-portal.md)
+- [Azure Government security](./documentation-government-plan-security.md)
- View [YouTube videos](https://www.youtube.com/playlist?list=PLLasX02E8BPA5IgCPjqWms5ne5h4briK7)
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups.md
Each action is made up of the following properties:
For information on how to use Azure Resource Manager templates to configure action groups, see [Action group Resource Manager templates](./action-groups-create-resource-manager-template.md).
-Action Group is **Global** service, therefore there is no dependency on a specific Azure region. Requests from client can be processed by action group service in any region, which means, if one region of service is down, the traffic will be routed and process by other regions automatically. Being a *global service* it helps client not to worry about **disaster recovery**.
+Action Group is a **global** service, so there's no dependency on a specific Azure region. Requests from a client can be processed by the action group service in any region; if one region of the service is down, traffic is automatically routed to and processed by other regions. As a *global service*, Action Group helps clients avoid worrying about **disaster recovery**.
## Create an action group by using the Azure portal
Under **Instance details**:
b. **Name**: Enter a unique name for the action.
- c. **Details**: Based on the action type, enter a webhook URI, Azure app, ITSM connection, or Automation runbook. For ITSM Action, additionally specify **Work Item** and other fields your ITSM tool requires.
+ c. **Details**: Based on the action type, enter a webhook URI, Azure app, ITSM connection, or Automation Runbook. For ITSM Action, additionally specify **Work Item** and other fields your ITSM tool requires.
d. **Common alert schema**: You can choose to enable the [common alert schema](./alerts-common-schema.md), which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor.
When creating or updating an action group in the Azure portal, you can **test**
![Select Sample Type + notification + action type](./media/action-groups/test-sample-action-group.png)
-1. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you will not get test results.
+1. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you won't get test results.
![Stop running test](./media/action-groups/stop-running-test.png)
When creating or updating an action group in the Azure portal, you can **test**
![Test sample failed](./media/action-groups/test-sample-failed.png) You can use the information in the **Error details section**, to understand the issue so that you can edit and test the action group again.
-To allow you to check the action groups are working as expected before you enable them in a production environment, you will get email and SMS alerts with the subject: Test.
+To let you check that action groups are working as expected before you enable them in a production environment, you'll get email and SMS alerts with the subject: Test.
All the details and links in Test email notifications for the alerts fired are a sample set for reference.
All the details and links in Test email notifications for the alerts fired are a
> You may have a limited number of actions in a test Action Group. See the [rate limiting information](./alerts-rate-limiting.md) article. > > You can opt in or opt out to the common alert schema through Action Groups, on the portal. You can [find common schema samples for test action groups for all the sample types](./alerts-common-schema-test-action-definitions.md).
+> You can opt in to or out of the non-common alert schema through Action Groups in the portal. You can [find non-common schema alert definitions](./alerts-non-common-schema-definitions.md).
## Manage your action groups
After you create an action group, you can view **Action groups** by selecting **
* Add, edit, or remove actions. * Delete the action group.
-## Action specific information
+## Action-specific information
> [!NOTE] > See [Subscription Service Limits for Monitoring](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-monitor-limits) for numeric limits on each of the items below.
Emails will be sent from the following email addresses. Ensure that your email f
You may have a limited number of email actions in an Action Group. See the [rate limiting information](./alerts-rate-limiting.md) article. ### Email Azure Resource Manager Role
-Send email to the members of the subscription's role. Email will only be sent to **Azure AD user** members of the role. Email will not be sent to Azure AD groups or service principals.
+Send email to the members of the subscription's role. Email will only be sent to **Azure AD user** members of the role. Email won't be sent to Azure AD groups or service principals.
A notification email is sent only to the *primary email* address.
-If you are not receiving Notifications on your *primary email*, then you can try following steps:
+If you aren't receiving notifications on your *primary email*, try the following steps:
-1. In Azure portal go to *Active Directory*.
+1. In Azure portal, go to *Active Directory*.
2. Click **All users** (in the left pane); you'll see the list of users (in the right pane). 3. Select the user whose *primary email* information you want to review.
- :::image type="content" source="media/action-groups/active-directory-user-profile.png" alt-text="Example on how to review user profile." border="true":::
+ :::image type="content" source="media/action-groups/active-directory-user-profile.png" alt-text="Example of how to review user profile." border="true":::
4. In the user profile, under **Contact Info**, if the **Email** field is blank, click the *edit* button at the top, add your *primary email*, and hit the *save* button at the top.
- :::image type="content" source="media/action-groups/active-directory-add-primary-email.png" alt-text="Example on how to add primary email." border="true":::
+ :::image type="content" source="media/action-groups/active-directory-add-primary-email.png" alt-text="Example of how to add primary email." border="true":::
You may have a limited number of email actions in an Action Group. See the [rate limiting information](./alerts-rate-limiting.md) article.
-While setting up *Email ARM Role* you need to make sure below 3 conditions are met:
+While setting up *Email ARM Role*, make sure the following three conditions are met:
1. The type of the entity being assigned to the role needs to be **"User"**. 2. The assignment needs to be done at the **subscription** level.
An event hub action publishes notifications to [Azure Event Hubs](~/articles/eve
### Function Calls an existing HTTP trigger endpoint in [Azure Functions](../../azure-functions/functions-get-started.md). To handle a request, your endpoint must handle the HTTP POST verb.
-When defining the Function action the the Function's httptrigger endpoint and access key are saved in the action definition. For example: `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=this_is_access_key`. If you change the access key for the function you will need to remove and recreate the Function action in the Action Group.
+When defining the Function action, the Function's httptrigger endpoint and access key are saved in the action definition. For example: `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=this_is_access_key`. If you change the access key for the function, you'll need to remove and recreate the Function action in the Action Group.
You may have a limited number of Function actions in an Action Group.
The Action Groups Secure Webhook action enables you to take advantage of Azure A
1. Create an Azure AD Application for your protected web API. See [Protected web API: App registration](../../active-directory/develop/scenario-protected-web-api-app-registration.md). - Configure your protected API to be [called by a daemon app](../../active-directory/develop/scenario-protected-web-api-app-registration.md#if-your-web-api-is-called-by-a-daemon-app).
-2. Enable Action Groups to use your Azure AD Application.
+2. Enable Action Group to use your Azure AD Application.
> [!NOTE] > You must be a member of the [Azure AD Application Administrator role](../../active-directory/roles/permissions-reference.md#all-roles) to execute this script.
Connect-AzureAD -TenantId "<provide your Azure AD tenant ID here>"
# This is your Azure AD Application's ObjectId. $myAzureADApplicationObjectId = "<the Object ID of your Azure AD Application>"
-# This is the Action Groups Azure AD AppId
+# This is the Action Group Azure AD AppId
$actionGroupsAppId = "461e8683-5575-4561-ac7f-899cc907d62a" # This is the name of the new role we will add to your Azure AD Application
Write-Host $myAppRoles
# Create the role if it doesn't exist if ($myAppRoles -match "ActionGroupsSecureWebhook") {
- Write-Host "The Action Groups role is already defined.`n"
+ Write-Host "The Action Group role is already defined.`n"
} else { $myServicePrincipal = Get-AzureADServicePrincipal -Filter ("appId eq '" + $myApp.AppId + "'") # Add our new role to the Azure AD Application
- $newRole = CreateAppRole -Name $actionGroupRoleName -Description "This is a role for Action Groups to join"
+ $newRole = CreateAppRole -Name $actionGroupRoleName -Description "This is a role for Action Group to join"
$myAppRoles.Add($newRole) Set-AzureADApplication -ObjectId $myApp.ObjectId -AppRoles $myAppRoles }
if ($actionGroupsSP -match "AzNS AAD Webhook")
} else {
- # Create a service principal for the Action Groups Azure AD Application and add it to the role
+ # Create a service principal for the Action Group Azure AD Application and add it to the role
$actionGroupsSP = New-AzureADServicePrincipal -AppId $actionGroupsAppId }
See the [rate limiting information](./alerts-rate-limiting.md) and [SMS alert be
You may have a limited number of SMS actions in an Action Group. > [!NOTE]
-> If the Azure portal action group user interface does not let you select your country/region code, then SMS is not supported for your country/region. If your country/region code is not available, you can vote to have your country/region added at [user voice](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, a work around is to have your action group call a webhook to a third-party SMS provider with support in your country/region.
+> If the Azure portal Action Group user interface does not let you select your country/region code, then SMS is not supported for your country/region. If your country/region code is not available, you can vote to have your country/region added at [user voice](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, a workaround is to have your Action Group call a webhook to a third-party SMS provider with support in your country/region.
Pricing for supported countries/regions is listed in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
See the [rate limiting information](./alerts-rate-limiting.md) article for addit
You may have a limited number of Voice actions in an Action Group. > [!NOTE]
-> If the Azure portal action group user interface does not let you select your country/region code, then voice calls are not supported for your country/region. If your country/region code is not available, you can vote to have your country/region added at [user voice](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, a work around is to have your action group call a webhook to a third-party voice call provider with support in your country/region.
-> Only Country code supported today in Azure portal action group for Voice Notification is +1(United States).
+> If the Azure portal Action Group user interface does not let you select your country/region code, then voice calls are not supported for your country/region. If your country/region code is not available, you can vote to have your country/region added at [user voice](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, a workaround is to have your Action Group call a webhook to a third-party voice call provider with support in your country/region.
+> The only country code currently supported for voice notifications in the Azure portal Action Group is +1 (United States).
Pricing for supported countries/regions is listed in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
Pricing for supported countries/regions is listed in the [Azure Monitor pricing
> If the webhook endpoint can't handle the alert context information on its own, you can use a solution like a [Logic App action](./action-groups-logic-app.md) for a custom manipulation of the alert context information to match the webhook's expected data format.
Webhooks are processed using the following rules:
-- A webhook call is attempted a maximum of 3 times.
+- A webhook call is attempted a maximum of three times.
- The call will be retried if a response is not received within the timeout period or one of the following HTTP status codes is returned: 408, 429, 503, or 504.
- The first call will wait 10 seconds for a response.
- The second and third attempts will wait 30 seconds for a response.
-- After the 3 attempts to call the webhook have failed no action group will call the endpoint for 15 minutes.
+- After the three attempts to call the webhook have failed, no Action Group will call the endpoint for 15 minutes.
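The retry rules above can be sketched as a small client-side policy. This is only an illustration of the documented behavior, not the service's implementation; `post_webhook` is a hypothetical callable that sends the request with a given response timeout and returns the HTTP status code, raising `TimeoutError` if no response arrives in time.

```python
# Status codes that trigger a retry, per the rules above.
RETRYABLE = {408, 429, 503, 504}
# Per-attempt response timeouts: 10 s for the first call, 30 s for the next two.
TIMEOUTS = [10, 30, 30]

def call_with_retries(post_webhook):
    """post_webhook(timeout) -> HTTP status code; raises TimeoutError on no response."""
    for timeout in TIMEOUTS:
        try:
            status = post_webhook(timeout)
        except TimeoutError:
            continue  # no response within the timeout period: retry
        if status not in RETRYABLE:
            return status  # delivered, or a non-retryable failure
    return None  # all three attempts failed; the endpoint is skipped for 15 minutes
```

A call that keeps returning a retryable code (or keeps timing out) exhausts all three attempts and gives up.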
Please see [Action Group IP Addresses](../app/ip-addresses.md) for source IP address ranges.
azure-monitor Alerts Log Webhook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-log-webhook.md
Last updated 09/22/2020
## Sample payloads This section shows sample payloads for webhooks for log alerts. The sample payloads include examples when the payload is standard and when it's custom.
-### Log alert for all resources logs (from API version `2020-08-01`)
+### Log alert for all resources logs (from API version `2021-08-01`)
The following sample payload is for a standard webhook when it's used for log alerts based on resources logs:
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-log.md
You can also [create log alert rules using Azure Resource Manager templates](../
This section describes how to manage log alerts using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The quickest way to start using the Azure CLI is through [Azure Cloud Shell](../../cloud-shell/overview.md). For this article, we'll use Cloud Shell. > [!NOTE]
-> Azure CLI support is only available for the scheduledQueryRules API version `2020-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described below. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch to use CLI. [Learn more about switching](./alerts-log-api-switch.md).
+> Azure CLI support is only available for the scheduledQueryRules API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described below. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch to use CLI. [Learn more about switching](./alerts-log-api-switch.md).
1. In the [portal](https://portal.azure.com/), select **Cloud Shell**.
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
> [!NOTE] > Since the [bin()](/azure/data-explorer/kusto/query/binfunction) can result in uneven time intervals, the alert service will automatically convert the [bin()](/azure/data-explorer/kusto/query/binfunction) function to a [binat()](/azure/data-explorer/kusto/query/binatfunction) function with appropriate time at runtime, to ensure results with a fixed point. > [!NOTE]
- > Split by alert dimensions is only available for the current scheduledQueryRules API. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch. [Learn more about switching](./alerts-log-api-switch.md). Resource centric alerting at scale is only supported in the API version `2020-08-01` and above.
+ > Split by alert dimensions is only available for the current scheduledQueryRules API. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch. [Learn more about switching](./alerts-log-api-switch.md). Resource centric alerting at scale is only supported in the API version `2021-08-01` and above.
:::image type="content" source="media/alerts-log/aggregate-on.png" alt-text="Aggregate on.":::
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)] > [!NOTE]
-> PowerShell is not currently supported in API version `2020-08-01`.
+> PowerShell is not currently supported in API version `2021-08-01`.
Use the PowerShell cmdlets listed below to manage rules with the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules).
azure-monitor Alerts Non Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-non-common-schema-definitions.md
+
+ Title: Non-common alert schema definitions in Azure Monitor for Test Action Group
+description: Understanding the non-common alert schema definitions for Azure Monitor for Test Action group
++ Last updated : 01/25/2022++
+# Non-common alert schema definitions for Test Action Group (Preview)
+
+This article describes the non-common alert schema definitions for Azure Monitor, including those for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks.
+
+## What is the non-common alert schema?
+
+The non-common alert schema lets you customize the consumption experience for alert notifications in Azure. Historically, the three alert types in Azure (metric, log, and activity log) have had their own email templates, webhook schemas, and so on.
+
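Because each alert type carries its own payload shape, a receiver typically branches on fields it can see in the samples in this article: metric and activity log payloads carry a top-level `schemaId`, while log alert V1 payloads are flat and carry an `AlertType` field instead. A minimal sketch (the returned category labels are hypothetical, not part of any schema):

```python
def classify_alert(payload: dict) -> str:
    """Return a rough alert category for a non-common schema payload.

    Branches on fields visible in the sample payloads: "schemaId" for
    metric and activity log alerts, "AlertType" for log alerts V1.
    """
    schema_id = payload.get("schemaId")
    if schema_id == "AzureMonitorMetricAlert":
        return "metric"
    if schema_id == "Microsoft.Insights/activityLogs":
        return "activity-log"
    if "AlertType" in payload:
        return "log-v1"
    return "unknown"
```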
+## Alert context
+
+### Metric alerts - Static threshold
+
+**Sample values**
+```json
+{
+ "schemaId": "AzureMonitorMetricAlert",
+ "data": {
+ "version": "2.0",
+ "properties": {
+ "customKey1": "value1",
+ "customKey2": "value2"
+ },
+ "status": "Activated",
+ "context": {
+ "timestamp": "2021-11-15T09:35:12.9703687Z",
+ "id": "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/microsoft.insights/metricAlerts/test-metricAlertRule",
+ "name": "test-metricAlertRule",
+ "description": "Alert rule description",
+ "conditionType": "SingleResourceMultipleMetricCriteria",
+ "severity": "3",
+ "condition": {
+ "windowSize": "PT5M",
+ "allOf": [
+ {
+ "metricName": "Transactions",
+ "metricNamespace": "Microsoft.Storage/storageAccounts",
+ "operator": "GreaterThan",
+ "threshold": "0",
+ "timeAggregation": "Total",
+ "dimensions": [
+ {
+ "name": "ApiName",
+ "value": "GetBlob"
+ }
+ ],
+ "metricValue": 100,
+ "webTestName": null
+ }
+ ]
+ },
+ "subscriptionId": "11111111-1111-1111-1111-111111111111",
+ "resourceGroupName": "test-RG",
+ "resourceName": "test-storageAccount",
+ "resourceType": "Microsoft.Storage/storageAccounts",
+ "resourceId": "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.Storage/storageAccounts/test-storageAccount",
+ "portalLink": "https://portal.azure.com/#resource/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.Storage/storageAccounts/test-storageAccount"
+ }
+ }
+}
+```
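A minimal sketch of consuming a payload shaped like the sample above, pulling each fired criterion out of `data.context.condition.allOf`. The field names are taken from the sample; `summarize_metric_alert` is a hypothetical helper, not part of any SDK, and error handling is omitted:

```python
def summarize_metric_alert(payload: dict) -> list:
    """Summarize each fired criterion from an AzureMonitorMetricAlert payload."""
    context = payload["data"]["context"]
    lines = []
    for criterion in context["condition"]["allOf"]:
        lines.append(
            f"{context['name']}: {criterion['metricName']} "
            f"{criterion['operator']} {criterion['threshold']} "
            f"(observed {criterion['metricValue']})"
        )
    return lines
```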
+
+### Metric alerts - Dynamic threshold
+**Sample values**
+```json
+{
+ "schemaId": "AzureMonitorMetricAlert",
+ "data": {
+ "version": "2.0",
+ "properties": {
+ "customKey1": "value1",
+ "customKey2": "value2"
+ },
+ "status": "Activated",
+ "context": {
+ "timestamp": "2021-11-15T09:35:24.3468506Z",
+ "id": "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.insights/metricalerts/test-metricAlertRule",
+ "name": "test-metricAlertRule",
+ "description": "Alert rule description",
+ "conditionType": "DynamicThresholdCriteria",
+ "severity": "3",
+ "condition": {
+ "windowSize": "PT15M",
+ "allOf": [
+ {
+ "alertSensitivity": "Low",
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": 3,
+ "minFailingPeriodsToAlert": 3
+ },
+ "ignoreDataBefore": null,
+ "metricName": "Transactions",
+ "metricNamespace": "Microsoft.Storage/storageAccounts",
+ "operator": "GreaterThan",
+ "threshold": "0.3",
+ "timeAggregation": "Average",
+ "dimensions": [],
+ "metricValue": 78.09,
+ "webTestName": null
+ }
+ ]
+ },
+ "subscriptionId": "11111111-1111-1111-1111-111111111111",
+ "resourceGroupName": "test-RG",
+ "resourceName": "test-storageAccount",
+ "resourceType": "Microsoft.Storage/storageAccounts",
+ "resourceId": "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.Storage/storageAccounts/test-storageAccount",
+ "portalLink": "https://portal.azure.com/#resource/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.Storage/storageAccounts/test-storageAccount"
+ }
+ }
+}
+```
+### Log alerts
+#### `monitoringService` = `Log Alerts V1 - Metric`
+
+**Sample values**
+```json
+{
+ "SubscriptionId": "11111111-1111-1111-1111-111111111111",
+ "AlertRuleName": "test-logAlertRule-v1-metricMeasurement",
+ "SearchQuery": "Heartbeat | summarize AggregatedValue=count() by bin(TimeGenerated, 5m)",
+ "SearchIntervalStartTimeUtc": "2021-11-15T15:16:49Z",
+ "SearchIntervalEndtimeUtc": "2021-11-16T15:16:49Z",
+ "AlertThresholdOperator": "Greater Than",
+ "AlertThresholdValue": 0,
+ "ResultCount": 2,
+ "SearchIntervalInSeconds": 86400,
+ "LinkToSearchResults": "https://portal.azure.com#@aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%2F11111111-1111-1111-1111-111111111111%2FresourceGroups%2Ftest-RG%2Fproviders%2FMicrosoft.OperationalInsights%2Fworkspaces%2Ftest-logAnalyticsWorkspace%22%7D%5D%7D/q/aBcDeFgHi%2BWqaBcDeFgHiMqsSlVwTE8vSk1PLElNCUvMKU2aBcDeFgHiaBcDeFgHiaBcDeFgHiaBcDeFgHiaBcDeFgHi/prettify/1/timespan/2021-11-15T15%3a16%3a49.0000000Z%2f2021-11-16T15%3a16%3a49.0000000Z",
+ "LinkToFilteredSearchResultsUI": "https://portal.azure.com#@aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%2F11111111-1111-1111-1111-111111111111%2FresourceGroups%2Ftest-RG%2Fproviders%2FMicrosoft.OperationalInsights%2Fworkspaces%2Ftest-logAnalyticsWorkspace%22%7D%5D%7D/q/aBcDeFgHiaBcDeFgHiaBcDeFgHiTP1DtWhcTfIApUfTx0dp%2BOPOhDKsHR%2FFeJXsaBcDeFgHiaBcDeFgHiaBcDeFgHiaBcDeFgHiaBcDeFgHiaBcDeFgHiRI9mhc%3D/prettify/1/timespan/2021-11-15T15%3a16%3a49.0000000Z%2f2021-11-16T15%3a16%3a49.0000000Z",
+ "LinkToSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb/query?query=Heartbeat%20%0A%7C%20summarize%20AggregatedValue%3Dcount%28%29%20by%20bin%28TimeGenerated%2C%205m%29&timespan=2021-11-15T15%3a16%3a49.0000000Z%2f2021-11-16T15%3a16%3a49.0000000Z",
+ "LinkToFilteredSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb/query?query=Heartbeat%20%0A%7C%20summarize%20AggregatedValue%3Dcount%28%29%20by%20bin%28TimeGenerated%2C%205m%29%7C%20where%20todouble%28AggregatedValue%29%20%3E%200&timespan=2021-11-15T15%3a16%3a49.0000000Z%2f2021-11-16T15%3a16%3a49.0000000Z",
+ "Description": "Alert rule description",
+ "Severity": "3",
+ "SearchResult": {
+ "tables": [
+ {
+ "name": "PrimaryResult",
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "AggregatedValue",
+ "type": "long"
+ }
+ ],
+ "rows": [
+ [
+ "2021-11-16T10:56:49Z",
+ 11
+ ],
+ [
+ "2021-11-16T11:56:49Z",
+ 11
+ ]
+ ]
+ }
+ ],
+ "dataSources": [
+ {
+ "resourceId": "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.operationalinsights/workspaces/test-logAnalyticsWorkspace",
+ "region": "eastus",
+ "tables": [
+ "Heartbeat"
+ ]
+ }
+ ]
+ },
+ "WorkspaceId": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "ResourceId": "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.OperationalInsights/workspaces/test-logAnalyticsWorkspace",
+ "AlertType": "Metric measurement",
+ "Dimensions": []
+}
+```
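The V1 payload carries query results as parallel `columns` and `rows` arrays inside `SearchResult.tables`, so a consumer typically zips them back into records. A minimal sketch, assuming the shape shown in the sample above (`search_result_rows` is a hypothetical helper):

```python
def search_result_rows(payload: dict) -> list:
    """Flatten the SearchResult tables of a log alert V1 payload into dicts."""
    records = []
    for table in payload["SearchResult"]["tables"]:
        # Column names line up positionally with the values in each row.
        names = [column["name"] for column in table["columns"]]
        for row in table["rows"]:
            records.append(dict(zip(names, row)))
    return records
```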
+
+#### `monitoringService` = `Log Alerts V1 - Numresults`
+
+**Sample values**
+```json
+{
+ "SubscriptionId": "11111111-1111-1111-1111-111111111111",
+ "AlertRuleName": "test-logAlertRule-v1-numResults",
+ "SearchQuery": "Heartbeat",
+ "SearchIntervalStartTimeUtc": "2021-11-15T15:15:24Z",
+ "SearchIntervalEndtimeUtc": "2021-11-16T15:15:24Z",
+ "AlertThresholdOperator": "Greater Than",
+ "AlertThresholdValue": 0,
+ "ResultCount": 1,
+ "SearchIntervalInSeconds": 86400,
+ "LinkToSearchResults": "https://portal.azure.com#@aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%2F11111111-1111-1111-1111-111111111111%2FresourceGroups%2Ftest-RG%2Fproviders%2FMicrosoft.OperationalInsights%2Fworkspaces%2Ftest-logAnalyticsWorkspace%22%7D%5D%7D/q/aBcDeFgHi%2ABCDE%3D%3D/prettify/1/timespan/2021-11-15T15%3a15%3a24.0000000Z%2f2021-11-16T15%3a15%3a24.0000000Z",
+ "LinkToFilteredSearchResultsUI": "https://portal.azure.com#@aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%2F11111111-1111-1111-1111-111111111111%2FresourceGroups%2Ftest-RG%2Fproviders%2FMicrosoft.OperationalInsights%2Fworkspaces%2Ftest-logAnalyticsWorkspace%22%7D%5D%7D/q/aBcDeFgHi%2ABCDE%3D%3D/prettify/1/timespan/2021-11-15T15%3a15%3a24.0000000Z%2f2021-11-16T15%3a15%3a24.0000000Z",
+ "LinkToSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb/query?query=Heartbeat%0A&timespan=2021-11-15T15%3a15%3a24.0000000Z%2f2021-11-16T15%3a15%3a24.0000000Z",
+ "LinkToFilteredSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb/query?query=Heartbeat%0A&timespan=2021-11-15T15%3a15%3a24.0000000Z%2f2021-11-16T15%3a15%3a24.0000000Z",
+ "Description": "Alert rule description",
+ "Severity": "3",
+ "SearchResult": {
+ "tables": [
+ {
+ "name": "PrimaryResult",
+ "columns": [
+ {
+ "name": "TenantId",
+ "type": "string"
+ },
+ {
+ "name": "Computer",
+ "type": "string"
+ },
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ }
+ ],
+ "rows": [
+ [
+ "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "test-computer",
+ "2021-11-16T12:00:00Z"
+ ]
+ ]
+ }
+ ],
+ "dataSources": [
+ {
+ "resourceId": "/subscriptions/11111111-1111-1111-1111-111111111111/resourcegroups/test-RG/providers/microsoft.operationalinsights/workspaces/test-logAnalyticsWorkspace",
+ "region": "eastus",
+ "tables": [
+ "Heartbeat"
+ ]
+ }
+ ]
+ },
+ "WorkspaceId": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "ResourceId": "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.OperationalInsights/workspaces/test-logAnalyticsWorkspace",
+ "AlertType": "Number of results"
+}
+```
+
+### Activity log alerts
+
+#### `monitoringService` = `Activity Log - Administrative`
+
+**Sample values**
+```json
+{
+ "schemaId": "Microsoft.Insights/activityLogs",
+ "data": {
+ "status": "Activated",
+ "context": {
+ "activityLog": {
+ "authorization": {
+ "action": "Microsoft.Compute/virtualMachines/restart/action",
+ "scope": "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.Compute/virtualMachines/test-VM"
+ },
+ "channels": "Operation",
+ "claims": "{}",
+ "caller": "user-email@domain.com",
+ "correlationId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+ "description": "",
+ "eventSource": "Administrative",
+ "eventTimestamp": "2021-11-16T08:27:36.1836909+00:00",
+ "eventDataId": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "level": "Informational",
+ "operationName": "Microsoft.Compute/virtualMachines/restart/action",
+ "operationId": "cccccccc-cccc-cccc-cccc-cccccccccccc",
+ "properties": {
+ "eventCategory": "Administrative",
+ "entity": "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.Compute/virtualMachines/test-VM",
+ "message": "Microsoft.Compute/virtualMachines/restart/action",
+ "hierarchy": "22222222-2222-2222-2222-222222222222/CnAIOrchestrationServicePublicCorpprod/33333333-3333-3333-3333-3333333303333/44444444-4444-4444-4444-444444444444/55555555-5555-5555-5555-555555555555/11111111-1111-1111-1111-111111111111"
+ },
+ "resourceId": "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.Compute/virtualMachines/test-VM",
+ "resourceGroupName": "test-RG",
+ "resourceProviderName": "Microsoft.Compute",
+ "status": "Succeeded",
+ "subStatus": "",
+ "subscriptionId": "11111111-1111-1111-1111-111111111111",
+ "submissionTimestamp": "2021-11-16T08:29:00.141807+00:00",
+ "resourceType": "Microsoft.Compute/virtualMachines"
+ }
+ },
+ "properties": {
+ "customKey1": "value1",
+ "customKey2": "value2"
+ }
+ }
+}
+```
+
+#### `monitoringService` = `ServiceHealth`
+
+**Sample values**
+```json
+{
+ "schemaId": "Microsoft.Insights/activityLogs",
+ "data": {
+ "status": "Activated",
+ "context": {
+ "activityLog": {
+ "channels": "Admin",
+ "correlationId": "11223344-1234-5678-abcd-aabbccddeeff",
+ "description": "This alert rule will trigger when there are updates to a service issue impacting subscription <name>.",
+ "eventSource": "ServiceHealth",
+ "eventTimestamp": "2021-11-17T05:34:44.5778226+00:00",
+ "eventDataId": "12345678-1234-1234-1234-1234567890ab",
+ "level": "Warning",
+ "operationName": "Microsoft.ServiceHealth/incident/action",
+ "operationId": "12345678-abcd-efgh-ijkl-abcd12345678",
+ "properties": {
+ "title": "Test Action Group - Test Service Health Alert",
+ "service": "Azure Service Name",
+ "region": "Global",
+ "communication": "<p><strong>Summary of impact</strong>:&nbsp;This is the impact summary.</p>\n<p><br></p>\n<p><strong>Preliminary Root Cause</strong>: This is the preliminary root cause.</p>\n<p><br></p>\n<p><strong>Mitigation</strong>:&nbsp;Mitigation description.</p>\n<p><br></p>\n<p><strong>Next steps</strong>: These are the next steps.&nbsp;</p>\n<p><br></p>\n<p>Stay informed about Azure service issues by creating custom service health alerts: <a href=\"https://aka.ms/ash-videos\" rel=\"noopener noreferrer\" target=\"_blank\">https://aka.ms/ash-videos</a> for video tutorials and <a href=\"https://aka.ms/ash-alerts%20for%20how-to%20documentation\" rel=\"noopener noreferrer\" target=\"_blank\">https://aka.ms/ash-alerts for how-to documentation</a>.</p>\n<p><br></p>",
+ "incidentType": "Incident",
+ "trackingId": "ABC1-DEF",
+ "impactStartTime": "2021-11-16T20:00:00.0000000Z",
+ "impactMitigationTime": "2021-11-17T01:00:00.0000000Z",
+ "impactedServices": "[{\"ImpactedRegions\":[{\"RegionName\":\"Global\"}],\"ServiceName\":\"Azure Service Name\"}]",
+ "impactedServicesTableRows": "<tr>\r\n<td align='center' style='padding: 5px 10px; border-right:1px solid black; border-bottom:1px solid black'>Azure Service Name</td>\r\n<td align='center' style='padding: 5px 10px; border-bottom:1px solid black'>Global<br></td>\r\n</tr>\r\n",
+ "defaultLanguageTitle": "Test Action Group - Test Service Health Alert",
+ "defaultLanguageContent": "<p><strong>Summary of impact</strong>:&nbsp;This is the impact summary.</p>\n<p><br></p>\n<p><strong>Preliminary Root Cause</strong>: This is the preliminary root cause.</p>\n<p><br></p>\n<p><strong>Mitigation</strong>:&nbsp;Mitigation description.</p>\n<p><br></p>\n<p><strong>Next steps</strong>: These are the next steps.&nbsp;</p>\n<p><br></p>\n<p>Stay informed about Azure service issues by creating custom service health alerts: <a href=\"https://aka.ms/ash-videos\" rel=\"noopener noreferrer\" target=\"_blank\">https://aka.ms/ash-videos</a> for video tutorials and <a href=\"https://aka.ms/ash-alerts%20for%20how-to%20documentation\" rel=\"noopener noreferrer\" target=\"_blank\">https://aka.ms/ash-alerts for how-to documentation</a>.</p>\n<p><br></p>",
+ "stage": "Resolved",
+ "communicationId": "11223344556677",
+ "isHIR": "false",
+ "isSynthetic": "True",
+ "impactType": "SubscriptionList",
+ "version": "0.1.1"
+ },
+ "status": "Resolved",
+ "subscriptionId": "11111111-1111-1111-1111-111111111111",
+ "submissionTimestamp": "2021-11-17T01:23:45.0623172+00:00"
+ }
+ },
+ "properties": {
+ "customKey1": "value1",
+ "customKey2": "value2"
+ }
+ }
+}
+```
+
+#### `monitoringService` = `Resource Health`
+
+**Sample values**
+```json
+{
+ "schemaId": "Microsoft.Insights/activityLogs",
+ "data": {
+ "status": "Activated",
+ "context": {
+ "activityLog": {
+ "channels": "Admin, Operation",
+ "correlationId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+ "eventSource": "ResourceHealth",
+ "eventTimestamp": "2021-11-16T09:50:20.406+00:00",
+ "eventDataId": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "level": "Informational",
+ "operationName": "Microsoft.Resourcehealth/healthevent/Activated/action",
+ "operationId": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
+ "properties": {
+ "title": "Rebooted by user",
+ "details": null,
+ "currentHealthStatus": "Unavailable",
+ "previousHealthStatus": "Available",
+ "type": "Downtime",
+ "cause": "UserInitiated"
+ },
+ "resourceId": "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/test-RG/providers/Microsoft.Compute/virtualMachines/test-VM",
+ "resourceGroupName": "test-RG",
+ "resourceProviderName": "Microsoft.Resourcehealth/healthevent/action",
+ "status": "Active",
+ "subscriptionId": "11111111-1111-1111-1111-111111111111",
+ "submissionTimestamp": "2021-11-16T09:54:08.5303319+00:00",
+ "resourceType": "MICROSOFT.COMPUTE/VIRTUALMACHINES"
+ }
+ },
+ "properties": {
+ "customKey1": "value1",
+ "customKey2": "value2"
+ }
+ }
+}
+```
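As a sketch of how a webhook receiver might pick routing fields out of payloads like the samples above, the common fields can be extracted with `jq` (the demo payload below is a trimmed stand-in, not a full alert):

```shell
# Demo payload standing in for one of the activity-log alert samples above:
cat > alert.json <<'EOF'
{
  "schemaId": "Microsoft.Insights/activityLogs",
  "data": {
    "status": "Activated",
    "context": { "activityLog": { "level": "Informational", "status": "Succeeded" } }
  }
}
EOF

# Pull out the fields a receiver typically routes on:
jq -r '[.schemaId, .data.status, .data.context.activityLog.level] | join(" | ")' alert.json
```

With the demo payload this prints `Microsoft.Insights/activityLogs | Activated | Informational`; the same paths apply to the full samples above.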
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-unified-log.md
requests
| where resultCode == "500" ``` -- **Time period / Aggregation granularity:** 15 minutes
+- **Aggregation granularity:** 15 minutes
- **Alert frequency:** 15 minutes - **Threshold value:** Greater than 0
For example, you want to monitor errors for multiple virtual machines running yo
This rule monitors if any virtual machine had error events in the last 15 minutes. Each virtual machine is monitored separately and will trigger actions individually. > [!NOTE]
-> Split by alert dimensions is only available for the current scheduledQueryRules API. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch. [Learn more about switching](./alerts-log-api-switch.md). Resource centric alerting at scale is only supported in the API version `2020-08-01` and above.
+> Split by alert dimensions is only available for the current scheduledQueryRules API. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch. [Learn more about switching](./alerts-log-api-switch.md). Resource centric alerting at scale is only supported in the API version `2021-08-01` and above.
## Alert logic definition
See this alert stateless evaluation example:
Stateful alerts fire once per incident and resolve. The alert rule resolves when the alert condition isn't met for 30 minutes for a specific evaluation period (to account for log ingestion delay), and for three consecutive evaluations to reduce noise if there are flapping conditions. For example, with a frequency of 5 minutes, the alert resolves after 40 minutes; with a frequency of 1 minute, the alert resolves after 32 minutes. The resolved notification is sent out via webhooks or email, and the status of the alert instance (called monitor state) in the Azure portal is also set to resolved.
-Stateful alerts feature is currently in preview in the Azure public cloud. You can set this using **Automatically resolve alerts** in the alert details section.
+Stateful alerts feature is currently in preview. You can set this using **Automatically resolve alerts** in the alert details section.
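The resolve-time arithmetic implied by the two examples above can be sketched as follows (the closed-form `30 + 2 × frequency` is inferred from those numbers, not stated explicitly by the service):

```shell
# Inferred sketch: the third of three consecutive healthy evaluations must land
# at or after the 30-minute window, so resolve time = 30 + 2 * frequency minutes.
for freq in 5 1; do
  echo "frequency ${freq}m -> alert resolves after $((30 + 2 * freq)) minutes"
done
```

This reproduces the 40-minute and 32-minute figures quoted above for 5-minute and 1-minute frequencies.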
## Location selection in log alerts
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-troubleshoot.md
We recommend the following two steps to resolve this issue:
If you see this exception after upgrading to Java agent version greater than 3.2.0, upgrading your network to resolve the new endpoint shown in the exception might resolve the exception. The reason for the difference between Application Insights versions is that versions greater than 3.2.0 point to the new ingestion endpoint `v2.1/track` compared to the older `v2/track`. The new ingestion endpoint automatically redirects you to the ingestion endpoint (new endpoint shown in exception) nearest to the storage for your Application Insights resource.
+## Missing cipher suites
+
+If the Application Insights Java agent detects that you do not have any of the cipher suites that are supported by the endpoints it connects to, it will alert you and link you here.
+
+### Background on cipher suites:
+Cipher suites come into play before a client application and server exchange information over an SSL/TLS connection. The client application initiates an SSL handshake. Part of that process involves notifying the server which cipher suites it supports. The server receives that information and compares the cipher suites supported by the client application with the algorithms it supports. If it finds a match, the server notifies the client application and a secure connection is established. If it does not find a match, the server refuses the connection.
+
+#### How to determine client side cipher suites:
+In this case, the client is the JVM on which your instrumented application is running. Starting from 3.2.5-BETA, Application Insights Java will log a warning message if missing cipher suites could be causing connection failures to one of the service endpoints.
+
+If using an earlier version of Application Insights Java, compile and run the following Java program to get the list of supported cipher suites in your JVM:
+
+```java
+import javax.net.ssl.SSLServerSocketFactory;
+
+public class Ciphers {
+ public static void main(String[] args) {
+ SSLServerSocketFactory ssf = (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
+ String[] defaultCiphers = ssf.getDefaultCipherSuites();
+ System.out.println("Default\tCipher");
+ for (int i = 0; i < defaultCiphers.length; ++i) {
+ System.out.print('*');
+ System.out.print('\t');
+ System.out.println(defaultCiphers[i]);
+ }
+ }
+}
+```
+Following are the cipher suites that are generally supported by the Application Insights endpoints:
+- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
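As a rough sketch of the comparison you would otherwise do by hand, the list above can be intersected with your JVM's output using standard shell tools (the sample JVM list below is an illustrative stand-in; substitute the real output of the Ciphers program):

```shell
# Endpoint-supported suites from the list above, sorted for comm(1):
printf '%s\n' \
  TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 \
  TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 \
  TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 \
  TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 | sort > endpoint_ciphers.txt

# Sample stand-in for the Ciphers program's output; in practice use:
#   java Ciphers | awk 'NR > 1 {print $2}' | sort > jvm_ciphers.txt
printf 'TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\n' | sort > jvm_ciphers.txt

# Lines common to both files are the suites usable for the TLS handshake:
comm -12 jvm_ciphers.txt endpoint_ciphers.txt
```

If `comm` prints nothing, there is no overlap and the handshake to the endpoint will fail.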
+
+#### How to determine server side cipher suites:
+In this case, the server side is the Application Insights ingestion endpoint or the Application Insights Live metrics endpoint. You can use an online tool like [SSLLABS](https://www.ssllabs.com/ssltest/analyze.html) to determine the expected cipher suites based on the endpoint URL.
+
+#### How to add the missing cipher suites:
+
+If using Java 9 or later, check that the JVM includes the `jdk.crypto.cryptoki` module in its jmods folder. Also, if you are building a custom Java runtime using `jlink`, make sure to include the same module.
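A minimal sketch of both checks, assuming `JAVA_HOME` points at a JDK 9+ installation (the module list passed to `jlink` is illustrative, not a complete runtime definition):

```shell
# Check whether the jdk.crypto.cryptoki module is present in the runtime's jmods folder:
ls "${JAVA_HOME:-/nonexistent}/jmods" 2>/dev/null | grep jdk.crypto.cryptoki \
  || echo "jdk.crypto.cryptoki not found - cipher suites may be missing"

# When building a custom runtime with jlink, include the module explicitly:
if command -v jlink >/dev/null 2>&1; then
  rm -rf /tmp/custom-runtime
  jlink --add-modules java.base,jdk.crypto.cryptoki --output /tmp/custom-runtime
fi
```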
+
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/monitor-azure-resource.md
Azure resources generate the following monitoring data:
## Menu options
-While you can access Azure Monitor features from the **Monitor** menu in the Azure portal, Azure Monitor features can be access directly from the menu for different Azure services. While different Azure services may have slightly different experiences, they share a common set of monitoring options in the Azure portal. This includes **Overview** and **Activity log** and multiple options in the **Monitoring** section of the menu.
+While you can access Azure Monitor features from the **Monitor** menu in the Azure portal, Azure Monitor features can be accessed directly from the menu for different Azure services. While different Azure services may have slightly different experiences, they share a common set of monitoring options in the Azure portal. This includes **Overview** and **Activity log** and multiple options in the **Monitoring** section of the menu.
:::image type="content" source="media/monitor-azure-resource/menu-01.png" lightbox="media/monitor-azure-resource/menu-01.png" alt-text="Monitor menu 1":::
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
You will need to create one or more Azure virtual machines that will be used to
> The [monitoring profiles](#create-sql-monitoring-profile) specifies what data you will collect from the different types of SQL you want to monitor. Each monitoring virtual machine can have only one monitoring profile associated with it. If you have a need for multiple monitoring profiles, then you need to create a virtual machine for each. ### Azure virtual machine requirements
-The Azure virtual machines has the following requirements.
+The Azure virtual machine has the following requirements:
-- Operating system: Ubuntu 18.04 -- Recommended minimum Azure virtual machine sizes: Standard_B2s (2 cpus, 4 GiB memory)
+- Operating system: Ubuntu 18.04 using Azure Marketplace [image](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.0001-com-ubuntu-pro-bionic). Custom images are not supported.
+- Recommended minimum Azure virtual machine sizes: Standard_B2s (2 CPUs, 4 GiB memory)
- Deployed in any Azure region [supported](../agents/azure-monitor-agent-overview.md#supported-regions) by the Azure Monitor agent, and meeting all Azure Monitor agent [prerequisites](../agents/azure-monitor-agent-install.md#prerequisites). > [!NOTE]
-> The Standard_B2s (2 cpus, 4 GiB memory) virtual machine size will support up to 100 connection strings. You shouldn't allocate more than 100 connections to a single virtual machine.
+> The Standard_B2s (2 CPUs, 4 GiB memory) virtual machine size will support up to 100 connection strings. You shouldn't allocate more than 100 connections to a single virtual machine.
Depending upon the network settings of your SQL resources, the virtual machines may need to be placed in the same virtual network as your SQL resources so they can make network connections to collect monitoring data.
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-preview.md
+---
+title: Preview features for Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs
+description: Provides a guide for using the preview features of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files.
+ms.date: 01/25/2022
+---
+# Preview features of Azure Application Consistent Snapshot tool
+
+> [!NOTE]
+> PREVIEWS ARE PROVIDED "AS-IS," "WITH ALL FAULTS," AND "AS AVAILABLE," AND ARE EXCLUDED FROM THE SERVICE LEVEL AGREEMENTS AND LIMITED WARRANTY
+> ref: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+
+This article provides a guide on setup and usage of the new features in preview for **AzAcSnap v5.1**. These new features can be used with Azure NetApp Files, Azure BareMetal, and now Azure Managed Disk. This guide should be read along with the documentation for the generally available version of AzAcSnap at [aka.ms/azacsnap](https://aka.ms/azacsnap).
+
+The four new preview features provided with AzAcSnap v5.1 are:
+- Oracle Database support
+- Backint Co-existence
+- Azure Managed Disk
+- RunBefore and RunAfter capability
+
+## Providing feedback
+
+Feedback on AzAcSnap, including this preview, can be provided [online](https://aka.ms/azacsnap-feedback).
+
+## Getting the AzAcSnap Preview snapshot tools
+
+Get the most recent version of the Preview [AzAcSnap Preview Installer](https://aka.ms/azacsnap-preview-installer) from Microsoft.
+
+The self-installation file has an associated [MD5 checksum file](https://aka.ms/azacsnap-preview-installer-checksum) to check the download integrity.
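For example, a download-integrity check with the checksum file can look like the following (the file names are illustrative; use the names of the files you actually downloaded):

```shell
# Demo files standing in for the real installer and its published MD5 checksum:
printf 'installer payload' > azacsnap_installer.run
md5sum azacsnap_installer.run > azacsnap_installer.run.md5

# The integrity check itself - md5sum -c reports OK when the hashes match:
md5sum -c azacsnap_installer.run.md5
```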
+
+First download the installer. Follow the steps in the main [get started](azacsnap-get-started.md) documentation to complete the install of AzAcSnap. Return
+to this document for details on using the preview features.
+
+## Oracle Database
+
+### Supported platforms and operating systems
+
+> [!NOTE]
+> Support for Oracle is a preview feature.
+> This section's content supplements the [What is Azure Application Consistent Snapshot tool](azacsnap-introduction.md) page.
+
+The following database platforms and operating systems are supported with this preview release:
+
+- **Databases**
+ - Oracle Database release 12 or later (refer to [Oracle VM images and their deployment on Microsoft Azure](/azure/virtual-machines/workloads/oracle/oracle-vm-solutions) for details)
+
+- **Operating Systems**
+ - Oracle Linux 7+
++
+### Enable communication with database
+
+> [!NOTE]
+> Support for Oracle is a preview feature.
+> This section's content supplements the [Install Azure Application Consistent Snapshot tool](azacsnap-installation.md) page.
+
+This section explains how to enable communication with the database. Ensure the database back-end you are using is correctly selected.
+
+# [Oracle](#tab/oracle)
+
+The snapshot tools communicate with the Oracle database and need a user with appropriate permissions to enable/disable backup mode. The following example
+shows the setup of the Oracle database user, the use of `mkstore` to create an Oracle Wallet, and the `sqlplus` configuration files required for
+communication with the Oracle database. The example commands set up a user (AZACSNAP) in the Oracle database; change the IP address, usernames, and passwords as appropriate:
+
+1. From the Oracle database installation
+
+ ```bash
+    su - oracle
+ sqlplus / AS SYSDBA
+ ```
+
+ ```output
+ SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 1 01:34:05 2021
+ Copyright (c) 1982, 2014, Oracle. All rights reserved.
+ Connected to:
+ Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
+ SQL>
+ ```
+
+1. Create the user
+
+ This example creates the AZACSNAP user.
+
+ ```sql
+ SQL> CREATE USER azacsnap IDENTIFIED BY password;
+ ```
+
+ ```output
+ User created.
+ ```
+
+1. Grant the user permissions - This example sets the permission for the AZACSNAP user to allow for putting the database in backup mode.
+
+ ```sql
+ SQL> GRANT CREATE SESSION TO azacsnap;
+ ```
+
+ ```output
+ Grant succeeded.
+ ```
++
+ ```sql
+ SQL> GRANT SYSBACKUP TO azacsnap;
+ ```
+
+ ```output
+ Grant succeeded.
+ ```
+
+ ```sql
+ SQL> connect azacsnap/password
+ ```
+
+ ```output
+ Connected.
+ ```
+
+ ```sql
+ SQL> quit
+ ```
+
+1. OPTIONAL - Prevent user's password from expiring
+
+    It may be necessary to disable password expiry for the user; without this change the user's password could expire, preventing snapshots from being taken correctly.
+
+ > [!NOTE]
+ > Check with corporate policy before making this change.
+
+ This example gets the password expiration for the AZACSNAP user:
+
+ ```sql
+ SQL> SELECT username, account_status,expiry_date,profile FROM dba_users WHERE username='AZACSNAP';
+ ```
+
+ ```output
+    USERNAME   ACCOUNT_STATUS  EXPIRY_DA  PROFILE
+    ---------  --------------  ---------  -------
+    AZACSNAP   OPEN            DD-MMM-YY  DEFAULT
+ ```
+
+ There are a few methods for disabling password expiry in the Oracle database, refer to your database administrator for guidance. One example is
+ by modifying the DEFAULT user's profile so the password life time is unlimited as follows:
+
+ ```sql
+ SQL> ALTER PROFILE default LIMIT PASSWORD_LIFE_TIME unlimited;
+ ```
+
+    After making this change, there should be no password expiry date for users with the DEFAULT profile.
+
+ ```sql
+ SQL> SELECT username, account_status,expiry_date,profile FROM dba_users WHERE username='AZACSNAP';
+ ```
+
+ ```output
+    USERNAME   ACCOUNT_STATUS  EXPIRY_DA  PROFILE
+    ---------  --------------  ---------  -------
+    AZACSNAP   OPEN                       DEFAULT
+ ```
++
+1. The Oracle Wallet provides a method to manage database credentials across multiple domains. This is accomplished by using a database connection string in
+ the datasource definition, which is resolved by an entry in the wallet. When used correctly, the Oracle Wallet makes having passwords in the datasource
+ configuration unnecessary.
+
+ This feature can be leveraged to use the Oracle TNS (Transparent Network Substrate) administrative file to hide the details of the database
+ connection string and instead use an alias. If the connection information changes, it is a matter of changing the `tnsnames.ora` file instead
+ of potentially many datasource definitions.
+
+    Set up the Oracle Wallet (change the password). This example uses the `mkstore` command from the Linux shell to set up the Oracle Wallet. These commands
+    are run on the Oracle database server using unique user credentials to avoid any impact on the running database. In this example, a new user (azacsnap)
+    is created and their environment variables configured appropriately.
+
+ > [!IMPORTANT]
+ > Be sure to create a unique user to generate the Oracle Wallet to avoid any impact on the running database.
+
+ 1. Run the following commands on the Oracle Database Server
+
+ 1. Get the Oracle environment variables to be used in setup. Run the following commands as the `root` user on the Oracle Database Server.
+
+ ```bash
+ su - oracle -c 'echo $ORACLE_SID'
+ ```
+
+ ```output
+ oratest1
+ ```
+
+ ```bash
+ su - oracle -c 'echo $ORACLE_HOME'
+ ```
+
+ ```output
+ /u01/app/oracle/product/19.0.0/dbhome_1
+ ```
+
+ 1. Create the Linux user to generate the Oracle Wallet and associated `*.ora` files using the output from the previous step
+
+ > [!NOTE]
+ > In these examples we are using the `bash` shell. If you are using a different shell (for example, csh), then ensure environment variables have been set correctly.
+
+ ```bash
+ useradd -m azacsnap
+ echo "export ORACLE_SID=oratest1" >> /home/azacsnap/.bash_profile
+ echo "export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1" >> /home/azacsnap/.bash_profile
+ echo "export TNS_ADMIN=/home/azacsnap" >> /home/azacsnap/.bash_profile
+ echo "export PATH=\$PATH:\$ORACLE_HOME/bin" >> /home/azacsnap/.bash_profile
+ ```
+
+ 1. As the new Linux user (`azacsnap`), create the wallet and `*.ora` files.
+
+ `su` to the user created in the previous step.
+
+ ```bash
+ sudo su - azacsnap
+ ```
+
+ Create the Oracle Wallet.
+
+ ```bash
+ mkstore -wrl $TNS_ADMIN/.oracle_wallet/ -create
+ ```
+
+ ```output
+ Oracle Secret Store Tool Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.
+
+ Enter password: <wallet_password>
+ Enter password again: <wallet_password>
+ ```
+
+ Add the connect string credentials to the Oracle Wallet. In the following example command: AZACSNAP is the ConnectString to be used by AzAcSnap; azacsnap
+ is the Oracle Database User; AzPasswd1 is the Oracle User's database password.
+
+ ```bash
+ mkstore -wrl $TNS_ADMIN/.oracle_wallet/ -createCredential AZACSNAP azacsnap AzPasswd1
+ ```
+
+ ```output
+ Oracle Secret Store Tool Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.
+
+ Enter wallet password: <wallet_password>
+ ```
+
+    Create the `tnsnames.ora` file. In the following example command: HOST should be set to the IP address of the Oracle Database Server; SID should be
+ set to the Oracle Database SID.
+
+ ```bash
+ echo "# Connection string
+ AZACSNAP=\"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.1)(PORT=1521))(CONNECT_DATA=(SID=oratest1)))\"
+ " > $TNS_ADMIN/tnsnames.ora
+ ```
+
+ Create the `sqlnet.ora` file.
+
+ ```bash
+ echo "SQLNET.WALLET_OVERRIDE = TRUE
+ WALLET_LOCATION=(
+ SOURCE=(METHOD=FILE)
+ (METHOD_DATA=(DIRECTORY=\$TNS_ADMIN/.oracle_wallet))
+ ) " > $TNS_ADMIN/sqlnet.ora
+ ```
+
+ Test the Oracle Wallet.
+
+ ```bash
+ sqlplus /@AZACSNAP as SYSBACKUP
+ ```
+
+ ```output
+ SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 12 00:25:32 2022
+ Version 19.3.0.0.0
+
+ Copyright (c) 1982, 2019, Oracle. All rights reserved.
+
+
+ Connected to:
+ Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ ```
+
+ ```sql
+ SELECT MACHINE FROM V$SESSION WHERE SID=1;
+ ```
+
+ ```output
+ MACHINE
+ -
+ oradb-19c
+ ```
+
+ ```sql
+ quit
+ ```
+
+ ```output
+ Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ ```
+
+ Create a ZIP file archive of the Oracle Wallet and `*.ora` files.
+
+ ```bash
+ cd $TNS_ADMIN
+ zip -r wallet.zip sqlnet.ora tnsnames.ora .oracle_wallet
+ ```
+
+ ```output
+ adding: sqlnet.ora (deflated 9%)
+ adding: tnsnames.ora (deflated 7%)
+ adding: .oracle_wallet/ (stored 0%)
+ adding: .oracle_wallet/ewallet.p12.lck (stored 0%)
+ adding: .oracle_wallet/ewallet.p12 (deflated 1%)
+ adding: .oracle_wallet/cwallet.sso.lck (stored 0%)
+ adding: .oracle_wallet/cwallet.sso (deflated 1%)
+ ```
+
+ 1. Copy the ZIP file to the target system (for example, the centralized virtual machine running AzAcSnap).
+
+ > [!NOTE]
+    > If deploying to a centralized virtual machine, it will need to have the Oracle Instant Client installed and set up so that the AzAcSnap user can
+    > run `sqlplus` commands. The Oracle Instant Client can be downloaded from https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html.
+    > In order for SQL\*Plus to run correctly, download both the required package (for example, Basic Light Package) and the optional SQL\*Plus tools package.
+
+ 1. Complete the following steps on the system running AzAcSnap.
+
+ 1. Deploy ZIP file copied from the previous step.
+
+ > [!IMPORTANT]
+ > This step assumes the user running AzAcSnap, by default `azacsnap`, already has been created using the AzAcSnap installer.
+
+ > [!NOTE]
+ > It's possible to leverage the `TNS_ADMIN` shell variable to allow for multiple Oracle targets by setting the unique shell variable value
+ > for each Oracle system as needed.
+
+ ```bash
+ export TNS_ADMIN=$HOME/ORACLE19c
+ mkdir $TNS_ADMIN
+ cd $TNS_ADMIN
+ unzip ~/wallet.zip
+ ```
+
+ ```output
+ Archive: wallet.zip
+ inflating: sqlnet.ora
+ inflating: tnsnames.ora
+ creating: .oracle_wallet/
+ extracting: .oracle_wallet/ewallet.p12.lck
+ inflating: .oracle_wallet/ewallet.p12
+ extracting: .oracle_wallet/cwallet.sso.lck
+ inflating: .oracle_wallet/cwallet.sso
+ ```
+
+ Check the files have been extracted correctly.
+
+ ```bash
+ ls
+ ```
+
+ ```output
+ sqlnet.ora tnsnames.ora wallet.zip
+ ```
+
+    Assuming all the previous steps have been completed correctly, it should be possible to connect to the database using the `/@AZACSNAP` connect string.
+
+ ```bash
+ sqlplus /@AZACSNAP as SYSBACKUP
+ ```
+
+ ```output
+ SQL*Plus: Release 21.0.0.0.0 - Production on Wed Jan 12 13:39:36 2022
+ Version 21.1.0.0.0
+
+ Copyright (c) 1982, 2020, Oracle. All rights reserved.
+
+
+ Connected to:
+ Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+    ```
+
+    ```sql
+ SQL> quit
+ ```
+
+ ```output
+ Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ ```
+
+ > [!IMPORTANT]
+ > The `$TNS_ADMIN` shell variable determines where to locate the Oracle Wallet and `*.ora` files, so it must be set before running `azacsnap` to ensure
+ > correct operation.
+
+ 1. Test the setup with AzAcSnap
+
+ After configuring AzAcSnap (for example, `azacsnap -c configure --configuration new`) with the Oracle connect string (for example, `/@AZACSNAP`), it should
+ be possible to connect to the Oracle database.
+
+    Check that the `$TNS_ADMIN` variable is set for the correct Oracle target system:
+
+ ```bash
+ ls -al $TNS_ADMIN
+ ```
+
+ ```output
+ total 16
+ drwxrwxr-x. 3 orasnap orasnap 84 Jan 12 13:39 .
+ drwx. 18 orasnap sapsys 4096 Jan 12 13:39 ..
+ drwx. 2 orasnap orasnap 90 Jan 12 13:23 .oracle_wallet
+ -rw-rw-r--. 1 orasnap orasnap 125 Jan 12 13:39 sqlnet.ora
+ -rw-rw-r--. 1 orasnap orasnap 128 Jan 12 13:24 tnsnames.ora
+ -rw-r--r--. 1 root root 2569 Jan 12 13:28 wallet.zip
+ ```
+
+ Run the `azacsnap` test command
+
+ ```bash
+ cd ~/bin
+ azacsnap -c test --test oracle --configfile ORACLE.json
+ ```
+
+ ```output
+ BEGIN : Test process started for 'oracle'
+ BEGIN : Oracle DB tests
+ PASSED: Successful connectivity to Oracle DB version 1903000000
+ END : Test process complete for 'oracle'
+ ```
+
+ > [!IMPORTANT]
+ > The `$TNS_ADMIN` variable must be setup correctly for `azacsnap` to run correctly, either by adding to the user's `.bash_profile` file,
+ > or by exporting it before each run (for example, `export TNS_ADMIN="/home/orasnap/ORACLE19c" ; cd /home/orasnap/bin ; ./azacsnap --configfile ORACLE19c.json
+ > -c backup --volume data --prefix hourly-ora19c --retention 12`)
+++
+### Configuring the database
+
+This section explains how to configure the database.
+
+# [Oracle](#tab/oracle)
+
+These changes must be applied to the Oracle Database to allow monitoring by the database administrator.
+
+1. Set up Oracle alert logging
+
+ Use the following Oracle SQL commands while connected to the database as SYSDBA to create a stored procedure under the default Oracle SYSBACKUP database account.
+ This will allow AzAcSnap to output messages to standard output using the PUT_LINE procedure in the DBMS_OUTPUT package, and also to the Oracle database `alert.log`
+ file (using the KSDWRT procedure in the DBMS_SYSTEM package).
+
+ ```bash
+    sqlplus / AS SYSDBA
+ ```
+
+ ```sql
+ GRANT EXECUTE ON DBMS_SYSTEM TO SYSBACKUP;
+ CREATE PROCEDURE sysbackup.azmessage(in_msg IN VARCHAR2)
+ AS
+ v_timestamp VARCHAR2(32);
+ BEGIN
+ SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS')
+ INTO v_timestamp FROM DUAL;
+    DBMS_OUTPUT.PUT_LINE(v_timestamp || ' - ' || in_msg);
+    SYS.DBMS_SYSTEM.KSDWRT(SYS.DBMS_SYSTEM.ALERT_FILE, in_msg);
+ END azmessage;
+ /
+ SHOW ERRORS
+ QUIT
+ ```
+++
+### Configuring AzAcSnap
+
+This section explains how to configure AzAcSnap for the specified database.
+
+> [!NOTE]
+> Support for Oracle is a preview feature.
+> This section's content supplements the [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md) page.
+
+### Details of required values
+
+The following sections provide detailed guidance on the various values required for the configuration file.
+
+# [Oracle](#tab/oracle)
+
+#### Oracle Database values for configuration
+
+When adding an Oracle database to the configuration, the following values are required:
+
+- **Oracle DB Server's Address** = The database server hostname or IP address.
+- **SID** = The database System ID.
+- **Oracle Connect String** = The Connect String used by `sqlplus` to connect to Oracle and enable/disable backup mode.
+++
+## Backint co-existence
+
+> [!NOTE]
+> Support for co-existence with SAP HANA's Backint interface is a Preview feature.
+> This section's content supplements the [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md) page.
+
+The [Azure Backup](/azure/backup/) service provides an alternative backup tool for SAP HANA, where database and log backups are streamed into the
+Azure Backup Service. Some customers would like to combine the streaming backint-based backups with regular snapshot-based backups. However, backint-based
+backups block other methods of backup, such as using a files-based backup or a storage snapshot-based backup (for example, AzAcSnap). Guidance is provided on
+the Azure Backup site on how to [Run SAP HANA native client backup to local disk on a database with Azure Backup enabled](/azure/backup/sap-hana-db-manage#run-sap-hana-native-client-backup-to-local-disk-on-a-database-with-azure-backup-enabled).
+
+The process described in the Azure Backup documentation has been implemented with AzAcSnap to automatically do the following steps:
+
+1. force a log backup flush to backint
+1. wait for running backups to complete
+1. disable the backint-based backup
+1. put SAP HANA into a consistent state for backup
+1. take a storage snapshot-based backup
+1. release SAP HANA
+1. re-enable the backint-based backup.
+
+By default this option is disabled, but it can be enabled by running `azacsnap -c configure --configuration edit` and answering 'y' (yes) to the question
+"Do you need AzAcSnap to automatically disable/enable backint during snapshot? (y/n) [n]". This sets the `autoDisableEnableBackint` value to `true` in the
+JSON configuration file (for example, `azacsnap.json`). It's also possible to change this value by editing the configuration file directly.
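A quick way to confirm the result without opening an editor is to grep the configuration file. A self-contained sketch (the JSON fragment below stands in for a real `azacsnap.json`):

```shell
# Simulate a configuration file carrying the flag (fragment only; a real
# azacsnap.json has the full structure shown in this article):
cat > azacsnap.json <<'EOF'
{ "database": [ { "hana": { "sid": "P40", "autoDisableEnableBackint": true } } ] }
EOF
# Confirm the value azacsnap will read:
grep -o '"autoDisableEnableBackint": *[a-z]*' azacsnap.json
```

This prints `"autoDisableEnableBackint": true` when the edit succeeded.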
+
+Refer to this partial snippet of the configuration file to see where this value is placed and the correct format:
+
+```output
+ "database": [
+ {
+ "hana": {
+ "serverAddress": "127.0.0.1",
+ "sid": "P40",
+ "instanceNumber": "00",
+ "hdbUserStoreName": "AZACSNAP",
+ "savePointAbortWaitSeconds": 600,
+ "autoDisableEnableBackint": true,
+```
+
+## Azure Managed Disk
+
+> [!NOTE]
+> Support for Azure Managed Disk as a storage back-end is a Preview feature.
+> This section's content supplements the [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md) page.
+
+Microsoft provides a number of storage options for deploying databases such as SAP HANA. Many of these are detailed on the
+[Azure Storage types for SAP workload](/azure/virtual-machines/workloads/sap/planning-guide-storage) web page. Additionally, there is a
+[Cost conscious solution with Azure premium storage](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#cost-conscious-solution-with-azure-premium-storage).
+
+AzAcSnap is able to take application-consistent database snapshots when deployed on this type of architecture (that is, a VM with Managed Disks). However, the setup
+for this platform is slightly more complicated because, in this scenario, I/O to the mountpoint must be blocked (using `xfs_freeze`) before taking a snapshot of the Managed
+Disks in the mounted Logical Volume(s).
+
+> [!IMPORTANT]
+> The Linux system must have `xfs_freeze` available to block disk I/O.
+
+Architecture at a high level:
+1. Azure Managed Disks are attached to the VM using the Azure portal.
+1. A Logical Volume is created from these Managed Disks.
+1. The Logical Volume is mounted to a Linux directory.
+1. A Service Principal should be created in the same way as for Azure NetApp Files in [AzAcSnap installation](azacsnap-installation.md?tabs=azure-netapp-files%2Csap-hana#enable-communication-with-storage).
+1. Install and configure AzAcSnap.
+ > [!NOTE]
+ > The configurator has a new option to define the mountpoint for the Logical Volume. This parameter is passed to `xfs_freeze` to block the I/O. Blocking happens
+ > after the database is put into backup mode and after the I/O cache has been flushed (dependent on the Linux kernel parameter `fs.xfs.xfssyncd_centisecs`).
+1. Install and configure `xfs_freeze` to be run as a non-privileged user:
+ 1. Create an executable file called `$HOME/bin/xfs_freeze` with the following content:
+
+ ```bash
+ #!/bin/sh
+ /usr/bin/sudo /usr/sbin/xfs_freeze $1 $2
+ ```
+
+ 1. Create a sudoers file called `/etc/sudoers.d/azacsnap` to allow the azacsnap user to run `xfs_freeze` with the following content:
+
+ ```bash
+ #
+ # What: azacsnap
+ # Why: Allow the azacsnap user to run "specific" commands with elevated privileges.
+ #
+ # User_Alias = SAP HANA Backup administrator user.
+ User_Alias AZACSNAP = azacsnap
+ #
+ AZACSNAP ALL=(ALL) NOPASSWD: /usr/sbin/xfs_freeze
+ ```
+
+ 1. Test that the `azacsnap` user can freeze and unfreeze I/O to the target mountpoint by running the following commands as the `azacsnap` user.
+
+ > [!NOTE]
+ > In this example each command is run twice: there is no command to confirm whether `xfs_freeze` has frozen I/O, so the error from the second run confirms that the first run worked.
+
+ Freeze I/O.
+
+ ```bash
+ su - azacsnap
+ xfs_freeze -f /hana/data
+ xfs_freeze -f /hana/data
+ ```
+
+ ```output
+ xfs_freeze: cannot freeze filesystem at /hana/data: Device or resource busy
+ ```
+
+ Unfreeze I/O.
+
+ ```bash
+ su - azacsnap
+ xfs_freeze -u /hana/data
+ xfs_freeze -u /hana/data
+ ```
+
+ ```output
+ xfs_freeze: cannot unfreeze filesystem mounted at /hana/data: Invalid argument
+ ```
+
+### Example configuration file
+
+Here is an example configuration file. Note the hierarchy of `dataVolume`, `mountPoint`, and `azureManagedDisks`:
+
+```output
+{
+ "version": "5.1 Preview",
+ "logPath": "./logs",
+ "securityPath": "./security",
+ "comments": [],
+ "database": [
+ {
+ "hana": {
+ "serverAddress": "127.0.0.1",
+ "sid": "P40",
+ "instanceNumber": "00",
+ "hdbUserStoreName": "AZACSNAP",
+ "savePointAbortWaitSeconds": 600,
+ "autoDisableEnableBackint": false,
+ "hliStorage": [],
+ "anfStorage": [],
+ "amdStorage": [
+ {
+ "dataVolume": [
+ {
+ "mountPoint": "/hana/data",
+ "azureManagedDisks": [
+ {
+ "resourceId": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/<disk01>",
+ "authFile": "azureauth.json"
+ },
+ {
+ "resourceId": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/<disk02>",
+ "authFile": "azureauth.json"
+ }
+ ]
+ }
+ ],
+ "otherVolume": []
+ }
+ ]
+ },
+ "oracle": null
+ }
+ ]
+}
+```
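After hand-editing a file with this much nesting, it's worth checking the JSON still parses before running `azacsnap`. A sketch (assumes `python3` is on the PATH; the file name matches the example above):

```shell
# Validate JSON syntax; json.tool itself prints nothing here on success.
if python3 -m json.tool azacsnap.json > /dev/null 2>&1; then
  echo "azacsnap.json: valid JSON"
else
  echo "azacsnap.json: missing or invalid JSON"
fi
```

The same check works for any of the configuration files referenced in this article.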
+
+### Virtual machine storage layout
+
+The storage hierarchy looks like the following example for SAP HANA:
+
+- SAP HANA Database data files:
+ ```output
+ /hana/data/mnt00001
+ ```
+
+- Mountpoint:
+ ```output
+ /dev/mapper/hanadata-hanadata on /hana/data type xfs
+ ```
+
+- Logical Volume
+ ```bash
+ lvdisplay
+ ```
+
+ ```output
+ Logical volume
+ LV Path /dev/hanadata/hanadata
+ LV Name hanadata
+ VG Name hanadata
+ ```
+
+- Volume Group
+ ```bash
+ vgdisplay
+ ```
+
+ ```output
+ Volume group
+ VG Name hanadata
+ System ID
+ Format lvm2
+ Metadata Areas 2
+ Metadata Sequence No 2
+ VG Access read/write
+ VG Status resizable
+ MAX LV 0
+ Cur LV 1
+ Open LV 1
+ Max PV 0
+ Cur PV 2
+ Act PV 2
+ VG Size 1023.99 GiB
+ ```
+
+- Physical Volume(s) (attached Azure Managed Disks)
+ ```bash
+ pvdisplay
+ ```
+
+ ```output
+ Physical volume
+ PV Name /dev/sdd
+ VG Name hanadata
+ PV Size 512.00 GiB / not usable 4.00 MiB
+ Allocatable yes (but full)
+ PE Size 4.00 MiB
+ Total PE 131071
+ Free PE 0
+ Allocated PE 131071
+ PV UUID K3yhxN-2713-lk4k-c3Pc-xOJQ-sCkD-8ZE6YX
+ Physical volume
+ PV Name /dev/sdc
+ VG Name hanadata
+ PV Size 512.00 GiB / not usable 4.00 MiB
+ Allocatable yes (but full)
+ PE Size 4.00 MiB
+ Total PE 131071
+ Free PE 0
+ Allocated PE 131071
+ PV UUID RNCylW-F3OG-G93c-1XL3-W6pw-M0XB-2mYFGV
+ ```
+
+Installing and setting up the Azure VM and Azure Managed Disks in this way follows Microsoft guidance to create LVM stripes of the Managed Disks on the VM.
+
+With the Azure VM setup as described, AzAcSnap can be run with Azure Managed Disks in a similar way to other supported storage back-ends (for example, Azure NetApp Files, Azure Large Instance (Bare Metal)). Because AzAcSnap communicates with the Azure Resource Manager to take snapshots, it also needs a Service Principal with the correct permissions to take managed disk snapshots.
+
+This capability allows customers to test and trial AzAcSnap on a smaller system and scale up to Azure NetApp Files and/or Azure Large Instance (Bare Metal).
+
+Supported `azacsnap` command functionality with Azure Managed Disks includes `configure`, `test`, `backup`, `delete`, and `details`, but not yet `restore`.
+
+### Restore from an Azure Managed Disk snapshot
+
+Although `azacsnap` currently lacks the `-c restore` option for Azure Managed Disks, it's possible to restore manually as follows:
+
+1. Create disks from the snapshots via the Azure portal.
+
+ > [!NOTE]
+ > Be sure to create the disks in the same Availability Zone as the target VM.
+
+1. Connect the disks to the VM via the Azure portal.
+1. Log in to the VM as the `root` user and scan for the newly attached disks using `dmesg` or `pvscan`:
+
+ 1. Using `dmesg`
+
+ ```bash
+ dmesg | tail -n30
+ ```
+
+ ```output
+ [2510054.252801] scsi 5:0:0:2: Direct-Access Msft Virtual Disk 1.0 PQ:0 ANSI: 5
+ [2510054.262358] scsi 5:0:0:2: Attached scsi generic sg4 type 0
+ [2510054.268514] sd 5:0:0:2: [sde] 1073741824 512-byte logical blocks: (550 GB/512 GiB)
+ [2510054.272583] sd 5:0:0:2: [sde] 4096-byte physical blocks
+ [2510054.275465] sd 5:0:0:2: [sde] Write Protect is off
+ [2510054.277915] sd 5:0:0:2: [sde] Mode Sense: 0f 00 10 00
+ [2510054.278566] sd 5:0:0:2: [sde] Write cache: disabled, read cache: enabled, supports DPO and FUA
+ [2510054.314269] sd 5:0:0:2: [sde] Attached SCSI disk
+ [2510054.573135] scsi 5:0:0:3: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
+ [2510054.579931] scsi 5:0:0:3: Attached scsi generic sg5 type 0
+ [2510054.584505] sd 5:0:0:3: [sdf] 1073741824 512-byte logical blocks: (550 GB/512 GiB)
+ [2510054.589293] sd 5:0:0:3: [sdf] 4096-byte physical blocks
+ [2510054.592237] sd 5:0:0:3: [sdf] Write Protect is off
+ [2510054.594735] sd 5:0:0:3: [sdf] Mode Sense: 0f 00 10 00
+ [2510054.594839] sd 5:0:0:3: [sdf] Write cache: disabled, read cache: enabled, supports DPO and FUA
+ [2510054.627310] sd 5:0:0:3: [sdf] Attached SCSI disk
+ ```
+
+ 1. Using `pvscan`
+
+ ```bash
+ saphana:~ # pvscan
+ ```
+
+ ```output
+ WARNING: scan found duplicate PVID RNCylWF3OGG93c1XL3W6pwM0XB2mYFGV on /dev/sde
+ WARNING: scan found duplicate PVID K3yhxN2713lk4kc3PcxOJQsCkD8ZE6YX on /dev/sdf
+ WARNING: Not using device /dev/sde for PV RNCylW-F3OG-G93c-1XL3-W6pw-M0XB-2mYFGV.
+ WARNING: Not using device /dev/sdf for PV K3yhxN-2713-lk4k-c3Pc-xOJQ-sCkD-8ZE6YX.
+ WARNING: PV RNCylW-F3OG-G93c-1XL3-W6pw-M0XB-2mYFGV prefers device /dev/sdc because device is used by LV.
+ WARNING: PV K3yhxN-2713-lk4k-c3Pc-xOJQ-sCkD-8ZE6YX prefers device /dev/sdd because device is used by LV.
+ PV /dev/sdd VG hanadata lvm2 [512.00 GiB / 0 free]
+ PV /dev/sdc VG hanadata lvm2 [512.00 GiB / 0 free]
+ Total: 2 [1023.99 GiB] / in use: 2 [1023.99 GiB] / in no VG: 0 [0 ]
+ ```
+
+1. Import a Volume Group Clone from the disks using `vgimportclone` as the `root` user:
+
+ ```bash
+ vgimportclone --basevgname hanadata_adhoc /dev/sde /dev/sdf
+ ```
+
+ ```output
+ WARNING: scan found duplicate PVID RNCylWF3OGG93c1XL3W6pwM0XB2mYFGV on /dev/sde
+ WARNING: scan found duplicate PVID K3yhxN2713lk4kc3PcxOJQsCkD8ZE6YX on /dev/sdf
+ WARNING: Not using device /dev/sde for PV RNCylW-F3OG-G93c-1XL3-W6pw-M0XB-2mYFGV.
+ WARNING: Not using device /dev/sdf for PV K3yhxN-2713-lk4k-c3Pc-xOJQ-sCkD-8ZE6YX.
+ WARNING: PV RNCylW-F3OG-G93c-1XL3-W6pw-M0XB-2mYFGV prefers device /dev/sdc because device is used by LV.
+ WARNING: PV K3yhxN-2713-lk4k-c3Pc-xOJQ-sCkD-8ZE6YX prefers device /dev/sdd because device is used by LV.
+ ```
+
+1. Activate the Logical Volume using `pvscan` and `vgchange` as the `root` user:
+
+ ```bash
+ pvscan --cache
+ ```
+
+ ```output
+ pvscan[23761] PV /dev/sdc online.
+ pvscan[23761] PV /dev/sdd online.
+ pvscan[23761] PV /dev/sde online.
+ pvscan[23761] PV /dev/sdf online.
+ ```
+
+ ```bash
+ vgchange -ay hanadata_adhoc
+ ```
+
+ ```output
+ 1 logical volume(s) in volume group "hanadata_adhoc" now active
+ ```
+
+1. Mount the logical volume as the `root` user.
+
+ > [!IMPORTANT]
+ > Use the `mount -o rw,nouuid` options, otherwise volume mounting will fail due to duplicate UUIDs on the VM.
+
+ ```bash
+ mount -o rw,nouuid /dev/hanadata_adhoc/hanadata /mnt/hanadata_adhoc
+ ```
+
+1. Access the data:
+
+ ```bash
+ ls /mnt/hanadata_adhoc/
+ ```
+
+ ```output
+ software write-test.txt
+ ```
++
+## RunBefore and RunAfter capability
+
+> [!NOTE]
+> Support for `azacsnap` to run shell commands before and after its main execution is a Preview feature.
+> This section's content supplements the [What is Azure Application Consistent Snapshot tool](azacsnap-introduction.md) page.
+
+AzAcSnap now has the capability to execute external commands before or after its main execution.
+
+`--runbefore` will run a shell command before the main execution of azacsnap and provides some of the azacsnap command-line parameters to the shell environment.
+By default, `azacsnap` will wait up to 30 seconds for the external shell command to complete before killing the process and returning to normal azacsnap execution.
+This can be overridden by adding a number to wait in seconds after a `%` character (for example, `--runbefore "mycommand.sh%60"` will wait up to 60 seconds for `mycommand.sh`
+to complete).
+
+`--runafter` will run a shell command after the main execution of azacsnap and provides some of the azacsnap command-line parameters to the shell environment.
+By default, `azacsnap` will wait up to 30 seconds for the external shell command to complete before killing the process and returning to normal azacsnap execution.
+This can be overridden by adding a number to wait in seconds after a `%` character (for example, `--runafter "mycommand.sh%60"` will wait for up to 60 seconds for `mycommand.sh`
+to complete).
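The `command%seconds` form described above can be decomposed with ordinary shell parameter expansion. A sketch of the documented syntax (not azacsnap's internal parser):

```shell
spec='mycommand.sh%60'
cmd="${spec%\%*}"          # everything before the last '%'
timeoutSecs="${spec#*%}"   # everything after the first '%'
echo "command=$cmd timeoutSecs=$timeoutSecs"
# prints: command=mycommand.sh timeoutSecs=60
```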
+
+The following list of environment variables is generated by `azacsnap` and passed to the shell forked to run the commands provided as parameters to `--runbefore` and `--runafter`:
+
+- `$azCommand` = the command option passed to -c (for example, backup, test, etc.).
+- `$azConfigFileName` = the configuration filename.
+- `$azPrefix` = the --prefix value.
+- `$azRetention` = the --retention value.
+- `$azSid` = the --dbsid value.
+- `$azSnapshotName` = the snapshot name generated by `azacsnap`.
+
+> [!NOTE]
+> There is only a value for `$azSnapshotName` in the `--runafter` option.
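A minimal `--runafter` script consuming these variables might look like the following sketch. The values are set by hand here to simulate what azacsnap exports; the snapshot name is a made-up placeholder:

```shell
# Simulated azacsnap-provided environment (placeholder values):
azCommand=backup
azPrefix=daily
azSnapshotName=daily.2022-01-25T0005.snapshot
# The runafter script itself just reads the exported variables:
echo "azacsnap command : $azCommand"
echo "snapshot prefix  : $azPrefix"
echo "snapshot created : ${azSnapshotName:-<unset for --runbefore>}"
```

The `${azSnapshotName:-...}` default makes the same script safe to reuse with `--runbefore`, where the snapshot name isn't set yet.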
+
+### Example usage
+
+An example use of this new feature is to upload a snapshot to Azure Blob storage for archival purposes using the `azcopy` tool ([Copy or move data to Azure Storage by using AzCopy](/azure/storage/common/storage-use-azcopy-v10)).
+
+The following crontab entry is a single line and runs `azacsnap` at five past midnight. Note the call to `snapshot-to-blob.sh` passing the snapshot name and snapshot prefix:
+
+```output
+5 0 * * * ( . ~/.bash_profile ; cd /home/azacsnap/bin ; ./azacsnap -c backup --volume data --prefix daily --retention 1 --configfile HANA.json --trim --ssl openssl --runafter 'env ; ./snapshot-to-blob.sh $azSnapshotName $azPrefix')
+```
+
+This example shell script has a special stanza at the end to prevent AzAcSnap from killing the external command due to the timeout described earlier. This allows a
+long-running command, such as uploading large files with `azcopy`, to run without being stopped prematurely.
+
+The snapshots need to be mounted on the system doing the copy, with at least read-only privileges. The base location of the mount point for the snapshots should
+be provided to the `sourceDir` variable in the script.
+
+```bash
+cat snapshot-to-blob.sh
+```
+
+```output
+#!/bin/sh
+# _START_ Change these
+saskeyFile="$HOME/bin/blob-credentials.saskey"
+# the snapshots need to be mounted locally for copying, put source directory here
+sourceDir=/mnt/saphana1/hana_data_PR1/.snapshot
+# _END_ Change these
+
+# do not change any of the following
+#
+if [ -r $saskeyFile ]; then
+ . $saskeyFile
+else
+  echo "Credential file '$saskeyFile' not found, exiting!"
+  exit 1
+fi
+
+# Log files
+archiveLog="logs/`basename $0`.log"
+echo "-- Started ($0 $snapshotName $prefix) @ `date "+%d-%h-%Y %H:%M"`" >> $archiveLog
+env >> $archiveLog
+#
+if [ "$1" = "" ] || [ "$2" = "" ]; then
+ echo "Usage: $0 <snapshotName> <prefix>"
+ exit 1
+fi
+
+blobStore="`echo $portalGeneratedSas | cut -f1 -d'?'`"
+blobSasKey="`echo $portalGeneratedSas | cut -f2 -d'?'`"
+snapshotName=$1
+prefix=$2
+
+# Archive naming (daily.1, daily.2, etc...)
+dayOfWeek=`date "+%u"`
+monthOfYear=`date "+%m"`
+archiveBlobTgz="$prefix.$dayOfWeek.tgz"
+
+runCmd(){
+ echo "[RUNCMD] $1" >> $archiveLog
+ bash -c "$1"
+}
+
+main() {
+ # Check sourceDir and snapshotName exist
+ if [ ! -d "$sourceDir/$snapshotName" ]; then
+ echo "$sourceDir/$snapshotName not found, exiting!" | tee -a $archiveLog
+ exit 1
+ fi
+
+ # Copy snapshot to blob store
+ echo " Starting copy of $snapshotName to $blobStore/$archiveBlobTgz" >> $archiveLog
+ runCmd "cd $sourceDir/$snapshotName && tar zcvf - * | azcopy cp \"$blobStore/$archiveBlobTgz?$blobSasKey\" --from-to PipeBlob && cd -"
+ echo " Completed copy of $snapshotName $blobStore/$archiveBlobTgz" >> $archiveLog
+ echo " Current list of files stored in $blobStore" >> $archiveLog
+ runCmd "azcopy list \"$blobStore?$blobSasKey\" --properties LastModifiedTime " >> $archiveLog
+
+ # Complete
+ echo "-- Finished ($0 $snapshotName $prefix) @ `date "+%d-%h-%Y %H:%M"`" >> $archiveLog
+ echo "--" >> $archiveLog
+ # col 12345678901234567890123456789012345678901234567890123456789012345678901234567890
+}
+
+# background ourselves so AzAcSnap exits cleanly
+echo "Backgrounding '$0 $@' to prevent blocking azacsnap"
+echo "Logging to $archiveLog"
+{
+ trap '' HUP
+ # the script
+ main
+} < /dev/null >> $archiveLog 2>&1 &
+```
+
+The file referenced by `saskeyFile` contains the following example SAS key (content changed for security):
+
+```bash
+cat blob-credentials.saskey
+```
+
+```output
+# we need a generated SAS key, get this from the portal with read,add,create,write,list permissions
+portalGeneratedSas="https://<targetstorageaccount>.blob.core.windows.net/<blob-store>?sp=racwl&st=2021-06-10T21:10:38Z&se=2021-06-11T05:10:38Z&spr=https&sv=2020-02-10&sr=c&sig=<key-material>"
+```
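The script splits `portalGeneratedSas` at the first `?` into the blob-store URL and the SAS key. The same split can be done with pure parameter expansion instead of the script's `cut` (a sketch; the URL below is a placeholder):

```shell
portalGeneratedSas="https://myaccount.blob.core.windows.net/backups?sp=racwl&sig=REDACTED"
blobStore="${portalGeneratedSas%%\?*}"   # everything before the first '?'
blobSasKey="${portalGeneratedSas#*\?}"   # everything after the first '?'
echo "$blobStore"
echo "$blobSasKey"
```

This prints the container URL on the first line and the raw SAS token on the second.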
+
+## Next steps
+
+- [Get started](azacsnap-get-started.md)
+- [Test AzAcSnap](azacsnap-cmd-ref-test.md)
+- [Back up using AzAcSnap](azacsnap-cmd-ref-backup.md)
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-release-notes.md
This page lists major changes made to AzAcSnap to provide new functionality or resolve defects.
+## Jan-2022
+
+AzAcSnap v5.1 Preview (Build: 20220125.85030) has been released with the following new features:
+
+- Oracle Database support
+- Backint Co-existence
+- Azure Managed Disk
+- RunBefore and RunAfter capability
+
+For details on the preview features and how to use them go to [AzAcSnap Preview](azacsnap-preview.md).
+ ## Aug-2021
-### AzAcSnap v5.0.2 (Build_20210827.19086) - Patch update to v5.0.1
+### AzAcSnap v5.0.2 (Build: 20210827.19086) - Patch update to v5.0.1
-AzAcSnap v5.0.2 (Build_20210827.19086) is provided as a patch update to the v5.0 branch with the following fixes and improvements:
+AzAcSnap v5.0.2 (Build: 20210827.19086) is provided as a patch update to the v5.0 branch with the following fixes and improvements:
-- Ignore `ssh` 255 exit codes. In some cases the `ssh` command, which is used to communicate with storage on Azure Large Instance, would emit an exit code of 255 when there were no errors or execution failures (refer `man ssh` "EXIT STATUS") - subsequently AzAcSnap would trap this as a failure and abort. With this update additional verification is done to validate correct execution, this includes parsing `ssh` STDOUT and STDERR for errors in addition to traditional Exit Code checks.
-- Fix installer hdbuserstore source path check. The installer would check for the existence of an incorrect source directory for the hdbuserstore for the user running the install - this is fixed to check for `~/.hdb`. This is applicable to systems (e.g. Azure Large Instance) where the hdbuserstore was pre-configured for the `root` user before installing `azacsnap`.
+- Ignore `ssh` 255 exit codes. In some cases the `ssh` command, which is used to communicate with storage on Azure Large Instance, would emit an exit code of 255 when there were no errors or execution failures (refer `man ssh` "EXIT STATUS") - subsequently AzAcSnap would trap this as a failure and abort. With this update additional verification is done to validate correct execution, this includes parsing `ssh` STDOUT and STDERR for errors in addition to traditional exit code checks.
+- Fix the installer's check for the location of the hdbuserstore. The installer would check for the existence of an incorrect source directory for the hdbuserstore for the user running the install - this is fixed to check for `~/.hdb`. This is applicable to systems (for example, Azure Large Instance) where the hdbuserstore was pre-configured for the `root` user before installing `azacsnap`.
- Installer now shows the version it will install/extract (if the installer is run without any arguments). Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer and review how to [get started](azacsnap-get-started.md).
Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer
AzAcSnap v5.0.1 (Build: 20210524.14837) is provided as a patch update to the v5.0 branch with the following fixes and improvements:
-- Improved exit code handling. In some cases an exit code of 0 (zero) was emitted even when there was an execution failure where it should have been non-zero. Exit codes should now only be zero on successfully running `azacsnap` to completion and non-zero in case of any failure. Additionally, AzAcSnap's internal error handling has been extended to capture and emit the exit code of the external commands (e.g. hdbsql, ssh) run by AzAcSnap, if they are the cause of the failure.
+- Improved exit code handling. In some cases an exit code of 0 (zero) was emitted, even when there was an execution failure and it should have been non-zero. Exit codes should now only be zero on successfully running `azacsnap` to completion and non-zero in case of any failure.
+- AzAcSnap's internal error handling has been extended to capture and emit the exit code of the external commands run by AzAcSnap.
## April-2021
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references to SAP on Azure solutions.
* [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications](../virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files.md) * [High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](../virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files.md) * [High availability for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files (SMB) for SAP applications](../virtual-machines/workloads/sap/high-availability-guide-windows-netapp-files-smb.md)
+* [Using Windows DFS-N to support flexible SAPMNT share creation for SMB-based file share](../virtual-machines/workloads/sap/high-availability-guide-windows-dfs.md)
* [High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux for SAP applications multi-SID guide](../virtual-machines/workloads/sap/high-availability-guide-rhel-multi-sid.md) ### SAP HANA
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na Previously updated : 01/14/2022 Last updated : 01/25/2022
Azure NetApp Files is updated regularly. This article provides a summary about t
* [LDAP search scope](configure-ldap-extended-groups.md#ldap-search-scope)
- You might be using the Unix security style with a dual-protocol volume or LDAP with extended groups features in combination with large LDAP topologies. In this case, you might encounter "access denied" errors on Linux clients when interacting with such Azure NetApp Files volumes. You can now use the **LDAP Search Scope** option to specify the LDAP search scope to avoid "access denied" errors.
+ You might be using the Unix security style with a dual-protocol volume or Lightweight Directory Access Protocol (LDAP) with extended groups features in combination with large LDAP topologies. In this case, you might encounter "access denied" errors on Linux clients when interacting with such Azure NetApp Files volumes. You can now use the **LDAP Search Scope** option to specify the LDAP search scope to avoid "access denied" errors.
* [Active Directory Domain Services (ADDS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) now generally available (GA)
Azure NetApp Files is updated regularly. This article provides a summary about t
* [NFS protocol version conversion](convert-nfsv3-nfsv41.md) (Preview)
- In some cases, you might need to transition from one NFS protocol version to another. For instance, when you want an existing NFS NFSv3 volume to take advantage of NFSv4.1 features, you might want to convert the protocol version from NFSv3 to NFSv4.1. Likewise, you might want to convert an existing NFSv4.1 volume to NFSv3 for performance or simplicity reasons. Azure NetApp Files now provides an option that enables you to convert an NFS volume between NFSv3 and NFSv4.1, without requiring creation of new volumes and performing data copies. The conversion operations preserve the data and update the volume export policies as part of the operation.
+ In some cases, you might need to transition from one NFS protocol version to another. For example, when you want an existing NFS NFSv3 volume to take advantage of NFSv4.1 features, you might want to convert the protocol version from NFSv3 to NFSv4.1. Likewise, you might want to convert an existing NFSv4.1 volume to NFSv3 for performance or simplicity reasons. Azure NetApp Files now provides an option that enables you to convert an NFS volume between NFSv3 and NFSv4.1. This option doesn't require creating new volumes or performing data copies. The conversion operations preserve the data and update the volume export policies as part of the operation.
* [Single-file snapshot restore](snapshots-restore-file-single.md) (Preview)
- Azure NetApp Files provides ways to quickly restore data from snapshots (mainly at the volume level). See [How Azure NetApp Files snapshots work](snapshots-introduction.md). Options for user file self-restore are available via client-side data copy from the `~snapshot` (Windows) or `.snapshot` (Linux) folders. These operations require data (files and directories) to traverse the network twice (upon read and write). As such, the operations are not time and resource efficient, especially with large data sets. If you do not want to restore the entire snapshot to a new volume, revert a volume, or copy large files across the network, you now have the option to use the single-file snapshot restore feature to restore individual files directly on the service from a volume snapshot without requiring data copy using an external client. This approach will drastically reduce RTO and network resource usage when restoring large files.
+ Azure NetApp Files provides ways to quickly restore data from snapshots (mainly at the volume level). See [How Azure NetApp Files snapshots work](snapshots-introduction.md). Options for user file self-restore are available via client-side data copy from the `~snapshot` (Windows) or `.snapshot` (Linux) folders. These operations require data (files and directories) to traverse the network twice (upon read and write). As such, the operations aren't time and resource efficient, especially with large data sets. If you don't want to restore the entire snapshot to a new volume, revert a volume, or copy large files across the network, you can use the single-file snapshot restore feature to restore individual files directly on the service from a volume snapshot without requiring data copy via an external client. This approach will drastically reduce RTO and network resource usage when restoring large files.
* Features that are now generally available (GA)
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Application volume group for SAP HANA](application-volume-group-introduction.md) (Preview)
- Application volume group (AVG) for SAP HANA enables you to deploy all volumes required to install and operate an SAP HANA database according to best practices, including the use of proximity placement group (PPG) with VMs to achieve automated, low-latency deployments. Application volume group for SAP HANA has implemented many technical improvements that simplify and standardize the entire process to help you streamline volume deployments for SAP HANA.
+ Application volume group (AVG) for SAP HANA enables you to deploy all volumes required to install and operate an SAP HANA database according to best practices, including the use of proximity placement group (PPG) with VMs to achieve automated, low-latency deployments. AVG for SAP HANA has implemented many technical improvements that simplify and standardize the entire process to help you streamline volume deployments for SAP HANA.
## October 2021
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Standard network features](configure-network-features.md) (Preview)
- Azure NetApp Files now supports **Standard** network features for volumes that customers have been asking for since the inception. This capability has been made possible by innovative hardware and software integration. Standard network features provide an enhanced virtual networking experience through a variety of features for a seamless and consistent experience along with security posture of all their workloads including Azure NetApp Files.
+ Azure NetApp Files now supports **Standard** network features for volumes that customers have been asking for since the inception. This capability has been made possible by innovative hardware and software integration. Standard network features provide an enhanced virtual networking experience through a variety of features for a seamless and consistent experience with security posture of all their workloads including Azure NetApp Files.
You can now choose *Standard* or *Basic* network features when creating a new Azure NetApp Files volume. Upon choosing Standard network features, you can take advantage of the following supported features for Azure NetApp Files volumes and delegated subnets: * Increased IP limits for the VNets with Azure NetApp Files volumes at par with VMs
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Azure NetApp Files backup](backup-introduction.md) (Preview)
- Azure NetApp Files online snapshots are now enhanced with backup of snapshots. With this new backup capability, you can vault your Azure NetApp Files snapshots to cost efficient and ZRS-enabled Azure storage in a fast and cost-effective way, further protecting your data from accidental deletion. Azure NetApp Files backup extends ONTAP's built-in snapshot technology. When snapshots are vaulted to Azure storage, only changed blocks relative to previously vaulted snapshots are copied and stored, in an efficient format. Vaulted snapshots, however, are still represented in full and can be restored to a new volume individually and directly, eliminating the need for an iterative, full-incremental recovery process. This advanced technology minimizes the amount of data required to store to and retrieve from Azure storage, therefore saving data transfer and storage costs. It also shortens the backup vaulting time, so you can achieve a smaller Restore Point Objective (RPO). You can now choose to keep a minimum number of snapshots online on the Azure NetApp Files service for the most immediate, near-instantaneous data recovery needs, and build up a longer history of snapshots at a lower cost for long-term retention purposes in the Azure NetApp Files backup vault. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details.
+ Azure NetApp Files online snapshots are now enhanced with backup of snapshots. With this new backup capability, you can vault your Azure NetApp Files snapshots to cost-efficient and ZRS-enabled Azure storage in a fast and cost-effective way. This approach further protects your data from accidental deletion.
+
+ Azure NetApp Files backup extends ONTAP's built-in snapshot technology. When snapshots are vaulted to Azure storage, only changed blocks relative to previously vaulted snapshots are copied and stored, in an efficient format. Vaulted snapshots are still represented in full. They can be restored to a new volume individually and directly, eliminating the need for an iterative, full-incremental recovery process. This advanced technology minimizes the amount of data required to store to and retrieve from Azure storage, therefore saving data transfer and storage costs. It also shortens the backup vaulting time, so you can achieve a smaller Restore Point Objective (RPO). You can keep a minimum number of snapshots online on the Azure NetApp Files service for the most immediate, near-instantaneous data-recovery needs. In doing so, you can build up a longer history of snapshots at a lower cost for long-term retention in the Azure NetApp Files backup vault.
+
+ For more information, see [How Azure NetApp Files snapshots work](snapshots-introduction.md).
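As a toy illustration of the incremental idea described above (a simplified sketch, not the actual ONTAP/SnapMirror implementation), only the blocks that differ between two snapshot images need to be copied to the vault:

```shell
# Toy sketch: compare two 12-byte "snapshot images" in 4-byte blocks and
# count how many blocks changed; only those would be sent to the vault.
printf 'AAAABBBBCCCC' > snap1.img
printf 'AAAAXXXXCCCC' > snap2.img
changed=0
for off in 0 4 8; do
  b1=$(dd if=snap1.img bs=1 skip="$off" count=4 2>/dev/null)
  b2=$(dd if=snap2.img bs=1 skip="$off" count=4 2>/dev/null)
  if [ "$b1" != "$b2" ]; then
    changed=$((changed + 1))
  fi
done
echo "changed blocks: $changed"
rm -f snap1.img snap2.img
```

Here only the middle block differs, so a single block would be vaulted; the vaulted snapshot can still be read back in full because the unchanged blocks are already present from earlier vaulting operations.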
* [**Administrators**](create-active-directory-connections.md#create-an-active-directory-connection) option in Active Directory connections (Preview)
You can now set the Unix permissions and the change ownership mode (`Chown Mode`) options on Azure NetApp Files NFS volumes or dual-protocol volumes with the Unix security style. You can specify these settings during volume creation or after volume creation.
- The change ownership mode (`Chown Mode`) functionality enables you to set the ownership management capabilities of files and directories. You can specify or modify the setting under a volume's export policy. Two options for `Chown Mode` are available: *Restricted* (default), where only the root user can change the ownership of files and directories, and *Unrestricted*, where non-root users can change the ownership for files and directories that they own.
+ The change ownership mode (`Chown Mode`) functionality enables you to set the ownership management capabilities of files and directories. You can specify or modify the setting under a volume's export policy. Two options for `Chown Mode` are available:
+ * *Restricted* (default), where only the root user can change the ownership of files and directories
+ * *Unrestricted*, where non-root users can change the ownership for files and directories that they own
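The two policies above can be modeled as a tiny decision function (an illustration of the rule only, not the actual Azure NetApp Files export-policy implementation; the `may_chown` helper and its arguments are hypothetical):

```shell
# Toy model of the two Chown Mode policies: may the given user change
# ownership of a file owned by file_owner?
may_chown() {  # args: mode user file_owner
  mode=$1; user=$2; owner=$3
  if [ "$mode" = "Restricted" ]; then
    [ "$user" = "root" ]                          # only root may chown
  else
    [ "$user" = "root" ] || [ "$user" = "$owner" ] # owner may also chown
  fi
}

may_chown Restricted alice alice && echo yes || echo no    # no
may_chown Unrestricted alice alice && echo yes || echo no  # yes
```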
The Azure NetApp Files Unix Permissions functionality enables you to specify change permissions for the mount path.
- These new features provide options to move access control of certain files and directories into the hands of the data user instead of the service operator.
+ These new features put access control of certain files and directories in the hands of the data user instead of the service operator.
* [Dual-protocol (NFSv4.1 and SMB) volume](create-volumes-dual-protocol.md) (Preview)
- Azure NetApp Files already supports dual-protocol access to NFSv3 and SMB volumes as of [July 2020](#july-2020). You can now create an Azure NetApp Files volume that allows simultaneous dual-protocol (NFSv4.1 and SMB) access with support for LDAP user mapping. This feature enables use cases where you might have a Linux-based workload using NFSv4.1 for its access, and the workload generates and stores data in an Azure NetApp Files volume. At the same time, your staff might need to use Windows-based clients and software to analyze the newly generated data from the same Azure NetApp Files volume. The simultaneous dual-protocol access feature removes the need to copy the workload-generated data to a separate volume with a different protocol for post-analysis, saving storage cost and operational time. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is generally available. Learn more from the [simultaneous dual-protocol NFSv4.1/SMB access](create-volumes-dual-protocol.md) documentation.
+ Azure NetApp Files already supports dual-protocol access to NFSv3 and SMB volumes as of [July 2020](#july-2020). You can now create an Azure NetApp Files volume that allows simultaneous dual-protocol (NFSv4.1 and SMB) access with support for LDAP user mapping. This feature enables use cases where you might have a Linux-based workload using NFSv4.1 for its access, and the workload generates and stores data in an Azure NetApp Files volume. At the same time, your staff might need to use Windows-based clients and software to analyze the newly generated data from the same Azure NetApp Files volume. The simultaneous dual-protocol access removes the need to copy the workload-generated data to a separate volume with a different protocol for post-analysis, saving storage cost and operational time. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is generally available. Learn more from the [simultaneous dual-protocol NFSv4.1/SMB access](create-volumes-dual-protocol.md) documentation.
## June 2021
The new Azure NetApp Files **Storage service add-ons** menu option provides an Azure portal "launching pad" for available third-party, ecosystem add-ons to the Azure NetApp Files storage service. With this new portal menu option, you can enter a landing page by clicking an add-on tile to quickly access the add-on.
- **NetApp add-ons** is the first category of add-ons introduced under **Storage service add-ons**. It provides access to **NetApp Cloud Compliance**. Clicking the **NetApp Cloud Compliance** tile opens a new browser and directs you to the add-on installation page.
+ **NetApp add-ons** is the first category of add-ons introduced under **Storage service add-ons**. It provides access to NetApp Cloud Data Sense. Clicking the **Cloud Data Sense** tile opens a new browser window and directs you to the add-on installation page.
* [Manual QoS capacity pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) now generally available (GA)
* [Shared AD support for multiple accounts to one Active Directory per region per subscription](create-active-directory-connections.md#shared_ad) (Preview)
- To date, Azure NetApp Files supports only a single Active Directory (AD) per region, where only a single NetApp account could be configured to access the AD. The new **Shared AD** feature enables all NetApp accounts to share an AD connection created by one of the NetApp accounts that belong to the same subscription and the same region. For example, using this feature, all NetApp accounts in the same subscription and region can use the common AD configuration to create an SMB volume, a NFSv4.1 Kerberos volume, or a dual-protocol volume. When you use this feature, the AD connection will be visible in all NetApp accounts that are under the same subscription and same region.
+ To date, Azure NetApp Files supports only a single Active Directory (AD) per region, where only a single NetApp account could be configured to access the AD. The new **Shared AD** feature enables all NetApp accounts to share an AD connection created by one of the NetApp accounts that belong to the same subscription and the same region. For example, all NetApp accounts in the same subscription and region can use the common AD configuration to create an SMB volume, an NFSv4.1 Kerberos volume, or a dual-protocol volume. When you use this feature, the AD connection will be visible in all NetApp accounts that are under the same subscription and same region.
## May 2021
* [ADDS LDAP over TLS](configure-ldap-over-tls.md) (Preview)
- By default, LDAP communications between client and server applications are not encrypted. This means that it is possible to use a network monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared VNets when an LDAP simple bind is used, because the credentials (user name and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports the secure communication between an Active Directory Domain Server (ADDS) using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS for setting up authenticated sessions between the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
+ By default, LDAP communications between client and server applications are not encrypted. This means that it is possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared VNets when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports secure communication with an Active Directory Domain Services (ADDS) server using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS for setting up authenticated sessions between the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
* Support for throughput [metrics](azure-netapp-files-metrics.md)
* [SMB Continuous Availability (CA) shares support for FSLogix user profile containers](azure-netapp-files-create-volumes-smb.md#continuous-availability) (Preview)
- [FSLogix](/fslogix/overview) is a set of solutions that enhance, enable, and simplify non-persistent Windows computing environments. FSLogix solutions are appropriate for virtual environments in both public and private clouds. FSLogix solutions can also be used to create more portable computing sessions when you use physical devices. FSLogix can be used to provide dynamic access to persistent user profile containers stored on SMB shared networked storage, including Azure NetApp Files. To further enhance FSLogix resiliency to storage service maintenance events, Azure NetApp Files has extended support for SMB Transparent Failover via [SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for user profile containers. See Azure NetApp Files [Azure Virtual Desktop solutions](azure-netapp-files-solution-architectures.md#windows-virtual-desktop) for additional information.
+ [FSLogix](/fslogix/overview) is a set of solutions that enhance, enable, and simplify non-persistent Windows computing environments. FSLogix solutions are appropriate for virtual environments in both public and private clouds. FSLogix solutions can also be used to create more portable computing sessions when you use physical devices. FSLogix can be used to provide dynamic access to persistent user profile containers stored on SMB shared networked storage, including Azure NetApp Files. To enhance FSLogix resiliency to storage service maintenance events, Azure NetApp Files has extended support for SMB Transparent Failover via [SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for user profile containers. For more information, see Azure NetApp Files [Azure Virtual Desktop solutions](azure-netapp-files-solution-architectures.md#windows-virtual-desktop).
* [SMB3 Protocol Encryption](azure-netapp-files-create-volumes-smb.md#smb3-encryption) (Preview)
* [SMB Continuous Availability (CA) shares](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview)
- SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover, Azure NetApp Files now supports the SMB Continuous Availability shares option for use with SQL Server applications over SMB running on Azure VMs. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. Enabling this feature provides significant SQL Server performance improvements and scale and cost benefits for [Single Instance, Always-On Failover Cluster Instance and Always-On Availability Group deployments](azure-netapp-files-solution-architectures.md#sql-server). See [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md).
+ SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover, Azure NetApp Files now supports the SMB Continuous Availability shares option for use with SQL Server applications over SMB running on Azure VMs. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. This feature provides significant performance improvements for SQL Server. It also provides scale and cost benefits for [Single Instance, Always-On Failover Cluster Instance and Always-On Availability Group deployments](azure-netapp-files-solution-architectures.md#sql-server). See [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md).
* [Automatic resizing of a cross-region replication destination volume](azure-netapp-files-resize-capacity-pools-or-volumes.md#resize-a-cross-region-replication-destination-volume)
- In a cross-region replication relationship, a destination volume is automatically resized based on the size of the source volume. As such, you donΓÇÖt need to resize the destination volume separately. This automatic resizing behavior is applicable when the volumes are in an active replication relationship, or when replication peering is broken with the resync operation. For this feature to work, you need to ensure sufficient headroom in the capacity pools for both the source and the destination volumes.
+ In a cross-region replication relationship, a destination volume is automatically resized based on the size of the source volume. As such, you don't need to resize the destination volume separately. This automatic resizing behavior applies when the volumes are in an active replication relationship. It also applies when replication peering is broken with the resync operation. For this feature to work, you need to ensure sufficient headroom in the capacity pools for both the source and the destination volumes.
## December 2020
* [Azure NetApp Files cross-region replication](cross-region-replication-introduction.md) (Preview)
- Azure NetApp Files now supports cross-region replication. With this new disaster recovery capability, you can replicate your Azure NetApp Files volumes from one Azure region to another in a fast and cost-effective way, protecting your data from unforeseeable regional failures. Azure NetApp Files cross region replication leverages NetApp SnapMirror® technology; only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across the regions, therefore saving data transfer costs. It also shortens the replication time, so you can achieve a smaller Restore Point Objective (RPO).
+ Azure NetApp Files now supports cross-region replication. With this new disaster recovery capability, you can replicate your Azure NetApp Files volumes from one Azure region to another in a fast and cost-effective way. It helps you protect your data from unforeseeable regional failures. Azure NetApp Files cross-region replication leverages NetApp SnapMirror® technology; only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across the regions, therefore saving data transfer costs. It also shortens the replication time, so you can achieve a smaller Restore Point Objective (RPO).
* [Manual QoS Capacity Pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) (Preview)
* [Dual-protocol (NFSv3 and SMB) volume](create-volumes-dual-protocol.md)
- You can now create an Azure NetApp Files volume that allows simultaneous dual-protocol (NFS v3 and SMB) access with support for LDAP user mapping. This feature enables use cases where you may have a Linux-based workload that generates and stores data in an Azure NetApp Files volume. At the same time, your staff needs to use Windows-based clients and software to analyze the newly generated data from the same Azure NetApp Files volume. The simultaneous dual-protocol access feature removes the need to copy the workload-generated data to a separate volume with a different protocol for post-analysis, saving storage cost, and operational time. This feature is free of charge (normal [Azure NetApp Files storage cost](https://azure.microsoft.com/pricing/details/netapp/) still applies) and is generally available. Learn more from the [simultaneous dual-protocol access documentation](create-volumes-dual-protocol.MD).
+ You can now create an Azure NetApp Files volume that allows simultaneous dual-protocol (NFS v3 and SMB) access with support for LDAP user mapping. This feature enables use cases where you may have a Linux-based workload that generates and stores data in an Azure NetApp Files volume. At the same time, your staff needs to use Windows-based clients and software to analyze the newly generated data from the same Azure NetApp Files volume. The simultaneous dual-protocol access removes the need to copy the workload-generated data to a separate volume with a different protocol for post-analysis. It helps you save storage cost and operational time. This feature is free of charge (normal [Azure NetApp Files storage cost](https://azure.microsoft.com/pricing/details/netapp/) still applies) and is generally available. Learn more from the [simultaneous dual-protocol access documentation](create-volumes-dual-protocol.MD).
* [NFS v4.1 Kerberos encryption in transit](configure-kerberos-encryption.MD)
azure-percept Create And Deploy Manually Azure Precept Devkit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/create-and-deploy-manually-azure-precept-devkit.md
+
+ Title: Manually create and deploy an Azure Percept devkit
+description: This article shows how to manually create and deploy an Azure Percept devkit
+ Last updated : 01/25/2022
+# Manually create and deploy an Azure Percept devkit
+The following guide helps customers manually deploy a factory-fresh IoT Edge deployment to existing Azure Percept devices. It also includes the steps to manually create your Azure Percept IoT Edge device instance.
+
+## Prerequisites
+
+- Highly recommended: Update your Percept Devkit to the [latest version](./software-releases-usb-cable-updates.md)
+- Azure account with IoT Hub created
+- Install [VSCode](https://code.visualstudio.com/Download)
+- Install the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) Extension for VSCode
+- Find the software image version running on your Percept Devkit (see below).
+
+## How to identify your Azure Percept Devkit software version
+
SSH (Secure Shell) into your devkit and run the following command, then write down the output for later reference.
+
+`cat /etc/adu-version`
+
+Example output: 2021.111.124.109
+
+## Create an Azure IoT Edge device for the Devkit
+If you have a device already created, you can skip to the [Manually deploy the deployment.json to the device](#manually-deploy-the-deploymentjson-to-the-azure-percept-device) section.
+1. Go to [Azure portal](https://portal.azure.com) and select the **IoT Hub** where you will create the device.
+2. Navigate to **IoT Edge** and select **Add an IoT Edge device**.
+3. On the **Create a Device** screen, name your device in the **Device ID** section and leave all other fields as default, then click the **Save** button.
+![create new device](./media/manually-deploy-azure-precept-devkit-images/create-device.png)
+
+4. Select your newly created device.
+![select new device.](./media/manually-deploy-azure-precept-devkit-images/select-new-device.png)
+
+5. Copy the **Primary Connection String**. We will use this copied text in the Azure Percept onboarding/setup web pages.
+![Primary Connection String](./media/manually-deploy-azure-precept-devkit-images/primary-connection-string.png)
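For reference, an IoT Hub device connection string generally takes the following shape (placeholder values shown here, not real ones):

```
HostName=<your-iot-hub-name>.azure-devices.net;DeviceId=<your-device-id>;SharedAccessKey=<device-primary-key>
```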
++
+## Connect to and Setup the Devkit
+1. Set up your devkit using the main instructions and **stop** at the **Select your preferred configuration** page.
+2. Select **Connect to an existing device**.
+3. Paste the **Primary Connection String** that you copied from the earlier steps.
+4. Click **Finish**.
+5. The **Device setup complete!** page should now display.
+   **If this page does not disappear after 10 seconds, don't worry. Just go ahead with the next steps.**
+6. You will be disconnected from the devkit's Wi-Fi hotspot. Reconnect your computer to your main Wi-Fi (if needed).
++
+## Manually deploy the deployment.json to the Azure Percept device
+The deployment.json files are a representation of all default modules necessary to begin using the Azure Percept devkit.
+1. Download the appropriate deployment.json from [GitHub](https://github.com/microsoft/azure-percept-advanced-development/tree/main/default-configuration) for your reported software version. Refer to the [How to Identify your Azure Percept Devkit software version](#how-to-identify-your-azure-percept-devkit-software-version) section above.
+ 1. For 2021.111.124.xxx and later, use [default-deployment-2112.json](https://github.com/microsoft/azure-percept-advanced-development/blob/main/default-configuration/default-deployment-2112.json).
+ 2. For 2021.109.129.xxx and lower, use [default-deployment-2108.json](https://github.com/microsoft/azure-percept-advanced-development/blob/main/default-configuration/default-deployment-2108.json).
+2. Launch VSCode and sign in to Azure. Be sure you have installed the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) extension.
+3. Connect to your subscription and select your IoT Hub.
+4. Locate your IoT Edge device, then right-click it and choose **Create deployment for a Single Device**.
+![find edge device](./media/manually-deploy-azure-precept-devkit-images/iot-edge-device.png) ![create deployment for edge device](./media/manually-deploy-azure-precept-devkit-images/create-deployment.png)
+
+5. Navigate to the deployment.json file you saved in step 1 and select it.
+6. Deployment will take 1 to 5 minutes to fully complete.
+ 1. If you want to watch the IoT Edge log while the deployment is in progress, you can SSH into your Azure Percept devkit and watch the logs by issuing the command below.
+ `sudo journalctl -u iotedge -f`
+7. Your Devkit is now ready to use!
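The version-based file choice in step 1 above can be sketched as a small shell check. This is an illustrative sketch only: it assumes the dotted version format reported by `cat /etc/adu-version` and compares just the second field, and the version string is hardcoded here for illustration.

```shell
# Pick the deployment file based on the devkit software version.
# On a real devkit, replace the hardcoded value with:
#   version=$(cat /etc/adu-version)
version="2021.111.124.109"
minor=$(printf '%s' "$version" | cut -d. -f2)
if [ "$minor" -ge 111 ]; then
  file="default-deployment-2112.json"
else
  file="default-deployment-2108.json"
fi
echo "Use $file"
```

For the example version 2021.111.124.109, the script selects default-deployment-2112.json, matching the guidance in step 1.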
++
+
+## Next steps
+Navigate to the [Azure Percept portal](https://ms.portal.azure.com/#blade/AzureEdgeDevices/Main/overview) for more AI models.
azure-percept Quickstart Percept Dk Set Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-set-up.md
Title: Set up the Azure Percept DK device
-description: Set up you Azure Percept DK and connect it to Azure IoT Hub
+description: Set up your Azure Percept DK and connect it to Azure IoT Hub
To verify if your Azure account is an "owner" or "contributor" within the subscription
> [!NOTE]
> You **cannot** reuse an existing IoT Edge device name when going through the **Create New Device** flow. If you wish to reuse the same name and deploy the default Percept modules, you must first delete the existing cloud-side device instance from the Azure IoT Hub before proceeding.

1. The device modules will now be deployed to your device. This can take a few minutes.

   :::image type="content" source="./media/quickstart-percept-dk-setup/main-finalize.png" alt-text="Finalizing setup.":::
azure-portal Networking Quota Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/networking-quota-requests.md
# Increase networking quotas
-This article shows how to request increases for VM-family vCPU quotas in the [Azure portal](https://portal.azure.com).
+This article shows how to request increases for networking quotas in the [Azure portal](https://portal.azure.com).
To view your current networking usage and quota in the Azure portal, open your subscription, then select **Usage + quotas**. You can also use the following options to view your network usage and limits.
azure-sql Connectivity Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connectivity-settings.md
The connectivity settings are accessible from the **Firewalls and virtual networ
## Deny public network access
-When **Deny public network access** is set to **Yes**, only connections via private endpoints are allowed. When this setting is **No** (default), customers can connect by using either public endpoints (with IP-based firewall rules or with virtual-network-based firewall rules) or private endpoints (by using Azure Private Link), as outlined in the [network access overview](network-access-controls-overview.md).
-
- ![Diagram showing connectivity when Deny public network access is set to yes versus when Deny public network access is set to no.][2]
-
-Any attempts to set **Deny public network access** to **Yes** without any existing private endpoints at the logical server will fail with an error message similar to:
-
-```output
-Error 42102
-Unable to set Deny Public Network Access to Yes since there is no private endpoint enabled to access the server.
-Please set up private endpoints and retry the operation.
-```
-
-> [!NOTE]
-> To define virtual network firewall rules on a logical server that has already been configured with private endpoints, set **Deny public network access** to **No**.
+The default for this setting is **No** so that customers can connect by using either public endpoints (with IP-based server-level firewall rules or with virtual-network firewall rules) or private endpoints (by using Azure Private Link), as outlined in the [network access overview](network-access-controls-overview.md).
When **Deny public network access** is set to **Yes**, only connections via private endpoints are allowed. All connections via public endpoints will be denied with an error message similar to:

```output
The public network interface on this server is not accessible.
To connect to this server, use the Private Endpoint from inside your virtual network.
```
-When **Deny public network access** is set to **Yes**, any attempts to add or update firewall rules will be denied with an error message similar to:
+When **Deny public network access** is set to **Yes**, any attempts to add, remove, or edit any firewall rules will be denied with an error message similar to:
```output
Error 42101
Unable to create or modify firewall rules when public network interface for the server is disabled.
To manage server or database level firewall rules, please enable the public network interface.
```
+Ensure that **Deny public network access** is set to **No** to be able to add, remove, or edit any firewall rules for Azure SQL Database.
## Change public network access via PowerShell
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
The following table lists the major features of SQL Server and provides informat
| [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql)|No|Yes, only to SQL Database, SQL Managed Instance and SQL Server. See [T-SQL differences](../managed-instance/transact-sql-tsql-differences-sql-server.md)|
| [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql)|Yes, only to import from Azure Blob storage. |Yes, only to SQL Database, SQL Managed Instance and SQL Server, and to import from Azure Blob storage. See [T-SQL differences](../managed-instance/transact-sql-tsql-differences-sql-server.md)|
| [Operators](/sql/t-sql/language-elements/operators-transact-sql) | Most - see individual operators |Yes - see [T-SQL differences](../managed-instance/transact-sql-tsql-differences-sql-server.md) |
-| [Polybase](/sql/relational-databases/polybase/polybase-guide) | No. You can query data in the files placed on Azure Blob Storage using `OPENROWSET` function or use [an external table that references a serverless SQL pool in Synapse Analytics](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/). | No. You can query data in the files placed on Azure Blob Storage using `OPENROWSET` function, [a linked server that references a serverless SQL pool in Synapse Analytics](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance/), or an external table (in public preview) that references [a serverless SQL pool in Synapse Analytics](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or SQL Server. |
+| [Polybase](/sql/relational-databases/polybase/polybase-guide) | No. You can query data in the files placed on Azure Blob Storage using `OPENROWSET` function or use [an external table that references a serverless SQL pool in Synapse Analytics](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/). | No. You can query data in the files placed on Azure Blob Storage using `OPENROWSET` function, a linked server that references [serverless SQL pool in Synapse Analytics](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance/), [SQL Database](https://techcommunity.microsoft.com/t5/azure-database-support-blog/lesson-learned-63-it-is-possible-to-create-linked-server-in/ba-p/369168), or SQL Server. |
| [Query Notifications](/sql/relational-databases/native-client/features/working-with-query-notifications) | No | Yes |
| [Machine Learning Services](/sql/advanced-analytics/what-is-sql-server-machine-learning) (_Formerly R Services_)| No | Yes, see [Machine Learning Services in Azure SQL Managed Instance](../managed-instance/machine-learning-services-overview.md) |
| [Recovery models](/sql/relational-databases/backup-restore/recovery-models-sql-server) | Only Full Recovery that guarantees high availability is supported. Simple and Bulk Logged recovery models are not available. | Only Full Recovery that guarantees high availability is supported. Simple and Bulk Logged recovery models are not available. |
azure-sql Sql Data Sync Data Sql Server Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-data-sync-data-sql-server-sql-database.md
Yes. You must have a SQL Database account to host the hub database.
Not directly. You can sync between SQL Server databases indirectly, however, by creating a Hub database in Azure, and then adding the on-premises databases to the sync group.
-### Can I use Data Sync to sync between databases in SQL Database that belong to different subscriptions
+### Can I configure Data Sync to sync between databases in Azure SQL Database that belong to different subscriptions
-Yes. You can sync between databases that belong to resource groups owned by different subscriptions, even if the subscriptions belong to different tenants.
+Yes. You can configure sync between databases that belong to resource groups owned by different subscriptions, even if the subscriptions belong to different tenants.
-- If the subscriptions belong to the same tenant, and you have permission to all subscriptions, you can configure the sync group in the Azure portal.
+- If the subscriptions belong to the same tenant and you have permission to all subscriptions, you can configure the sync group in the Azure portal.
- Otherwise, you have to use PowerShell to add the sync members.
-### Can I use Data Sync to sync between databases in SQL Database that belong to different clouds (like Azure Public Cloud and Azure China 21Vianet)
+### Can I set up Data Sync to sync between databases in SQL Database that belong to different clouds (like Azure Public Cloud and Azure China 21Vianet)
-Yes. You can sync between databases that belong to different clouds. You have to use PowerShell to add the sync members that belong to the different subscriptions.
+Yes. You can set up sync between databases that belong to different clouds. You have to use PowerShell to add the sync members that belong to the different subscriptions.
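The answers above note that cross-subscription and cross-cloud sync members must be added with PowerShell. The following is a minimal sketch using the Az.Sql module; every name is a placeholder, and the parameter set should be verified against the `New-AzSqlSyncMember` reference for your installed module version:

```powershell
# Hedged sketch: add a member database (possibly in another subscription)
# to an existing sync group on the hub server. All names are placeholders.
New-AzSqlSyncMember `
    -ResourceGroupName "<hub-resource-group>" `
    -ServerName "<hub-server>" `
    -DatabaseName "<hub-database>" `
    -SyncGroupName "<sync-group>" `
    -Name "<member-name>" `
    -MemberDatabaseType "AzureSqlDatabase" `
    -MemberServerName "<member-server>.database.windows.net" `
    -MemberDatabaseName "<member-database>" `
    -SyncDirection "Bidirectional"
```

When the member belongs to a different tenant or cloud, authenticate to that subscription first (for example with `Connect-AzAccount`/`Set-AzContext`) before running the cmdlet.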
### Can I use Data Sync to seed data from my production database to an empty database, and then sync them
cognitive-services Quickstart Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/quickstart-sdk.md
description: This quickstart shows you how to create and manage your knowledge b
-Previously updated : 06/18/2020
+Last updated : 01/26/2022
+ms.devlang: csharp, java, javascript, python
zone_pivot_groups: qnamaker-quickstart
Get started with the QnA Maker client library. Follow these steps to install the
[!INCLUDE [QnA Maker Java client library quickstart](../includes/quickstart-sdk-java.md)]
::: zone-end

---

## Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
The following table lists the prebuilt neural voices supported in each language.
| Language | Locale | Gender | Voice name | Style support |
|---|---|---|---|---|
-| Afrikaans (South Africa) | `af-ZA` | Female | `af-ZA-AdriNeural` <sup>New</sup> | General |
-| Afrikaans (South Africa) | `af-ZA` | Male | `af-ZA-WillemNeural` <sup>New</sup> | General |
-| Amharic (Ethiopia) | `am-ET` | Female | `am-ET-MekdesNeural` <sup>New</sup> | General |
-| Amharic (Ethiopia) | `am-ET` | Male | `am-ET-AmehaNeural` <sup>New</sup> | General |
-| Arabic (Algeria) | `ar-DZ` | Female | `ar-DZ-AminaNeural` <sup>New</sup> | General |
-| Arabic (Algeria) | `ar-DZ` | Male | `ar-DZ-IsmaelNeural` <sup>New</sup> | General |
-| Arabic (Bahrain) | `ar-BH` | Female | `ar-BH-LailaNeural` <sup>New</sup> | General |
-| Arabic (Bahrain) | `ar-BH` | Male | `ar-BH-AliNeural` <sup>New</sup> | General |
+| Afrikaans (South Africa) | `af-ZA` | Female | `af-ZA-AdriNeural` | General |
+| Afrikaans (South Africa) | `af-ZA` | Male | `af-ZA-WillemNeural` | General |
+| Amharic (Ethiopia) | `am-ET` | Female | `am-ET-MekdesNeural` | General |
+| Amharic (Ethiopia) | `am-ET` | Male | `am-ET-AmehaNeural` | General |
+| Arabic (Algeria) | `ar-DZ` | Female | `ar-DZ-AminaNeural` | General |
+| Arabic (Algeria) | `ar-DZ` | Male | `ar-DZ-IsmaelNeural` | General |
+| Arabic (Bahrain) | `ar-BH` | Female | `ar-BH-LailaNeural` | General |
+| Arabic (Bahrain) | `ar-BH` | Male | `ar-BH-AliNeural` | General |
| Arabic (Egypt) | `ar-EG` | Female | `ar-EG-SalmaNeural` | General |
| Arabic (Egypt) | `ar-EG` | Male | `ar-EG-ShakirNeural` | General |
-| Arabic (Iraq) | `ar-IQ` | Female | `ar-IQ-RanaNeural` <sup>New</sup> | General |
-| Arabic (Iraq) | `ar-IQ` | Male | `ar-IQ-BasselNeural` <sup>New</sup> | General |
-| Arabic (Jordan) | `ar-JO` | Female | `ar-JO-SanaNeural` <sup>New</sup> | General |
-| Arabic (Jordan) | `ar-JO` | Male | `ar-JO-TaimNeural` <sup>New</sup> | General |
-| Arabic (Kuwait) | `ar-KW` | Female | `ar-KW-NouraNeural` <sup>New</sup> | General |
-| Arabic (Kuwait) | `ar-KW` | Male | `ar-KW-FahedNeural` <sup>New</sup> | General |
-| Arabic (Libya) | `ar-LY` | Female | `ar-LY-ImanNeural` <sup>New</sup> | General |
-| Arabic (Libya) | `ar-LY` | Male | `ar-LY-OmarNeural` <sup>New</sup> | General |
-| Arabic (Morocco) | `ar-MA` | Female | `ar-MA-MounaNeural` <sup>New</sup> | General |
-| Arabic (Morocco) | `ar-MA` | Male | `ar-MA-JamalNeural` <sup>New</sup> | General |
-| Arabic (Qatar) | `ar-QA` | Female | `ar-QA-AmalNeural` <sup>New</sup> | General |
-| Arabic (Qatar) | `ar-QA` | Male | `ar-QA-MoazNeural` <sup>New</sup> | General |
+| Arabic (Iraq) | `ar-IQ` | Female | `ar-IQ-RanaNeural` | General |
+| Arabic (Iraq) | `ar-IQ` | Male | `ar-IQ-BasselNeural` | General |
+| Arabic (Jordan) | `ar-JO` | Female | `ar-JO-SanaNeural` | General |
+| Arabic (Jordan) | `ar-JO` | Male | `ar-JO-TaimNeural` | General |
+| Arabic (Kuwait) | `ar-KW` | Female | `ar-KW-NouraNeural` | General |
+| Arabic (Kuwait) | `ar-KW` | Male | `ar-KW-FahedNeural` | General |
+| Arabic (Libya) | `ar-LY` | Female | `ar-LY-ImanNeural` | General |
+| Arabic (Libya) | `ar-LY` | Male | `ar-LY-OmarNeural` | General |
+| Arabic (Morocco) | `ar-MA` | Female | `ar-MA-MounaNeural` | General |
+| Arabic (Morocco) | `ar-MA` | Male | `ar-MA-JamalNeural` | General |
+| Arabic (Qatar) | `ar-QA` | Female | `ar-QA-AmalNeural` | General |
+| Arabic (Qatar) | `ar-QA` | Male | `ar-QA-MoazNeural` | General |
| Arabic (Saudi Arabia) | `ar-SA` | Female | `ar-SA-ZariyahNeural` | General |
| Arabic (Saudi Arabia) | `ar-SA` | Male | `ar-SA-HamedNeural` | General |
-| Arabic (Syria) | `ar-SY` | Female | `ar-SY-AmanyNeural` <sup>New</sup> | General |
-| Arabic (Syria) | `ar-SY` | Male | `ar-SY-LaithNeural` <sup>New</sup> | General |
-| Arabic (Tunisia) | `ar-TN` | Female | `ar-TN-ReemNeural` <sup>New</sup> | General |
-| Arabic (Tunisia) | `ar-TN` | Male | `ar-TN-HediNeural` <sup>New</sup> | General |
-| Arabic (United Arab Emirates) | `ar-AE` | Female | `ar-AE-FatimaNeural` <sup>New</sup> | General |
-| Arabic (United Arab Emirates) | `ar-AE` | Male | `ar-AE-HamdanNeural` <sup>New</sup> | General |
-| Arabic (Yemen) | `ar-YE` | Female | `ar-YE-MaryamNeural` <sup>New</sup> | General |
-| Arabic (Yemen) | `ar-YE` | Male | `ar-YE-SalehNeural` <sup>New</sup> | General |
-| Bangla (Bangladesh) | `bn-BD` | Female | `bn-BD-NabanitaNeural` <sup>New</sup> | General |
-| Bangla (Bangladesh) | `bn-BD` | Male | `bn-BD-PradeepNeural` <sup>New</sup> | General |
+| Arabic (Syria) | `ar-SY` | Female | `ar-SY-AmanyNeural` | General |
+| Arabic (Syria) | `ar-SY` | Male | `ar-SY-LaithNeural` | General |
+| Arabic (Tunisia) | `ar-TN` | Female | `ar-TN-ReemNeural` | General |
+| Arabic (Tunisia) | `ar-TN` | Male | `ar-TN-HediNeural` | General |
+| Arabic (United Arab Emirates) | `ar-AE` | Female | `ar-AE-FatimaNeural` | General |
+| Arabic (United Arab Emirates) | `ar-AE` | Male | `ar-AE-HamdanNeural` | General |
+| Arabic (Yemen) | `ar-YE` | Female | `ar-YE-MaryamNeural` | General |
+| Arabic (Yemen) | `ar-YE` | Male | `ar-YE-SalehNeural` | General |
+| Bangla (Bangladesh) | `bn-BD` | Female | `bn-BD-NabanitaNeural` | General |
+| Bangla (Bangladesh) | `bn-BD` | Male | `bn-BD-PradeepNeural` | General |
+| Bengali (India) | `bn-IN` | Female | `bn-IN-TanishaaNeural` <sup>New</sup> | General |
+| Bengali (India) | `bn-IN` | Male | `bn-IN-BashkarNeural` <sup>New</sup> | General |
| Bulgarian (Bulgaria) | `bg-BG` | Female | `bg-BG-KalinaNeural` | General |
| Bulgarian (Bulgaria) | `bg-BG` | Male | `bg-BG-BorislavNeural` | General |
-| Burmese (Myanmar) | `my-MM` | Female | `my-MM-NilarNeural` <sup>New</sup> | General |
-| Burmese (Myanmar) | `my-MM` | Male | `my-MM-ThihaNeural` <sup>New</sup> | General |
+| Burmese (Myanmar) | `my-MM` | Female | `my-MM-NilarNeural` | General |
+| Burmese (Myanmar) | `my-MM` | Male | `my-MM-ThihaNeural` | General |
| Catalan (Spain) | `ca-ES` | Female | `ca-ES-AlbaNeural` | General |
| Catalan (Spain) | `ca-ES` | Female | `ca-ES-JoanaNeural` | General |
| Catalan (Spain) | `ca-ES` | Male | `ca-ES-EnricNeural` | General |
| English (Australia) | `en-AU` | Male | `en-AU-WilliamNeural` | General |
| English (Canada) | `en-CA` | Female | `en-CA-ClaraNeural` | General |
| English (Canada) | `en-CA` | Male | `en-CA-LiamNeural` | General |
-| English (Hong Kong) | `en-HK` | Female | `en-HK-YanNeural` | General |
-| English (Hong Kong) | `en-HK` | Male | `en-HK-SamNeural` | General |
+| English (Hongkong) | `en-HK` | Female | `en-HK-YanNeural` | General |
+| English (Hongkong) | `en-HK` | Male | `en-HK-SamNeural` | General |
| English (India) | `en-IN` | Female | `en-IN-NeerjaNeural` | General |
| English (India) | `en-IN` | Male | `en-IN-PrabhatNeural` | General |
| English (Ireland) | `en-IE` | Female | `en-IE-EmilyNeural` | General |
| English (Ireland) | `en-IE` | Male | `en-IE-ConnorNeural` | General |
-| English (Kenya) | `en-KE` | Female | `en-KE-AsiliaNeural` <sup>New</sup> | General |
-| English (Kenya) | `en-KE` | Male | `en-KE-ChilembaNeural` <sup>New</sup> | General |
+| English (Kenya) | `en-KE` | Female | `en-KE-AsiliaNeural` | General |
+| English (Kenya) | `en-KE` | Male | `en-KE-ChilembaNeural` | General |
| English (New Zealand) | `en-NZ` | Female | `en-NZ-MollyNeural` | General |
| English (New Zealand) | `en-NZ` | Male | `en-NZ-MitchellNeural` | General |
-| English (Nigeria) | `en-NG` | Female | `en-NG-EzinneNeural` <sup>New</sup> | General |
-| English (Nigeria) | `en-NG` | Male | `en-NG-AbeoNeural` <sup>New</sup> | General |
+| English (Nigeria) | `en-NG` | Female | `en-NG-EzinneNeural` | General |
+| English (Nigeria) | `en-NG` | Male | `en-NG-AbeoNeural` | General |
| English (Philippines) | `en-PH` | Female | `en-PH-RosaNeural` | General |
| English (Philippines) | `en-PH` | Male | `en-PH-JamesNeural` | General |
| English (Singapore) | `en-SG` | Female | `en-SG-LunaNeural` | General |
| English (Singapore) | `en-SG` | Male | `en-SG-WayneNeural` | General |
| English (South Africa) | `en-ZA` | Female | `en-ZA-LeahNeural` | General |
| English (South Africa) | `en-ZA` | Male | `en-ZA-LukeNeural` | General |
-| English (Tanzania) | `en-TZ` | Female | `en-TZ-ImaniNeural` <sup>New</sup> | General |
-| English (Tanzania) | `en-TZ` | Male | `en-TZ-ElimuNeural` <sup>New</sup> | General |
+| English (Tanzania) | `en-TZ` | Female | `en-TZ-ImaniNeural` | General |
+| English (Tanzania) | `en-TZ` | Male | `en-TZ-ElimuNeural` | General |
| English (United Kingdom) | `en-GB` | Female | `en-GB-LibbyNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-SoniaNeural` | General |
| English (United Kingdom) | `en-GB` | Female | `en-GB-MiaNeural` <sup>Retired on 30 October 2021, see below</sup> | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-SoniaNeural` | General |
| English (United Kingdom) | `en-GB` | Male | `en-GB-RyanNeural` | General |
| English (United States) | `en-US` | Female | `en-US-AmberNeural` | General |
| English (United States) | `en-US` | Female | `en-US-AriaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| English (United States) | `en-US` | Male | `en-US-JacobNeural` | General |
| Estonian (Estonia) | `et-EE` | Female | `et-EE-AnuNeural` | General |
| Estonian (Estonia) | `et-EE` | Male | `et-EE-KertNeural` | General |
-| Filipino (Philippines) | `fil-PH` | Female | `fil-PH-BlessicaNeural` <sup>New</sup> | General |
-| Filipino (Philippines) | `fil-PH` | Male | `fil-PH-AngeloNeural` <sup>New</sup> | General |
+| Filipino (Philippines) | `fil-PH` | Female | `fil-PH-BlessicaNeural` | General |
+| Filipino (Philippines) | `fil-PH` | Male | `fil-PH-AngeloNeural` | General |
| Finnish (Finland) | `fi-FI` | Female | `fi-FI-NooraNeural` | General |
| Finnish (Finland) | `fi-FI` | Female | `fi-FI-SelmaNeural` | General |
| Finnish (Finland) | `fi-FI` | Male | `fi-FI-HarriNeural` | General |
| French (France) | `fr-FR` | Male | `fr-FR-HenriNeural` | General |
| French (Switzerland) | `fr-CH` | Female | `fr-CH-ArianeNeural` | General |
| French (Switzerland) | `fr-CH` | Male | `fr-CH-FabriceNeural` | General |
-| Galician (Spain) | `gl-ES` | Female | `gl-ES-SabelaNeural` <sup>New</sup> | General |
-| Galician (Spain) | `gl-ES` | Male | `gl-ES-RoiNeural` <sup>New</sup> | General |
+| Galician (Spain) | `gl-ES` | Female | `gl-ES-SabelaNeural` | General |
+| Galician (Spain) | `gl-ES` | Male | `gl-ES-RoiNeural` | General |
| German (Austria) | `de-AT` | Female | `de-AT-IngridNeural` | General |
| German (Austria) | `de-AT` | Male | `de-AT-JonasNeural` | General |
| German (Germany) | `de-DE` | Female | `de-DE-KatjaNeural` | General |
| Hindi (India) | `hi-IN` | Male | `hi-IN-MadhurNeural` | General |
| Hungarian (Hungary) | `hu-HU` | Female | `hu-HU-NoemiNeural` | General |
| Hungarian (Hungary) | `hu-HU` | Male | `hu-HU-TamasNeural` | General |
+| Icelandic (Iceland) | `is-IS` | Female | `is-IS-GudrunNeural` <sup>New</sup> | General |
+| Icelandic (Iceland) | `is-IS` | Male | `is-IS-GunnarNeural` <sup>New</sup> | General |
| Indonesian (Indonesia) | `id-ID` | Female | `id-ID-GadisNeural` | General |
| Indonesian (Indonesia) | `id-ID` | Male | `id-ID-ArdiNeural` | General |
| Irish (Ireland) | `ga-IE` | Female | `ga-IE-OrlaNeural` | General |
| Italian (Italy) | `it-IT` | Male | `it-IT-DiegoNeural` | General |
| Japanese (Japan) | `ja-JP` | Female | `ja-JP-NanamiNeural` | General |
| Japanese (Japan) | `ja-JP` | Male | `ja-JP-KeitaNeural` | General |
-| Javanese (Indonesia) | `jv-ID` | Female | `jv-ID-SitiNeural` <sup>New</sup> | General |
-| Javanese (Indonesia) | `jv-ID` | Male | `jv-ID-DimasNeural` <sup>New</sup> | General |
-| Khmer (Cambodia) | `km-KH` | Female | `km-KH-SreymomNeural` <sup>New</sup> | General |
-| Khmer (Cambodia) | `km-KH` | Male | `km-KH-PisethNeural` <sup>New</sup> | General |
+| Javanese (Indonesia) | `jv-ID` | Female | `jv-ID-SitiNeural` | General |
+| Javanese (Indonesia) | `jv-ID` | Male | `jv-ID-DimasNeural` | General |
+| Kannada (India) | `kn-IN` | Female | `kn-IN-SapnaNeural` <sup>New</sup> | General |
+| Kannada (India) | `kn-IN` | Male | `kn-IN-GaganNeural` <sup>New</sup> | General |
+| Kazakh (Kazakhstan) | `kk-KZ` | Female | `kk-KZ-AigulNeural` <sup>New</sup> | General |
+| Kazakh (Kazakhstan) | `kk-KZ` | Male | `kk-KZ-DauletNeural` <sup>New</sup> | General |
+| Khmer (Cambodia) | `km-KH` | Female | `km-KH-SreymomNeural` | General |
+| Khmer (Cambodia) | `km-KH` | Male | `km-KH-PisethNeural` | General |
| Korean (Korea) | `ko-KR` | Female | `ko-KR-SunHiNeural` | General |
| Korean (Korea) | `ko-KR` | Male | `ko-KR-InJoonNeural` | General |
+| Lao (Laos) | `lo-LA` | Female | `lo-LA-KeomanyNeural` <sup>New</sup> | General |
+| Lao (Laos) | `lo-LA` | Male | `lo-LA-ChanthavongNeural` <sup>New</sup> | General |
| Latvian (Latvia) | `lv-LV` | Female | `lv-LV-EveritaNeural` | General |
| Latvian (Latvia) | `lv-LV` | Male | `lv-LV-NilsNeural` | General |
| Lithuanian (Lithuania) | `lt-LT` | Female | `lt-LT-OnaNeural` | General |
| Lithuanian (Lithuania) | `lt-LT` | Male | `lt-LT-LeonasNeural` | General |
+| Macedonian (Republic of North Macedonia) | `mk-MK` | Female | `mk-MK-MarijaNeural` <sup>New</sup> | General |
+| Macedonian (Republic of North Macedonia) | `mk-MK` | Male | `mk-MK-AleksandarNeural` <sup>New</sup> | General |
| Malay (Malaysia) | `ms-MY` | Female | `ms-MY-YasminNeural` | General |
| Malay (Malaysia) | `ms-MY` | Male | `ms-MY-OsmanNeural` | General |
+| Malayalam (India) | `ml-IN` | Female | `ml-IN-SobhanaNeural` <sup>New</sup> | General |
+| Malayalam (India) | `ml-IN` | Male | `ml-IN-MidhunNeural` <sup>New</sup> | General |
| Maltese (Malta) | `mt-MT` | Female | `mt-MT-GraceNeural` | General |
| Maltese (Malta) | `mt-MT` | Male | `mt-MT-JosephNeural` | General |
| Marathi (India) | `mr-IN` | Female | `mr-IN-AarohiNeural` | General |
| Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-IselinNeural` | General |
| Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-PernilleNeural` | General |
| Norwegian (Bokmål, Norway) | `nb-NO` | Male | `nb-NO-FinnNeural` | General |
-| Persian (Iran) | `fa-IR` | Female | `fa-IR-DilaraNeural` <sup>New</sup> | General |
-| Persian (Iran) | `fa-IR` | Male | `fa-IR-FaridNeural` <sup>New</sup> | General |
+| Pashto (Afghanistan) | `ps-AF` | Female | `ps-AF-LatifaNeural` <sup>New</sup> | General |
+| Pashto (Afghanistan) | `ps-AF` | Male | `ps-AF-GulNawazNeural` <sup>New</sup> | General |
+| Persian (Iran) | `fa-IR` | Female | `fa-IR-DilaraNeural` | General |
+| Persian (Iran) | `fa-IR` | Male | `fa-IR-FaridNeural` | General |
| Polish (Poland) | `pl-PL` | Female | `pl-PL-AgnieszkaNeural` | General |
| Polish (Poland) | `pl-PL` | Female | `pl-PL-ZofiaNeural` | General |
| Polish (Poland) | `pl-PL` | Male | `pl-PL-MarekNeural` | General |
| Russian (Russia) | `ru-RU` | Female | `ru-RU-DariyaNeural` | General |
| Russian (Russia) | `ru-RU` | Female | `ru-RU-SvetlanaNeural` | General |
| Russian (Russia) | `ru-RU` | Male | `ru-RU-DmitryNeural` | General |
+| Serbian (Serbia, Cyrillic) | `sr-RS` | Female | `sr-RS-SophieNeural` <sup>New</sup> | General |
+| Serbian (Serbia, Cyrillic) | `sr-RS` | Male | `sr-RS-NicholasNeural` <sup>New</sup> | General |
+| Sinhala (Sri Lanka) | `si-LK` | Female | `si-LK-ThiliniNeural` <sup>New</sup> | General |
+| Sinhala (Sri Lanka) | `si-LK` | Male | `si-LK-SameeraNeural` <sup>New</sup> | General |
| Slovak (Slovakia) | `sk-SK` | Female | `sk-SK-ViktoriaNeural` | General |
| Slovak (Slovakia) | `sk-SK` | Male | `sk-SK-LukasNeural` | General |
| Slovenian (Slovenia) | `sl-SI` | Female | `sl-SI-PetraNeural` | General |
| Slovenian (Slovenia) | `sl-SI` | Male | `sl-SI-RokNeural` | General |
-| Somali (Somalia) | `so-SO` | Female | `so-SO-UbaxNeural` <sup>New</sup> | General |
-| Somali (Somalia) | `so-SO`| Male | `so-SO-MuuseNeural` <sup>New</sup> | General |
+| Somali (Somalia) | `so-SO` | Female | `so-SO-UbaxNeural` | General |
+| Somali (Somalia) | `so-SO` | Male | `so-SO-MuuseNeural` | General |
| Spanish (Argentina) | `es-AR` | Female | `es-AR-ElenaNeural` | General |
| Spanish (Argentina) | `es-AR` | Male | `es-AR-TomasNeural` | General |
-| Spanish (Bolivia) | `es-BO` | Female | `es-BO-SofiaNeural` <sup>New</sup> | General |
-| Spanish (Bolivia) | `es-BO` | Male | `es-BO-MarceloNeural` <sup>New</sup> | General |
-| Spanish (Chile) | `es-CL` | Female | `es-CL-CatalinaNeural` <sup>New</sup> | General |
-| Spanish (Chile) | `es-CL` | Male | `es-CL-LorenzoNeural` <sup>New</sup> | General |
+| Spanish (Bolivia) | `es-BO` | Female | `es-BO-SofiaNeural` | General |
+| Spanish (Bolivia) | `es-BO` | Male | `es-BO-MarceloNeural` | General |
+| Spanish (Chile) | `es-CL` | Female | `es-CL-CatalinaNeural` | General |
+| Spanish (Chile) | `es-CL` | Male | `es-CL-LorenzoNeural` | General |
| Spanish (Colombia) | `es-CO` | Female | `es-CO-SalomeNeural` | General |
| Spanish (Colombia) | `es-CO` | Male | `es-CO-GonzaloNeural` | General |
-| Spanish (Costa Rica) | `es-CR` | Female | `es-CR-MariaNeural` <sup>New</sup> | General |
-| Spanish (Costa Rica) | `es-CR` | Male | `es-CR-JuanNeural` <sup>New</sup> | General |
-| Spanish (Cuba) | `es-CU` | Female | `es-CU-BelkysNeural` <sup>New</sup> | General |
-| Spanish (Cuba) | `es-CU` | Male | `es-CU-ManuelNeural` <sup>New</sup> | General |
-| Spanish (Dominican Republic) | `es-DO` | Female | `es-DO-RamonaNeural` <sup>New</sup> | General |
-| Spanish (Dominican Republic) | `es-DO` | Male | `es-DO-EmilioNeural` <sup>New</sup> | General |
-| Spanish (Ecuador) | `es-EC` | Female | `es-EC-AndreaNeural` <sup>New</sup> | General |
-| Spanish (Ecuador) | `es-EC` | Male | `es-EC-LuisNeural` <sup>New</sup> | General |
-| Spanish (El Salvador) | `es-SV` | Female | `es-SV-LorenaNeural` <sup>New</sup> | General |
-| Spanish (El Salvador) | `es-SV` | Male | `es-SV-RodrigoNeural` <sup>New</sup> | General |
-| Spanish (Equatorial Guinea) | `es-GQ` | Female | `es-GQ-TeresaNeural` <sup>New</sup> | General |
-| Spanish (Equatorial Guinea) | `es-GQ` | Male | `es-GQ-JavierNeural` <sup>New</sup> | General |
-| Spanish (Guatemala) | `es-GT` | Female | `es-GT-MartaNeural` <sup>New</sup> | General |
-| Spanish (Guatemala) | `es-GT` | Male | `es-GT-AndresNeural` <sup>New</sup> | General |
-| Spanish (Honduras) | `es-HN` | Female | `es-HN-KarlaNeural` <sup>New</sup> | General |
-| Spanish (Honduras) | `es-HN` | Male | `es-HN-CarlosNeural` <sup>New</sup> | General |
+| Spanish (Costa Rica) | `es-CR` | Female | `es-CR-MariaNeural` | General |
+| Spanish (Costa Rica) | `es-CR` | Male | `es-CR-JuanNeural` | General |
+| Spanish (Cuba) | `es-CU` | Female | `es-CU-BelkysNeural` | General |
+| Spanish (Cuba) | `es-CU` | Male | `es-CU-ManuelNeural` | General |
+| Spanish (Dominican Republic) | `es-DO` | Female | `es-DO-RamonaNeural` | General |
+| Spanish (Dominican Republic) | `es-DO` | Male | `es-DO-EmilioNeural` | General |
+| Spanish (Ecuador) | `es-EC` | Female | `es-EC-AndreaNeural` | General |
+| Spanish (Ecuador) | `es-EC` | Male | `es-EC-LuisNeural` | General |
+| Spanish (El Salvador) | `es-SV` | Female | `es-SV-LorenaNeural` | General |
+| Spanish (El Salvador) | `es-SV` | Male | `es-SV-RodrigoNeural` | General |
+| Spanish (Equatorial Guinea) | `es-GQ` | Female | `es-GQ-TeresaNeural` | General |
+| Spanish (Equatorial Guinea) | `es-GQ` | Male | `es-GQ-JavierNeural` | General |
+| Spanish (Guatemala) | `es-GT` | Female | `es-GT-MartaNeural` | General |
+| Spanish (Guatemala) | `es-GT` | Male | `es-GT-AndresNeural` | General |
+| Spanish (Honduras) | `es-HN` | Female | `es-HN-KarlaNeural` | General |
+| Spanish (Honduras) | `es-HN` | Male | `es-HN-CarlosNeural` | General |
| Spanish (Mexico) | `es-MX` | Female | `es-MX-DaliaNeural` | General |
| Spanish (Mexico) | `es-MX` | Male | `es-MX-JorgeNeural` | General |
-| Spanish (Nicaragua) | `es-NI` | Female | `es-NI-YolandaNeural` <sup>New</sup> | General |
-| Spanish (Nicaragua) | `es-NI` | Male | `es-NI-FedericoNeural` <sup>New</sup> | General |
-| Spanish (Panama) | `es-PA` | Female | `es-PA-MargaritaNeural` <sup>New</sup> | General |
-| Spanish (Panama) | `es-PA` | Male | `es-PA-RobertoNeural` <sup>New</sup> | General |
-| Spanish (Paraguay) | `es-PY` | Female | `es-PY-TaniaNeural` <sup>New</sup> | General |
-| Spanish (Paraguay) | `es-PY` | Male | `es-PY-MarioNeural` <sup>New</sup> | General |
-| Spanish (Peru) | `es-PE` | Female | `es-PE-CamilaNeural` <sup>New</sup> | General |
-| Spanish (Peru) | `es-PE` | Male | `es-PE-AlexNeural` <sup>New</sup> | General |
-| Spanish (Puerto Rico) | `es-PR` | Female | `es-PR-KarinaNeural` <sup>New</sup> | General |
-| Spanish (Puerto Rico) | `es-PR` | Male | `es-PR-VictorNeural` <sup>New</sup> | General |
+| Spanish (Nicaragua) | `es-NI` | Female | `es-NI-YolandaNeural` | General |
+| Spanish (Nicaragua) | `es-NI` | Male | `es-NI-FedericoNeural` | General |
+| Spanish (Panama) | `es-PA` | Female | `es-PA-MargaritaNeural` | General |
+| Spanish (Panama) | `es-PA` | Male | `es-PA-RobertoNeural` | General |
+| Spanish (Paraguay) | `es-PY` | Female | `es-PY-TaniaNeural` | General |
+| Spanish (Paraguay) | `es-PY` | Male | `es-PY-MarioNeural` | General |
+| Spanish (Peru) | `es-PE` | Female | `es-PE-CamilaNeural` | General |
+| Spanish (Peru) | `es-PE` | Male | `es-PE-AlexNeural` | General |
+| Spanish (Puerto Rico) | `es-PR` | Female | `es-PR-KarinaNeural` | General |
+| Spanish (Puerto Rico) | `es-PR` | Male | `es-PR-VictorNeural` | General |
| Spanish (Spain) | `es-ES` | Female | `es-ES-ElviraNeural` | General |
| Spanish (Spain) | `es-ES` | Male | `es-ES-AlvaroNeural` | General |
-| Spanish (Uruguay) | `es-UY` | Female | `es-UY-ValentinaNeural` <sup>New</sup> | General |
-| Spanish (Uruguay) | `es-UY` | Male | `es-UY-MateoNeural` <sup>New</sup> | General |
+| Spanish (Uruguay) | `es-UY` | Female | `es-UY-ValentinaNeural` | General |
+| Spanish (Uruguay) | `es-UY` | Male | `es-UY-MateoNeural` | General |
| Spanish (US) | `es-US` | Female | `es-US-PalomaNeural` | General |
| Spanish (US) | `es-US` | Male | `es-US-AlonsoNeural` | General |
-| Spanish (Venezuela) | `es-VE` | Female | `es-VE-PaolaNeural` <sup>New</sup> | General |
-| Spanish (Venezuela) | `es-VE` | Male | `es-VE-SebastianNeural` <sup>New</sup> | General |
-| Sundanese (Indonesia) | `su-ID` | Female | `su-ID-TutiNeural` <sup>New</sup> | General |
-| Sundanese (Indonesia) | `su-ID` | Male | `su-ID-JajangNeural` <sup>New</sup> | General |
+| Spanish (Venezuela) | `es-VE` | Female | `es-VE-PaolaNeural` | General |
+| Spanish (Venezuela) | `es-VE` | Male | `es-VE-SebastianNeural` | General |
+| Sundanese (Indonesia) | `su-ID` | Female | `su-ID-TutiNeural` | General |
+| Sundanese (Indonesia) | `su-ID` | Male | `su-ID-JajangNeural` | General |
| Swahili (Kenya) | `sw-KE` | Female | `sw-KE-ZuriNeural` | General |
| Swahili (Kenya) | `sw-KE` | Male | `sw-KE-RafikiNeural` | General |
-| Swahili (Tanzania) | `sw-TZ` | Female | `sw-TZ-RehemaNeural` <sup>New</sup> | General |
-| Swahili (Tanzania) | `sw-TZ` | Male | `sw-TZ-DaudiNeural` <sup>New</sup> | General |
+| Swahili (Tanzania) | `sw-TZ` | Female | `sw-TZ-RehemaNeural` | General |
+| Swahili (Tanzania) | `sw-TZ` | Male | `sw-TZ-DaudiNeural` | General |
| Swedish (Sweden) | `sv-SE` | Female | `sv-SE-HilleviNeural` | General |
| Swedish (Sweden) | `sv-SE` | Female | `sv-SE-SofieNeural` | General |
| Swedish (Sweden) | `sv-SE` | Male | `sv-SE-MattiasNeural` | General |
| Tamil (India) | `ta-IN` | Female | `ta-IN-PallaviNeural` | General |
| Tamil (India) | `ta-IN` | Male | `ta-IN-ValluvarNeural` | General |
-| Tamil (Singapore) | `ta-SG` | Female | `ta-SG-VenbaNeural` <sup>New</sup> | General |
-| Tamil (Singapore) | `ta-SG` | Male | `ta-SG-AnbuNeural` <sup>New</sup> | General |
-| Tamil (Sri Lanka) | `ta-LK` | Female | `ta-LK-SaranyaNeural` <sup>New</sup> | General |
-| Tamil (Sri Lanka) | `ta-LK` | Male | `ta-LK-KumarNeural` <sup>New</sup> | General |
+| Tamil (Singapore) | `ta-SG` | Female | `ta-SG-VenbaNeural` | General |
+| Tamil (Singapore) | `ta-SG` | Male | `ta-SG-AnbuNeural` | General |
+| Tamil (Sri Lanka) | `ta-LK` | Female | `ta-LK-SaranyaNeural` | General |
+| Tamil (Sri Lanka) | `ta-LK` | Male | `ta-LK-KumarNeural` | General |
| Telugu (India) | `te-IN` | Female | `te-IN-ShrutiNeural` | General |
| Telugu (India) | `te-IN` | Male | `te-IN-MohanNeural` | General |
| Thai (Thailand) | `th-TH` | Female | `th-TH-AcharaNeural` | General |
| Turkish (Turkey) | `tr-TR` | Male | `tr-TR-AhmetNeural` | General |
| Ukrainian (Ukraine) | `uk-UA` | Female | `uk-UA-PolinaNeural` | General |
| Ukrainian (Ukraine) | `uk-UA` | Male | `uk-UA-OstapNeural` | General |
-| Urdu (India) | `ur-IN` | Female | `ur-IN-GulNeural` <sup>New</sup> | General |
-| Urdu (India) | `ur-IN` | Male | `ur-IN-SalmanNeural` <sup>New</sup> | General |
+| Urdu (India) | `ur-IN` | Female | `ur-IN-GulNeural` | General |
+| Urdu (India) | `ur-IN` | Male | `ur-IN-SalmanNeural` | General |
| Urdu (Pakistan) | `ur-PK` | Female | `ur-PK-UzmaNeural` | General |
| Urdu (Pakistan) | `ur-PK` | Male | `ur-PK-AsadNeural` | General |
-| Uzbek (Uzbekistan) | `uz-UZ` | Female | `uz-UZ-MadinaNeural` <sup>New</sup> | General |
-| Uzbek (Uzbekistan) | `uz-UZ` | Male | `uz-UZ-SardorNeural` <sup>New</sup> | General |
+| Uzbek (Uzbekistan) | `uz-UZ` | Female | `uz-UZ-MadinaNeural` | General |
+| Uzbek (Uzbekistan) | `uz-UZ` | Male | `uz-UZ-SardorNeural` | General |
| Vietnamese (Vietnam) | `vi-VN` | Female | `vi-VN-HoaiMyNeural` | General |
| Vietnamese (Vietnam) | `vi-VN` | Male | `vi-VN-NamMinhNeural` | General |
| Welsh (United Kingdom) | `cy-GB` | Female | `cy-GB-NiaNeural` | General |
| Welsh (United Kingdom) | `cy-GB` | Male | `cy-GB-AledNeural` | General |
-| Zulu (South Africa) | `zu-ZA` | Female | `zu-ZA-ThandoNeural` <sup>New</sup> | General |
-| Zulu (South Africa) | `zu-ZA` | Male | `zu-ZA-ThembaNeural` <sup>New</sup> | General |
+| Zulu (South Africa) | `zu-ZA` | Female | `zu-ZA-ThandoNeural` | General |
+| Zulu (South Africa) | `zu-ZA` | Male | `zu-ZA-ThembaNeural` | General |
> [!IMPORTANT]
> The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021.
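A voice name from the "Voice name" column above is selected through the standard SSML `voice` element. A minimal sketch, using `en-GB-SoniaNeural` (the replacement voice named in the note above); the spoken text is illustrative:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-GB">
  <!-- The name attribute takes any voice name from the tables above. -->
  <voice name="en-GB-SoniaNeural">
    This text is spoken with the Sonia neural voice.
  </voice>
</speak>
```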
The following neural voices are in public preview.
| Language | Locale | Gender | Voice name | Style support |
|---|---|---|---|---|
-| English (United States) | `en-US` | Female | `en-US-JennyMultilingualNeural` <sup>New</sup> | General,multilingual capabilities available [using SSML](speech-synthesis-markup.md#create-an-ssml-document) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaochenNeural` <sup>New</sup> | Optimized for spontaneous conversation |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyanNeural` <sup>New</sup> | Optimized for customer service |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoshuangNeural` <sup>New</sup> | Child voice,optimized for child story and chat; multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles)|
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoqiuNeural` <sup>New</sup> | Optimized for narrating |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaochenNeural` | Optimized for spontaneous conversation |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoqiuNeural` | Optimized for narrating |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoshuangNeural` | Child voice, optimized for child story and chat; multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyanNeural` | Optimized for customer service |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-AbbiNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-BellaNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-HollieNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-OliviaNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Girl | `en-GB-MaisieNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-AlfieNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-ElliotNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-EthanNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-NoahNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-OliverNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-ThomasNeural` <sup>New</sup> | General |
+| English (United States) | `en-US` | Female | `en-US-JennyMultilingualNeural` | General, multilingual capabilities available [using SSML](speech-synthesis-markup.md#create-an-ssml-document) |
+| French (France) | `fr-FR` | Female | `fr-FR-BrigitteNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Female | `fr-FR-CelesteNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Female | `fr-FR-CoralieNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Female | `fr-FR-JacquelineNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Female | `fr-FR-JosephineNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Female | `fr-FR-YvetteNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Girl | `fr-FR-EloiseNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Male | `fr-FR-AlainNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Male | `fr-FR-ClaudeNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Male | `fr-FR-JeromeNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Male | `fr-FR-MauriceNeural` <sup>New</sup> | General |
+| French (France) | `fr-FR` | Male | `fr-FR-YvesNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Female | `de-DE-AmalaNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Female | `de-DE-ElkeNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Female | `de-DE-KlarissaNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Female | `de-DE-LouisaNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Female | `de-DE-MajaNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Female | `de-DE-TanjaNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Girl | `de-DE-GiselaNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Male | `de-DE-BerndNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Male | `de-DE-ChristophNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KasperNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KillianNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KlausNeural` <sup>New</sup> | General |
+| German (Germany) | `de-DE` | Male | `de-DE-RalfNeural` <sup>New</sup> | General |
> [!IMPORTANT]
> Voices in public preview are only available in three service regions: East US, West Europe, and Southeast Asia.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
* Speaker Recognition service is generally available (GA). With [Speaker Recognition](./speaker-recognition-overview.md) you can accurately verify and identify speakers by their unique voice characteristics.
* Speech SDK 1.19.0 released, including Speaker Recognition support and Mac M1 ARM support; OpenSSL linking on Linux is now dynamic, and Ubuntu 16.04 is no longer supported.
* Custom Neural Voice extended to support [49 locales](./language-support.md#custom-neural-voice).
+* Prebuilt Neural Voice added new [languages and variants](./language-support.md#prebuilt-neural-voices).
* Commitment Tiers added to [pricing options](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).

## Release notes

**Choose a service or resource**
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/concepts/best-practices.md
Title: Best practices - question answering
description: Use these best practices to improve your project and provide better results to your application/chat bot's end users.
-Previously updated : 11/02/2021
+Last updated : 01/26/2022

# Question answering best practices
Use these best practices to improve your knowledge base and provide better resul
## Extraction
-The question answering is continually improving the algorithms that extract question answer pairs from content and expanding the list of supported file and HTML formats. In general, FAQ pages should be stand-alone and not combined with other information. Product manuals should have clear headings and preferably an index page.
+Question answering is continually improving the algorithms that extract question answer pairs from content and expanding the list of supported file and HTML formats. In general, FAQ pages should be stand-alone and not combined with other information. Product manuals should have clear headings and preferably an index page.
## Creating good questions and answers
-### Good questions
+We've used the following list of question and answer pairs as a representation of a knowledge base to highlight best practices when authoring knowledge bases for question answering.
-The best questions are simple. Consider the key word or phrase for each question then create a simple question for that key word or phrase.
+| Question | Answer |
+|-|-|
+| I want to buy a car |There are three options to buy a car.|
+| I want to purchase software license |Software license can be purchased online at no cost.|
+| What is the price of Microsoft stock? | $200. |
+| How to buy Microsoft Services | Microsoft services can be bought online.|
+| Want to sell car | Please send car pics and document.|
+| How to get access to identification card? | Apply via company portal to get identification card.|
-Add as many alternate questions as you need but keep the alterations simple. Adding more words or phrasings that are not part of the main goal of the question does not help the question answering algorithms find a match.
+### When should you add alternate questions to question and answer pairs?
-### Add relevant alternative questions
+Question answering employs a transformer-based ranker that takes care of user queries that are semantically similar to the question in the knowledge base. For example, consider the following question answer pair:
-Your user may enter questions with either a conversational style of text, `How do I add a toner cartridge to my printer?` or a keyword search such as `toner cartridge`. The project should have both styles of questions in order to correctly return the best answer. If you aren't sure what keywords a customer is entering, use the [Azure Monitor](../how-to/analytics.md) data to analyze queries.
+*Question: What is the price of Microsoft Stock?*
+*Answer: $200.*
-### Good answers
+The service can return the expected response for semantically similar queries such as:
-The best answers are simple answers but not too simple. Do not use answers such as `yes` and `no`. If your answer should link to other sources or provide a rich experience with media and links, use metadata tagging to distinguish between answers, then submit the query with metadata tags in the `strictFilters` property to get the correct answer version.
+ΓÇ£How much is Microsoft stock worth?
+ΓÇ£How much is Microsoft share value?ΓÇ¥
+ΓÇ£How much does a Microsoft share cost?ΓÇ¥
+ΓÇ£What is the market value of a Microsoft stock?ΓÇ¥
+ΓÇ£What is the market value of a Microsoft share?ΓÇ¥
-|Answer|Follow-up prompts|
-|--|--|
-|Power down the Surface laptop with the power button on the keyboard.|* Key-combinations to sleep, shut down, and restart.<br>* How to hard-boot a Surface laptop<br>* How to change the BIOS for a Surface laptop<br>* Differences between sleep, shut down and restart|
-|Customer service is available via phone, Skype, and text message 24 hours a day.|* Contact information for sales.<br> * Office and store locations and hours for an in-person visit.<br> * Accessories for a Surface laptop.|
+However, it's important to understand that the confidence score with which the system returns the correct response will vary based on the input query and how different it is from the original question answer pair.
+
+In certain scenarios, you should add an alternate question. When you've verified that a particular query doesn't return the correct answer despite it being present in the knowledge base, add that query as an alternate question to the intended question answer pair.
+
+### How many alternate questions per question answer pair are optimal?
+
+Users can add up to 10 alternate questions. Alternate questions beyond the first 10 aren't considered by our core ranker. However, they're evaluated in the other processing layers, resulting in better output overall. For example, all the alternate questions are considered in the preprocessing step to look for an exact match.
+
+Semantic understanding in question answering should be able to take care of similar alternate questions.
+
+The return on investment starts diminishing once you exceed 10 questions. Even if you're adding more than 10 alternate questions, try to make the initial 10 as semantically dissimilar as possible, so that all kinds of intents for the answer are captured by these 10 questions. For the knowledge base at the beginning of this section, in question answer pair #1, adding alternate questions such as "How can I buy a car" or "I wanna buy a car" isn't required, whereas adding alternate questions such as "How to purchase a car" or "What are the options of buying a vehicle" can be useful.
+
+### When should you add synonyms to a knowledge base?
+
+Question answering provides the flexibility to use synonyms at the knowledge base level, unlike QnA Maker where synonyms are shared across knowledge bases for the entire service.
+
+For better relevance, you need to provide a list of acronyms that the end user intends to use interchangeably. The following is a list of acceptable acronyms:
+
+`MSFT` - Microsoft
+`ID` - Identification
+`ETA` - Estimated Time of Arrival
+
+Other than acronyms, if you think your words are similar in the context of a particular domain and generic language models won't consider them similar, it's better to add them as synonyms. For instance, if an auto company producing a car model X receives queries such as "my car's audio isn't working" and the knowledge base has questions on "fixing audio for car X", then you need to add 'X' and 'car' as synonyms.
+
+The transformer-based model already takes care of most of the common synonym cases, for example: `Purchase - Buy`, `Sell - Auction`, `Price - Value`. For another example, consider the following question answer pair: Q: "What is the price of Microsoft Stock?" A: "$200".
+
+If we receive user queries like "Microsoft stock value", "Microsoft share value", "Microsoft stock worth", "Microsoft share worth", "stock value", and so on, you should get the correct answer even though these queries contain words like "share", "value", and "worth", which aren't originally present in the knowledge base.
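One way to picture the word-alteration idea behind synonyms is to map every term in a synonym group to a single canonical form before matching. This is an illustrative sketch only; the group contents come from the acronym list above, and all function names here are ours, not the question answering API.

```python
# Illustrative sketch of synonym "alterations": every term in a group is
# mapped to one canonical representative before matching, so "MSFT" and
# "Microsoft" compare equal. Not the service's actual implementation.
SYNONYM_GROUPS = [
    {"msft", "microsoft"},
    {"id", "identification"},
    {"eta", "estimated time of arrival"},
]

def canonicalize(term: str) -> str:
    t = term.lower()
    for group in SYNONYM_GROUPS:
        if t in group:
            return min(group)  # pick a stable representative for the group
    return t
```

With this in place, a query term such as `MSFT` and a knowledge-base term such as `Microsoft` canonicalize to the same string, which is the effect synonyms give you at the knowledge-base level.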
+
+### How are lowercase/uppercase characters treated?
+
+Question answering takes casing into account, but it's intelligent enough to understand when casing should be ignored. You shouldn't see any perceivable difference due to wrong casing.
+
+### How are question answer pairs prioritized for multi-turn questions?
+
+When a knowledge base has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other question answer pairs, for the next query we give slight preference to all the children question answer pairs, sibling question answer pairs, and grandchildren question answer pairs in that order. Along with any query, the [Question Answering REST API](https://docs.microsoft.com/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a `context` object with the property `previousQnAId`, which denotes the last top answer. Based on this previous `QnAID`, all the related `QnAs` are boosted.
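The multi-turn boosting described above can be sketched as a request body carrying the `context` object with `previousQnAId`, per the linked REST API. The field `previousUserQuery`, the question text, and the ID value below are illustrative assumptions; check the exact shape against the linked API reference.

```python
import json

# Hedged sketch: a follow-up query that passes the previous top answer's ID
# in the `context` object so related QnAs get boosted. The question text and
# the ID 42 are made-up examples.
def build_followup_body(question, previous_qna_id, previous_query):
    body = {
        "question": question,
        "top": 3,
        "context": {
            "previousQnAId": previous_qna_id,     # ID of the last top answer
            "previousUserQuery": previous_query,  # query that produced it
        },
    }
    return json.dumps(body)

payload = build_followup_body("What about weekends?", 42, "What are your opening hours?")
```

This JSON string would be POSTed as the body of a get-answers call; the service uses `previousQnAId` to prefer children, sibling, and grandchildren question answer pairs of that answer.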
+
+### How are accents treated?
+
+Accents are supported for all major European languages. If the query has an incorrect accent, the confidence score might be slightly different, but the service still returns the relevant answer and takes care of minor errors by leveraging fuzzy search.
+
+### How is punctuation in a user query treated?
+
+Punctuation is ignored in a user query before sending it to the ranking stack. Ideally it shouldn’t impact the relevance scores. Punctuation that is ignored is as follows: `,?:;\"'(){}[]-+。./!*؟`
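To make the effect concrete, here is an illustrative sketch (not the service's actual code) of stripping the listed characters from a query before it reaches a ranking stage:

```python
# Illustrative only: remove the punctuation characters the article lists as
# ignored before ranking. This mimics the described behavior; it is not the
# question answering service's actual preprocessing code.
IGNORED = ",?:;\"'(){}[]-+。./!*؟"
_STRIP_TABLE = str.maketrans("", "", IGNORED)

def normalize_query(query: str) -> str:
    # Delete ignored punctuation, then trim surrounding whitespace.
    return query.translate(_STRIP_TABLE).strip()
```

For example, `normalize_query("What is the price of Microsoft stock?!")` and the same query without punctuation produce identical text, which is why punctuation ideally shouldn't change relevance scores.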
## Chit-Chat
If you add your own chit-chat question answer pairs, make sure to add metadata s
Question answering REST API uses both questions and the answer to search for best answers to a user's query.
-### Searching questions only when answer is not relevant
+### Searching questions only when the answer isn't relevant
Use the [`RankerType=QuestionOnly`](#choosing-ranker-type) if you don't want to search answers.
-An example of this is when the knowledge base is a catalog of acronyms as questions with their full form as the answer. The value of the answer will not help to search for the appropriate answer.
+An example of this is when the knowledge base is a catalog of acronyms as questions with their full form as the answer. The value of the answer won't help to search for the appropriate answer.
## Ranking/Scoring
-Make sure you are making the best use of the supported ranking features. Doing so will improve the likelihood that a given user query is answered with an appropriate response.
+Make sure you're making the best use of the supported ranking features. Doing so will improve the likelihood that a given user query is answered with an appropriate response.
### Choosing a threshold
Alternate questions to improve the likelihood of a match with a user query. Alte
|Original query|Alternate queries|Change|
|--|--|--|
|Is parking available?|Do you have a car park?|sentence structure|
- |Hi|Yo<br>Hey there!|word-style or slang|
+ |Hi|Yo<br>Hey there|word-style or slang|
### Use metadata tags to filter questions and answers
-Metadata adds the ability for a client application to know it should not take all answers but instead to narrow down the results of a user query based on metadata tags. The project/knowledge base answer can differ based on the metadata tag, even if the query is the same. For example, *"where is parking located"* can have a different answer if the location of the restaurant branch is different - that is, the metadata is *Location: Seattle* versus *Location: Redmond*.
+Metadata adds the ability for a client application to know it shouldn't take all answers, but instead narrow down the results of a user query based on metadata tags. The project/knowledge base answer can differ based on the metadata tag, even if the query is the same. For example, *"where is parking located"* can have a different answer if the location of the restaurant branch is different - that is, the metadata is *Location: Seattle* versus *Location: Redmond*.
### Use synonyms
-While there is some support for synonyms in the English language, use case-insensitive [word alterations](../tutorials/adding-synonyms.md) to add synonyms to keywords that take different forms.
+While there's some support for synonyms in the English language, use case-insensitive [word alterations](../tutorials/adding-synonyms.md) to add synonyms to keywords that take different forms.
|Original word|Synonyms|
|--|--|
Question answering allows users to collaborate on a project/knowledge base. User
## Active learning
-[Active learning](../tutorials/active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It is important to allow client-applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in the Language Studio portal, you can review and accept or reject those suggestions.
+[Active learning](../tutorials/active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It's important to allow client applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in the Language Studio portal, you can review and accept or reject those suggestions.
## Next steps
cognitive-services Export Import Refresh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/how-to/export-import-refresh.md
+
+ Title: Export/import/refresh | question answering projects and knowledge bases
+description: Learn about backing up your question answering projects and knowledge bases
+recommendations: false
Last updated : 01/25/2022+
+# Export-import-refresh in question answering
+
+You may want to create a copy of your question answering project or related question and answer pairs for several reasons:
+
+* To implement a backup and restore process
+* To integrate with your CI/CD pipeline
+* To move your data to different regions
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+* A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled. Remember your Azure Active Directory ID, subscription, and the language resource name you selected when you created the resource.
+
+## Export a project
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
+
+3. Select the project you wish to export > Select **Export** > You'll have the option to export as an **Excel** or **TSV** file.
+
+4. You'll be prompted to save your exported file locally as a zip file.
+
+### Export a project programmatically
+
+To automate the export process, use the [export functionality of the authoring API](./authoring.md#export-project-metadata-and-assets).
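As a rough illustration of what such an automated call looks like, the sketch below builds the pieces of an export request. The path, `:export` action, and `api-version` value mirror the style of the update-sources example later in this article, but they are assumptions to verify against the linked authoring API reference; the endpoint and project name are placeholders.

```python
# Hedged sketch of an authoring-API export request. The URL shape and
# api-version are assumptions based on the update-sources example in this
# article; verify both against the authoring API reference before use.
def build_export_request(endpoint: str, project: str, fmt: str = "tsv") -> dict:
    return {
        "method": "POST",
        "url": (f"{endpoint}/language/query-knowledgebases/projects/"
                f"{project}/:export?api-version=2021-10-01&format={fmt}"),
        # Replace the placeholder with a real key from Keys & Endpoint.
        "headers": {"Ocp-Apim-Subscription-Key": "{API-KEY}"},
    }

req = build_export_request("https://myresource.cognitiveservices.azure.com", "my-project")
```

A CI/CD job could issue this request on a schedule and archive the returned export as a backup.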
+
+## Import a project
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
+
+3. Select **Import** and specify the file type you selected for the export process: either **Excel** or **TSV**.
+
+4. Select **Choose File** and browse to the local zipped copy of your project that you exported previously.
+
+5. Provide a unique name for the project you're importing.
+
+6. Remember that a project that has only been imported still needs to be deployed/published if you want it to be live.
+
+### Import a project programmatically
+
+To automate the import process, use the [import functionality of the authoring API](./authoring.md#import-project).
+
+## Refresh a source URL
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
+
+3. Select the project that contains the source you want to refresh > select **Manage sources**.
+
+4. We recommend having a backup of your project/question answer pairs prior to each refresh so that you can always roll back if needed.
+
+5. Select a URL-based source to refresh > Select **Refresh URL**.
+
+### Refresh a URL programmatically
+
+To automate the URL refresh process, use the [update sources functionality of the authoring API](./authoring.md#update-sources).
+
+The update sources example in the [Authoring API docs](./authoring.md#update-sources) shows the syntax for adding a new URL-based source. An example query for an update would be as follows:
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy knowledge base** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this were your endpoint in the following code sample, you would only need to add the region-specific portion, `southcentralus`, as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy knowledge base** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project/knowledge base where you would like to update sources.|
+
+```bash
+curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
+ {
+ "op": "replace",
+ "value": {
+ "displayName": "source5",
+ "sourceKind": "url",
+ "sourceUri": "https://download.microsoft.com/download/7/B/1/7B10C82E-F520-4080-8516-5CF0D803EEE0/surface-book-user-guide-EN.pdf",
+ "refresh": "true"
+ }
+ }
+]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/sources?api-version=2021-10-01'
+```
+
+## Export questions and answers
+
+It's also possible to export/import a specific knowledge base of questions and answers rather than the entire question answering project.
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
+
+3. Select the project that contains the knowledge base question and answer pairs you want to export.
+
+4. Select **Edit knowledge base**.
+
+5. To the right of **Show columns** is an ellipsis (`...`) button > Select the `...` > a dropdown will reveal the option to export/import questions and answers.
+
+ Depending on the size of your web browser, you may experience the UI differently. Smaller browsers will see two separate ellipsis buttons.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of selecting multiple UI ellipsis buttons to get to import/export question and answer pair option](../media/export-import-refresh/export-questions.png)
+
+## Import questions and answers
+
+It's also possible to export/import a specific knowledge base of questions and answers rather than the entire question answering project.
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
+
+3. Select the project that contains the knowledge base question and answer pairs you want to export.
+
+4. Select **Edit knowledge base**.
+
+5. To the right of **Show columns** is an ellipsis (`...`) button > Select the `...` > a dropdown will reveal the option to export/import questions and answers.
+
+ Depending on the size of your web browser, you may experience the UI differently. Smaller browsers will see two separate ellipsis buttons.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of selecting multiple UI ellipsis buttons to get to import/export question and answer pair option](../media/export-import-refresh/export-questions.png)
+
+## Next steps
+
+* [Learn how to use the Authoring API](./authoring.md)
cosmos-db How To Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-configure-private-endpoints.md
The following limitations apply when you're using Private Link with an Azure Cos
* A network administrator should be granted at least the `Microsoft.DocumentDB/databaseAccounts/PrivateEndpointConnectionsApproval/action` permission at the Azure Cosmos account scope to create automatically approved private endpoints.
+* Currently, you can't approve a rejected private endpoint connection. Instead, re-create the private endpoint to resume the private connectivity. The Cosmos DB private link service automatically approves the re-created private endpoint.
+
### Limitations to private DNS zone integration

Unless you're using a private DNS zone group, DNS records in the private DNS zone are not removed automatically when you delete a private endpoint or you remove a region from the Azure Cosmos account. You must manually remove the DNS records before:
cosmos-db Sql Api Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-python.md
ms.devlang: python
Previously updated : 04/06/2021
Last updated : 01/25/2022
|**API documentation**|[Python API reference documentation](/python/api/azure-cosmos/azure.cosmos?preserve-view=true&view=azure-python)|
|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)|
|**Get started**|[Get started with the Python SDK](create-sql-api-python.md)|
-|**Current supported platform**|[Python 2.7](https://www.python.org/downloads/) and [Python 3.6+](https://www.python.org/downloads/)|
+|**Current supported platform**|[Python 3.6+](https://www.python.org/downloads/)|
-## Release history
-
-## 4.2.0
-
-**Bug fixes**
-- Fixed bug where continuation token is not honored when query_iterable is used to get results by page.
-- Fixed bug where resource tokens are not honored for document reads and deletes.
-
-**New features**
-- Added support for passing `partitionKey` while querying Change-Feed.
-
-## 4.1.0
-- Added deprecation warning for "lazy" indexing mode. The backend no longer allows creating containers with this mode and will set them to consistent instead.
-
-**New features**
-- Added the ability to set the analytical storage TTL when creating a new container.-
-**Bug fixes**
-- Fixed support for `dicts` as inputs for get_client APIs.
-- Fixed Python 2/3 compatibility in query iterators.
-- Fixed type hint error.
-- Fixed bug where options headers were not added to upsert_item function.
-- Fixed error raised when a non-string ID is used in an item. It now raises TypeError rather than AttributeError.
-
-## 4.0.0
-
-* Stable release.
-* Added HttpLoggingPolicy to pipeline to enable passing in a custom logger for request and response headers.
-
-### 4.0.0b6
-
-* Fixed bug in synchronized_request for media APIs.
-* Removed MediaReadMode and MediaRequestTimeout from ConnectionPolicy as media requests are not supported.
-
-### 4.0.0b5
-
-* azure.cosmos.errors module deprecated and replaced by azure.cosmos.exceptions
-* The access condition parameters (`access_condition`, `if_match`, `if_none_match`) have been deprecated in favor of separate `match_condition` and `etag` parameters.
-* Fixed bug in routing map provider.
-* Added query Distinct, Offset, and Limit support.
-* Default document query execution context now used for
-
- * Change Feed queries
- * single partition queries (`partitionkey`, `partitionKeyRangeId` is present in options)
- * Non-document queries
-
-* Errors out for aggregates on multiple partitions, with enable cross partition query set to true, but no "value" keyword present
-* Hits query plan endpoint for other scenarios to fetch query plan
-* Added `__repr__` support for Cosmos entity objects.
-* Updated documentation.
-
-### 4.0.0b4
-
-* Added support for a `timeout` keyword argument to all operations to specify an absolute timeout in seconds within which the operation must be completed. If the timeout value is exceeded, a `azure.cosmos.errors.CosmosClientTimeoutError` will be raised.
-
-* Added a new `ConnectionRetryPolicy` to manage retry behavior during HTTP connection errors.
-
-* Added new constructor and per-operation configuration keyword arguments:
-
- * `retry_total` - Maximum retry attempts.
- * `retry_backoff_max` - Maximum retry wait time in seconds.
- * `retry_fixed_interval` - Fixed retry interval in milliseconds.
- * `retry_read` - Maximum number of sockets read retry attempts.
- * `retry_connect` - Maximum number of connection error retry attempts.
- * `retry_status` - Maximum number of retry attempts on error status codes.
- * `retry_on_status_codes` - A list of specific status codes to retry on.
- * `retry_backoff_factor` - Factor to calculate wait time between retry attempts.
-
-### 4.0.0b3
-
-* Added `create_database_if_not_exists()` and `create_container_if_not_exists` functionalities to CosmosClient and Database respectively.
-
-### 4.0.0b2
-
-* Version 4.0.0b2 is the second iteration in our efforts to build a client library that suits the Python language best practices.
-
-**Breaking changes**
-
-* The client connection has been adapted to consume the HTTP pipeline defined in `azure.core.pipeline`.
-
-* Interactive objects have now been renamed as proxies. This includes:
-
- * `Database` -> `DatabaseProxy`
- * `User` -> `UserProxy`
- * `Container` -> `ContainerProxy`
- * `Scripts` -> `ScriptsProxy`
-
-* The constructor of `CosmosClient` has been updated:
-
- * The `auth` parameter has been renamed to `credential` and will now take an authentication type directly. This means the primary key value, a dictionary of resource tokens, or a list of permissions can be passed in. However the old dictionary format is still supported.
-
- * The `connection_policy` parameter has been made a keyword only parameter, and while it is still supported, each of the individual attributes of the policy can now be passed in as explicit keyword arguments:
-
- * `request_timeout`
- * `media_request_timeout`
- * `connection_mode`
- * `media_read_mode`
- * `proxy_config`
- * `enable_endpoint_discovery`
- * `preferred_locations`
- * `multiple_write_locations`
-
-* A new constructor has been added to `CosmosClient` to enable creation via a connection string retrieved from the Azure portal.
-
-* Some `read_all` operations have been renamed to `list` operations:
-
- * `CosmosClient.read_all_databases` -> `CosmosClient.list_databases`
- * `Container.read_all_conflicts` -> `ContainerProxy.list_conflicts`
- * `Database.read_all_containers` -> `DatabaseProxy.list_containers`
- * `Database.read_all_users` -> `DatabaseProxy.list_users`
- * `User.read_all_permissions` -> `UserProxy.list_permissions`
-
-* All operations that take `request_options` or `feed_options` parameters, these have been moved to keyword only parameters. In addition, while these options dictionaries are still supported, each of the individual options within the dictionary are now supported as explicit keyword arguments.
-
-* The error hierarchy is now inherited from `azure.core.AzureError`:
-
- * `HTTPFailure` has been renamed to `CosmosHttpResponseError`
- * `JSONParseFailure` has been removed and replaced by `azure.core.DecodeError`
- * Added additional errors for specific response codes:
- * `CosmosResourceNotFoundError` for status 404
- * `CosmosResourceExistsError` for status 409
- * `CosmosAccessConditionFailedError` for status 412
-
-* `CosmosClient` can now be run in a context manager to handle closing the client connection.
-
-* Iterable responses (for example, query responses and list responses) are now of type `azure.core.paging.ItemPaged`. The method `fetch_next_block` has been replaced by a secondary iterator, accessed by the `by_page` method.
-
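The flat-iteration-plus-`by_page` pattern can be sketched with a small stand-in class. This is not the `azure.core.paging.ItemPaged` implementation; it only illustrates the two ways a paged response can be consumed:

```python
# Illustrative stand-in for the ItemPaged pattern: iterate items directly,
# or call by_page() for a secondary iterator that yields one page at a time.
import itertools

class ItemPagedSketch:
    def __init__(self, items, page_size):
        self._items = list(items)
        self._page_size = page_size

    def __iter__(self):
        # Flat iteration over every item, across page boundaries.
        return iter(self._items)

    def by_page(self):
        # Secondary iterator: yields lists of at most page_size items.
        it = iter(self._items)
        while True:
            page = list(itertools.islice(it, self._page_size))
            if not page:
                return
            yield page
```

With five items and a page size of two, flat iteration yields all five items while `by_page()` yields pages of `[2, 2, 1]` items.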
-### 4.0.0b1
-
-Version 4.0.0b1 is the first preview of our efforts to create a user-friendly client library that suits the Python language best practices. For more information about this, and preview releases of other Azure SDK libraries, please visit https://aka.ms/azure-sdk-preview1-python.
-
-**Breaking changes: New API design**
-
-* Operations are now scoped to a particular client:
-
- * `CosmosClient`: This client handles account-level operations. This includes managing service properties and listing the databases within an account.
- * `Database`: This client handles database-level operations. This includes creating and deleting containers, users, and stored procedures. It can be accessed from a `CosmosClient` instance by name.
- * `Container`: This client handles operations for a particular container. This includes querying and inserting items and managing properties.
- * `User`: This client handles operations for a particular user. This includes adding and deleting permissions and managing user properties.
-
- These clients can be accessed by navigating down the client hierarchy using the `get_<child>_client` method. For full details on the new API, please see the [reference documentation](https://aka.ms/azsdk-python-cosmos-ref).
-
-* Clients are accessed by name rather than by ID. No need to concatenate strings to create links.
-
-* No more need to import types and methods from individual modules. The public API surface area is available directly in the `azure.cosmos` package.
+> [!IMPORTANT]
+> * Versions 4.3.0b2 and higher only support Python 3.6+. Python 2 is not supported.
-* Individual request properties can be provided as keyword arguments rather than constructing a separate `RequestOptions` instance.
-
-### 3.0.2
-
-* Added Support for MultiPolygon Datatype
-* Bug Fix in Session Read Retry Policy
-* Bug Fix for Incorrect padding issues while decoding base 64 strings
-
-### 3.0.1
-
-* Bug fix in LocationCache
-* Bug fix endpoint retry logic
-* Fixed documentation
-
-### 3.0.0
-
-* Multi-regions write support added
-* Naming changes
- * DocumentClient to CosmosClient
- * Collection to Container
- * Document to Item
- * Package name updated to "azure-cosmos"
- * Namespace updated to "azure.cosmos"
-
-### 2.3.3
-
-* Added support for proxy
-* Added support for reading change feed
-* Added support for collection quota headers
-* Bugfix for large session tokens issue
-* Bugfix for ReadMedia API
-* Bugfix in partition key range cache
-
-### 2.3.2
-
-* Added support for default retries on connection issues.
-
-### 2.3.1
-
-* Updated documentation to reference Azure Cosmos DB instead of Azure DocumentDB.
-
-### 2.3.0
-
-* This SDK version requires the latest version of Azure Cosmos DB Emulator available for download from https://aka.ms/cosmosdb-emulator.
-
-### 2.2.1
-
-* bugfix for aggregate dict
-* bugfix for trimming slashes in the resource link
-* tests for unicode encoding
-
-### 2.2.0
-
-* Added support for Request Unit per Minute (RU/m) feature.
-* Added support for a new consistency level called ConsistentPrefix.
-
-### 2.1.0
-
-* Added support for aggregation queries (COUNT, MIN, MAX, SUM, and AVG).
-* Added an option for disabling SSL verification when running against DocumentDB Emulator.
-* Removed the restriction of dependent requests module to be exactly 2.10.0.
-* Lowered minimum throughput on partitioned collections from 10,100 RU/s to 2500 RU/s.
-* Added support for enabling script logging during stored procedure execution.
-* REST API version bumped to '2017-01-19' with this release.
-
-### 2.0.1
-
-* Made editorial changes to documentation comments.
-
-### 2.0.0
-
-* Added support for Python 3.5.
-* Added support for connection pooling using the requests module.
-* Added support for session consistency.
-* Added support for TOP/ORDERBY queries for partitioned collections.
-
-### 1.9.0
-
-* Added retry policy support for throttled requests. (Throttled requests receive a request rate too large exception, error code 429.)
- By default, DocumentDB retries nine times for each request when error code 429 is encountered, honoring the retryAfter time in the response header.
- A fixed retry interval time can now be set as part of the RetryOptions property on the ConnectionPolicy object if you want to ignore the retryAfter time returned by server between the retries.
- DocumentDB now waits for a maximum of 30 seconds for each request that is being throttled (irrespective of retry count) and returns the response with error code 429.
- This time can also be overridden in the RetryOptions property on ConnectionPolicy object.
-
-* DocumentDB now returns x-ms-throttle-retry-count and x-ms-throttle-retry-wait-time-ms as the response headers in every request to denote the throttle retry count
- and the cumulative time the request waited between the retries.
-
-* Removed the RetryPolicy class and the corresponding property (retry_policy) exposed on the document_client class and instead introduced a RetryOptions class
- exposing the RetryOptions property on ConnectionPolicy class that can be used to override some of the default retry options.
-
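The 1.9.0 retry behavior described above (up to nine retries on 429 by default, honoring the server's retryAfter unless a fixed interval is configured, with a 30-second cumulative wait cap) can be sketched as a simple loop. The helper name and the simulated `(status, retry_after_ms)` response shape are illustrative assumptions, not SDK code:

```python
# Hedged sketch of throttling retries: retry 429 responses, honoring
# retry-after unless a fixed interval overrides it, capped at 30 s total.
def send_with_retries(attempt_fn, max_retries=9, fixed_interval_ms=None,
                      max_wait_ms=30_000):
    """attempt_fn() returns (status, retry_after_ms) per simulated request."""
    waited_ms = 0
    retries = 0
    while True:
        status, retry_after_ms = attempt_fn()
        if status != 429:
            return status, retries, waited_ms
        wait = fixed_interval_ms if fixed_interval_ms is not None else retry_after_ms
        if retries >= max_retries or waited_ms + wait > max_wait_ms:
            return 429, retries, waited_ms  # give up and surface the 429
        waited_ms += wait                   # a real client would sleep here
        retries += 1
```

Two throttled attempts followed by a success would report two retries and the cumulative wait, matching the x-ms-throttle-retry-count / x-ms-throttle-retry-wait-time-ms idea described above.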
-### 1.8.0
-
-* Added the support for geo-replicated database accounts.
-* Test fixes to move the global host and masterKey into the individual test classes.
-
-### 1.7.0
-
-* Added the support for Time To Live(TTL) feature for documents.
-
-### 1.6.1
-
-* Bug fixes related to server-side partitioning to allow special characters in partition key path.
-
-### 1.6.0
-
-* Added the support for server-side partitioned collections feature.
-
-### 1.5.0
-
-* Added Client-side sharding framework to the SDK. Implemented HashPartitionResolver and RangePartitionResolver classes.
-
-### 1.4.2
-
-* Implement Upsert. New UpsertXXX methods added to support Upsert feature.
-* Implement ID-Based Routing. No public API changes, all changes internal.
-
-### 1.3.0
-
-* Release skipped to bring version number in alignment with other SDKs
-
-### 1.2.0
-
-* Supports GeoSpatial index.
-* Validates ID property for all resources. Ids for resources cannot contain `?, /, #, \\` characters or end with a space.
-* Adds new header "index transformation progress" to ResourceResponse.
-
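The ID validation rule added in 1.2.0 is simple enough to sketch directly. The helper name is hypothetical; the rule itself (no `?`, `/`, `#`, or `\` characters, and no trailing space) comes from the entry above:

```python
# Illustrative validator for the resource ID rules described above.
_FORBIDDEN = set('?/#\\')

def is_valid_resource_id(resource_id):
    """An ID may not contain ?, /, #, or \\ and may not end with a space."""
    if not resource_id or resource_id.endswith(' '):
        return False
    return not (_FORBIDDEN & set(resource_id))
```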
-### 1.1.0
-
-* Implements V2 indexing policy
-
-### 1.0.1
-
-* Supports proxy connection
+## Release history
+Release history is maintained in the azure-sdk-for-python repo. For a detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cosmos/azure-cosmos/CHANGELOG.md).
## Release & retirement dates
Microsoft provides notification at least **12 months** in advance of retiring an
| Version | Release Date | Retirement Date |
| --- | --- | --- |
-| [4.2.0](#420) |Oct 09, 2020 | |
-| [4.1.0](#410) |Aug 10, 2020 | |
-| [4.0.0](#400) |May 20, 2020 | |
-| [3.0.2](#302) |Nov 15, 2018 | |
-| [3.0.1](#301) |Oct 04, 2018 | |
-| [2.3.3](#233) |Sept 08, 2018 |August 31, 2022 |
-| [2.3.2](#232) |May 08, 2018 |August 31, 2022 |
-| [2.3.1](#231) |December 21, 2017 |August 31, 2022 |
-| [2.3.0](#230) |November 10, 2017 |August 31, 2022 |
-| [2.2.1](#221) |Sep 29, 2017 |August 31, 2022 |
-| [2.2.0](#220) |May 10, 2017 |August 31, 2022 |
-| [2.1.0](#210) |May 01, 2017 |August 31, 2022 |
-| [2.0.1](#201) |October 30, 2016 |August 31, 2022 |
-| [2.0.0](#200) |September 29, 2016 |August 31, 2022 |
-| [1.9.0](#190) |July 07, 2016 |August 31, 2022 |
-| [1.8.0](#180) |June 14, 2016 |August 31, 2022 |
-| [1.7.0](#170) |April 26, 2016 |August 31, 2022 |
-| [1.6.1](#161) |April 08, 2016 |August 31, 2022 |
-| [1.6.0](#160) |March 29, 2016 |August 31, 2022 |
-| [1.5.0](#150) |January 03, 2016 |August 31, 2022 |
-| [1.4.2](#142) |October 06, 2015 |August 31, 2022 |
+| 4.2.0 |Oct 09, 2020 | |
+| 4.1.0 |Aug 10, 2020 | |
+| 4.0.0 |May 20, 2020 | |
+| 3.0.2 |Nov 15, 2018 | |
+| 3.0.1 |Oct 04, 2018 | |
+| 2.3.3 |Sept 08, 2018 |August 31, 2022 |
+| 2.3.2 |May 08, 2018 |August 31, 2022 |
+| 2.3.1 |December 21, 2017 |August 31, 2022 |
+| 2.3.0 |November 10, 2017 |August 31, 2022 |
+| 2.2.1 |Sep 29, 2017 |August 31, 2022 |
+| 2.2.0 |May 10, 2017 |August 31, 2022 |
+| 2.1.0 |May 01, 2017 |August 31, 2022 |
+| 2.0.1 |October 30, 2016 |August 31, 2022 |
+| 2.0.0 |September 29, 2016 |August 31, 2022 |
+| 1.9.0 |July 07, 2016 |August 31, 2022 |
+| 1.8.0 |June 14, 2016 |August 31, 2022 |
+| 1.7.0 |April 26, 2016 |August 31, 2022 |
+| 1.6.1 |April 08, 2016 |August 31, 2022 |
+| 1.6.0 |March 29, 2016 |August 31, 2022 |
+| 1.5.0 |January 03, 2016 |August 31, 2022 |
+| 1.4.2 |October 06, 2015 |August 31, 2022 |
| 1.4.1 |October 06, 2015 |August 31, 2022 |
-| [1.2.0](#120) |August 06, 2015 |August 31, 2022 |
-| [1.1.0](#110) |July 09, 2015 |August 31, 2022 |
-| [1.0.1](#101) |May 25, 2015 |August 31, 2022 |
+| 1.2.0 |August 06, 2015 |August 31, 2022 |
+| 1.1.0 |July 09, 2015 |August 31, 2022 |
+| 1.0.1 |May 25, 2015 |August 31, 2022 |
| 1.0.0 |April 07, 2015 |August 31, 2022 |
| 0.9.4-prelease |January 14, 2015 |February 29, 2016 |
| 0.9.3-prelease |December 09, 2014 |February 29, 2016 |
Microsoft provides notification at least **12 months** in advance of retiring an
## Next steps
-To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
+To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
# Self-service exchanges and refunds for Azure Reservations
-Azure Reservations provide flexibility to help meet your evolving needs. You can exchange reservations for another reservation of the same type. For example, you can return multiple compute reservations including Azure Dedicated Host, Azure VMware Solution, and Azure Virtual Machines with each other all at once. In other words, reservation products are interchangeable with each other if they're the same type of reservation. In an other example, you can exchange multiple SQL database reservation types including Managed Instances and Elastic Pool with each other.
-
+Azure Reservations provide flexibility to help meet your evolving needs. Reservation products are interchangeable with each other if they're the same type of reservation. For example, you can exchange multiple compute reservations including Azure Dedicated Host, Azure VMware Solution, and Azure Virtual Machines with each other all at once. In another example, you can exchange multiple SQL database reservation types including Managed Instances and Elastic Pool with each other.
However, you can't exchange dissimilar reservations. For example, you can't exchange a Cosmos DB reservation for SQL Database. You can also exchange a reservation to purchase another reservation of a similar type in a different region. For example, you can exchange a reservation that's in West US 2 for one that's in West Europe.
When you exchange a reservation, you can change your term from one-year to three
You can also refund reservations, but the sum total of all canceled reservation commitment in your billing scope (such as EA, Microsoft Customer Agreement, and Microsoft Partner Agreement) can't exceed USD 50,000 in a 12 month rolling window.
-Azure Databricks reserved capacity, Azure VMware solution by CloudSimple reservation, Azure Red Hat Open Shift reservation, Red Hat plans and, SUSE Linux plans aren't eligible for refunds.
+Azure Databricks reserved capacity, Synapse Analytics Pre-purchase plan, Azure VMware solution by CloudSimple reservation, Azure Red Hat Open Shift reservation, Red Hat plans, and SUSE Linux plans aren't eligible for refunds.
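The USD 50,000 12-month rolling-window cap on refunds is just arithmetic over past cancellation dates and amounts, which can be sketched as follows. This is an illustration only (not an Azure API), and the 365-day window is an assumption standing in for "12 month rolling window":

```python
# Illustrative check: would a new refund keep total cancellations within
# USD 50,000 over the trailing 12 months? Not an Azure API.
from datetime import date, timedelta

LIMIT_USD = 50_000
WINDOW = timedelta(days=365)  # assumed stand-in for a 12-month rolling window

def refund_fits(past_refunds, new_amount, today):
    """past_refunds: iterable of (date, usd_amount) for prior cancellations."""
    recent = sum(amount for d, amount in past_refunds if today - d <= WINDOW)
    return recent + new_amount <= LIMIT_USD
```

A USD 40,000 cancellation older than a year drops out of the window, so only cancellations inside the trailing 12 months count against the cap.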
> [!NOTE]
> - **You must have owner access on the Reservation Order to exchange or refund an existing reservation**. You can [Add or change users who can manage a reservation](./manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default).
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/prepare-buy-reservation.md
Resources that run in a subscription with other offer types don't receive the re
You can purchase reservations from the Azure portal, APIs, PowerShell, or the CLI. Read the following articles that apply to you when you're ready to make a reservation purchase:

- [App Service](prepay-app-service.md)
+- [App Service - JBoss EA Integrated Support](prepay-jboss-eap-integrated-support-app-service.md)
- [Azure Cache for Redis](../../azure-cache-for-redis/cache-reserved-pricing.md)
-- [Cosmos DB](../../cosmos-db/cosmos-db-reserved-capacity.md)
+- [Azure Data Factory](../../data-factory/data-flow-understand-reservation-charges.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Azure Database for MariaDB](../../mariadb/concept-reserved-pricing.md)
+- [Azure Database for MySQL](../../mysql/concept-reserved-pricing.md)
+- [Azure Database for PostgreSQL](../../postgresql/concept-reserved-pricing.md)
+- [Azure Blob storage](../../storage/blobs/storage-blob-reserved-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Azure Files](../../storage/files/files-reserve-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Azure VMware Solution](../../azure-vmware/reserved-instance.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Cosmos DB](../../cosmos-db/cosmos-db-reserved-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json)
- [Databricks](prepay-databricks-reserved-capacity.md)
-- [Data Explorer](/azure/data-explorer/pricing-reserved-capacity)
-- [Disk Storage](../../virtual-machines/disks-reserved-capacity.md)
+- [Data Explorer](/azure/data-explorer/pricing-reserved-capacity?toc=/azure/cost-management-billing/reservations/toc.json)
- [Dedicated Host](../../virtual-machines/prepay-dedicated-hosts-reserved-instances.md)
-- [Software plans](../../virtual-machines/linux/prepay-suse-software-charges.md)
-- [Storage](../../storage/blobs/storage-blob-reserved-capacity.md)
-- [SQL Database](../../azure-sql/database/reserved-capacity-overview.md)
-- [Azure Database for PostgreSQL](../../postgresql/concept-reserved-pricing.md)
-- [Azure Database for MySQL](../../mysql/concept-reserved-pricing.md)
-- [Azure Database for MariaDB](../../mariadb/concept-reserved-pricing.md)
-- [Azure Synapse Analytics](prepay-sql-data-warehouse-charges.md)
-- [Azure VMware Solution](../../azure-vmware/reserved-instance.md)
-- [Virtual machines](../../virtual-machines/prepay-reserved-vm-instances.md)
+- [Disk Storage](../../virtual-machines/disks-reserved-capacity.md)
+- [SAP HANA Large Instances](prepay-hana-large-instances-reserved-capacity.md)
+- [Software plans](../../virtual-machines/linux/prepay-suse-software-charges.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [SQL Database](../../azure-sql/database/reserved-capacity-overview.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Synapse Analytics - data warehouse](prepay-sql-data-warehouse-charges.md)
+- [Synapse Analytics - Pre-purchase](synapse-analytics-pre-purchase-plan.md)
+- [Virtual machines](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json)
## Buy reservations with monthly payments
cost-management-billing Prepay Jboss Eap Integrated Support App Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/prepay-jboss-eap-integrated-support-app-service.md
When you purchase a JBoss EAP Integrated Support reservation, the discount is au
## Buy a JBoss EAP Integrated Support reservation
-You can buy a reservation for JBoss EAP Integrated Support in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22VirtualMachines%22%7D). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md)
+You can buy a reservation for JBoss EAP Integrated Support in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22VirtualMachines%22%7D). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md).
- You must be in an Owner role for at least one EA subscription or a subscription with a pay-as-you-go rate.
- For EA subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin for the subscription.
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
Previously updated : 06/10/2020 Last updated : 01/26/2022 # Create a shared self-hosted integration runtime in Azure Data Factory
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delta.md
Previously updated : 03/26/2020 Last updated : 01/26/2022
data-factory How To Send Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-send-email.md
For the **Send Email (V2)** action, customize how you wish to format the email,
:::image type="content" source="media/how-to-send-email/logic-app-email-action.png" alt-text="Shows the Logic App workflow designer for the Send Email (V2) action.":::
-Save the workflow. Make a note of the Workflow URL for your new workflow then:
+Save the workflow. Browse to the Overview page for the workflow. Make a note of the Workflow URL for your new workflow then:
:::image type="content" source="media/how-to-send-email/logic-app-workflow-url.png" alt-text="Shows the Logic App workflow Overview tab with the Workflow URL highlighted.":::
+> [!NOTE]
+> To find the Workflow URL you must browse to the workflow itself, not just the logic app that contains it. From the Workflows page of your logic app instance, choose the workflow and then navigate to its Overview page.
+
## Create a pipeline to trigger your Logic App email workflow

Once you create the Logic App workflow to send email, you can trigger it from a pipeline using a **Web** activity.
data-factory Monitor Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-programmatically.md
description: Learn how to monitor a pipeline in a data factory by using differen
Previously updated : 01/16/2018 Last updated : 01/26/2022
For a complete walk-through of creating and monitoring a pipeline using PowerShe
For complete documentation on PowerShell cmdlets, see [Data Factory PowerShell cmdlet reference](/powershell/module/az.datafactory). ## Next steps
-See [Monitor pipelines using Azure Monitor](monitor-using-azure-monitor.md) article to learn about using Azure Monitor to monitor Data Factory pipelines.
+See [Monitor pipelines using Azure Monitor](monitor-using-azure-monitor.md) article to learn about using Azure Monitor to monitor Data Factory pipelines.
data-factory Naming Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/naming-rules.md
Previously updated : 10/15/2020 Last updated : 01/26/2022 # Azure Data Factory - naming rules
data-factory Quickstart Create Data Factory Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-powershell.md
ms.devlang: powershell Previously updated : 04/10/2020 Last updated : 01/26/2022
data-factory Solution Template Move Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/solution-template-move-files.md
Title: Move files between file-based storage
description: Learn how to use a solution template to move files between file-based storage by using Azure Data Factory. -+ Previously updated : 7/12/2019 Last updated : 01/26/2022 # Move files with Azure Data Factory
The template defines four parameters:
1. Go to the **Move files** template. Select existing connection or create a **New** connection to your source file store where you want to move files from. Be aware that **DataSource_Folder** and **DataSource_File** are reference to the same connection of your source file store.
- :::image type="content" source="media/solution-template-move-files/move-files1.png" alt-text="Create a new connection to the source":::
+
+ :::image type="content" source="media/solution-template-move-files/move-files-1-small.png" alt-text="Create a new connection to the source" lightbox="media/solution-template-move-files/move-files-1.png":::
2. Select existing connection or create a **New** connection to your destination file store where you want to move files to.
- :::image type="content" source="media/solution-template-move-files/move-files2.png" alt-text="Create a new connection to the destination":::
+ :::image type="content" source="media/solution-template-move-files/move-files-2-small.png" alt-text="Create a new connection to the destination" lightbox="media/solution-template-move-files/move-files-2.png":::
3. Select **Use this template** tab.
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for
description: Enable the container protections of Microsoft Defender for Containers zone_pivot_groups: k8s-host Previously updated : 01/02/2022 Last updated : 01/25/2022 # Enable Microsoft Defender for Containers
Learn about this plan in [Overview of Microsoft Defender for Containers](defende
> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] ::: zone-end ++ ::: zone pivot="defender-for-container-aks" [!INCLUDE [Enable plan for AKS](./includes/defender-for-containers-enable-plan-aks.md)] ::: zone-end
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Workload protections for your Kubernetes workloads description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes workload protection security recommendations Previously updated : 12/12/2021 Last updated : 01/26/2022 # Protect your Kubernetes workloads
Defender for Cloud offers more container security features if you enable Microso
Microsoft Defender for Cloud includes a bundle of recommendations that are available when you've installed the **Azure Policy add-on for Kubernetes**.
+## Prerequisites
+
+Validate that the required endpoints are configured for outbound access so that the Azure Policy add-on for Kubernetes can connect to Azure Policy to synchronize Kubernetes policies.
+
+See [Required FQDN/application rules for Azure policy](../aks/limit-egress-traffic.md#azure-policy) for the required FQDN/application rules.
+
### Step 1: Deploy the add-on

To configure the recommendations, install the **Azure Policy add-on for Kubernetes**.
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Microsoft tracks the regulatory standards themselves and automatically improves
By default, every subscription has the **Azure Security Benchmark** assigned. This is the Microsoft-authored, Azure-specific guidelines for security and compliance best practices based on common compliance frameworks. [Learn more about Azure Security Benchmark](/security/benchmark/azure/introduction).
-You can also add standards such as:
+Available regulatory standards:
-- NIST SP 800-53
-- SWIFT CSP CSCF-v2020
-- UK Official and UK NHS
+- PCI-DSS v3.2.1:2018
+- SOC TSP
+- NIST SP 800-53 R4
+- NIST SP 800 171 R2
+- UK OFFICIAL and UK NHS
- Canada Federal PBMM
-- Azure CIS 1.3.0
-- CMMC Level 3
+- Azure CIS 1.1.0
+- HIPAA/HITRUST
+- SWIFT CSP CSCF v2020
+- ISO 27001:2013
- New Zealand ISM Restricted
+- CMMC Level 3
+- Azure CIS 1.3.0
+- NIST SP 800-53 R5
+- FedRAMP H
+- FedRAMP M
-Standards are added to the dashboard as they become available.
-
+> [!TIP]
+> Standards are added to the dashboard as they become available. The preceding list might not contain recently added standards.
## Add a regulatory standard to your dashboard
To add standards to your dashboard:
1. To add the standards relevant to your organization, expand the **Industry & regulatory standards** section and select **Add more standards**.
-1. From the **Add regulatory compliance standards** page, you can search for any of the available standards, including:
-
- - **NIST SP 800-53**
- - **NIST SP 800 171**
- - **SWIFT CSP CSCF v2020**
- - **UKO and UK NHS**
- - **Canada Federal PBMM**
- - **HIPAA HITRUST**
- - **Azure CIS 1.3.0**
- - **CMMC Level 3**
- - **New Zealand ISM Restricted**
+1. From the **Add regulatory compliance standards** page, you can search for any of the available standards:
![Adding regulatory standards to Microsoft Defender for Cloud's regulatory compliance dashboard.](./media/update-regulatory-compliance-packages/dynamic-regulatory-compliance-additional-standards.png)
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
Before you can set up a relationship with Time Series Insights, you'll need to s
## Solution architecture
-You will be attaching Time Series Insights to Azure Digital Twins through the path below.
+You'll be attaching Time Series Insights to Azure Digital Twins through the following path.
:::row::: :::column:::
You will be attaching Time Series Insights to Azure Digital Twins through the pa
:::column-end::: :::row-end:::
-## Create event hub namespace
+## Create Event Hubs namespace
-Before creating the event hubs, you'll first create an event hub namespace that will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal by following [Create an event hub using Azure portal](../event-hubs/event-hubs-create.md). To see what regions support Event Hubs, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-hubs).
+Before creating the event hubs, you'll first create an Event Hubs namespace that will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal by following [Create an event hub using Azure portal](../event-hubs/event-hubs-create.md). To see what regions support Event Hubs, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-hubs).
```azurecli-interactive
az eventhubs namespace create --name <name-for-your-Event-Hubs-namespace> --resource-group <your-resource-group> --location <region>
```
To set up the twins hub, you'll complete the following steps in this section:
Create the **twins hub** with the following CLI command. Specify a name for your twins hub.

```azurecli-interactive
-az eventhubs eventhub create --name <name-for-your-twins-hub> --resource-group <your-resource-group> --namespace-name <your-Event-Hubs-namespace-from-above>
+az eventhubs eventhub create --name <name-for-your-twins-hub> --resource-group <your-resource-group> --namespace-name <your-Event-Hubs-namespace-from-earlier>
```

### Create twins hub authorization rule
az eventhubs eventhub create --name <name-for-your-twins-hub> --resource-group <
Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule#az_eventhubs_eventhub_authorization_rule_create) with send and receive permissions. Specify a name for the rule.

```azurecli-interactive
-az eventhubs eventhub authorization-rule create --rights Listen Send --name <name-for-your-twins-hub-auth-rule> --resource-group <your-resource-group> --namespace-name <your-Event-Hubs-namespace-from-earlier> --eventhub-name <your-twins-hub-from-above>
+az eventhubs eventhub authorization-rule create --rights Listen Send --name <name-for-your-twins-hub-auth-rule> --resource-group <your-resource-group> --namespace-name <your-Event-Hubs-namespace-from-earlier> --eventhub-name <your-twins-hub-from-earlier>
```

### Create twins hub endpoint
az eventhubs eventhub authorization-rule create --rights Listen Send --name <nam
Create an Azure Digital Twins [endpoint](concepts-route-events.md#create-an-endpoint) that links your event hub to your Azure Digital Twins instance. Specify a name for your twins hub endpoint.

```azurecli-interactive
-az dt endpoint create eventhub --dt-name <your-Azure-Digital-Twins-instance-name> --eventhub-resource-group <your-resource-group> --eventhub-namespace <your-Event-Hubs-namespace-from-earlier> --eventhub <your-twins-hub-name-from-above> --eventhub-policy <your-twins-hub-auth-rule-from-earlier> --endpoint-name <name-for-your-twins-hub-endpoint>
+az dt endpoint create eventhub --dt-name <your-Azure-Digital-Twins-instance-name> --eventhub-resource-group <your-resource-group> --eventhub-namespace <your-Event-Hubs-namespace-from-earlier> --eventhub <your-twins-hub-name-from-earlier> --eventhub-policy <your-twins-hub-auth-rule-from-earlier> --endpoint-name <name-for-your-twins-hub-endpoint>
```

### Create twins hub event route
Azure Digital Twins instances can emit [twin update events](./concepts-event-not
Create a [route](concepts-route-events.md#create-an-event-route) in Azure Digital Twins to send twin update events to your endpoint from above. The filter in this route will only allow twin update messages to be passed to your endpoint. Specify a name for the twins hub event route.

```azurecli-interactive
-az dt route create --dt-name <your-Azure-Digital-Twins-instance-name> --endpoint-name <your-twins-hub-endpoint-from-above> --route-name <name-for-your-twins-hub-event-route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
+az dt route create --dt-name <your-Azure-Digital-Twins-instance-name> --endpoint-name <your-twins-hub-endpoint-from-earlier> --route-name <name-for-your-twins-hub-event-route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
```

### Get twins hub connection string
az dt route create --dt-name <your-Azure-Digital-Twins-instance-name> --endpoint
Get the [twins event hub connection string](../event-hubs/event-hubs-get-connection-string.md), using the authorization rules you created above for the twins hub.

```azurecli-interactive
-az eventhubs eventhub authorization-rule keys list --resource-group <your-resource-group> --namespace-name <your-Event-Hubs-namespace-from-earlier> --eventhub-name <your-twins-hub-from-above> --name <your-twins-hub-auth-rule-from-earlier>
+az eventhubs eventhub authorization-rule keys list --resource-group <your-resource-group> --namespace-name <your-Event-Hubs-namespace-from-earlier> --eventhub-name <your-twins-hub-from-earlier> --name <your-twins-hub-auth-rule-from-earlier>
```

Take note of the **primaryConnectionString** value from the result to configure the twins hub app setting later in this article.
Create the **time series hub** using the following command. Specify a name for t
Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule#az_eventhubs_eventhub_authorization_rule_create) with send and receive permissions. Specify a name for the time series hub auth rule.

```azurecli-interactive
-az eventhubs eventhub authorization-rule create --rights Listen Send --name <name-for-your-time-series-hub-auth-rule> --resource-group <your-resource-group> --namespace-name <your-Event-Hub-namespace-from-earlier> --eventhub-name <your-time-series-hub-name-from-above>
+az eventhubs eventhub authorization-rule create --rights Listen Send --name <name-for-your-time-series-hub-auth-rule> --resource-group <your-resource-group> --namespace-name <your-Event-Hub-namespace-from-earlier> --eventhub-name <your-time-series-hub-name-from-earlier>
``` ### Get time series hub connection string
Also, take note of the following values to use them later to create a Time Serie
## Create a function
-In this section, you'll create an Azure function that will convert twin update events from their original form as JSON Patch documents to JSON objects, containing only updated and added values from your twins.
+In this section, you'll create an Azure function that will convert twin update events from their original form as JSON Patch documents to JSON objects that only contain updated and added values from your twins.
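The shape of that conversion can be sketched outside of Azure Functions. This is a minimal illustration of turning a JSON Patch document into a flat object of updated and added values; the function name and sample twin properties are hypothetical, not the sample project's actual code:

```python
import json

def patch_to_update_object(patch_doc):
    """Keep only the values set by 'add' and 'replace' operations,
    flattening JSON Pointer paths such as '/Temperature' into plain keys."""
    result = {}
    for op in patch_doc:
        if op.get("op") in ("add", "replace"):
            key = op["path"].lstrip("/").replace("/", ".")
            result[key] = op["value"]
    return result

patch = [
    {"op": "replace", "path": "/Temperature", "value": 70},
    {"op": "add", "path": "/Humidity", "value": 49},
    {"op": "remove", "path": "/Obsolete"},
]
print(json.dumps(patch_to_update_object(patch)))
# {"Temperature": 70, "Humidity": 49}
```

Note that `remove` operations carry no value, so they drop out of the converted object.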
1. First, create a new function app project in Visual Studio. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
az functionapp config appsettings set --settings "EventHubAppSetting-TSI=<your-t
## Create and connect a Time Series Insights instance
-In this section, you'll set up Time Series Insights instance to receive data from your time series hub. For more information about this process, see [Set up an Azure Time Series Insights Gen2 PAYG environment](../time-series-insights/tutorial-set-up-environment.md). Follow the steps below to create a time series insights environment.
+In this section, you'll set up a Time Series Insights instance to receive data from your time series hub. For more information about this process, see [Set up an Azure Time Series Insights Gen2 PAYG environment](../time-series-insights/tutorial-set-up-environment.md). Follow the steps below to create a Time Series Insights environment.
1. In the [Azure portal](https://portal.azure.com), search for *Time Series Insights environments*, and select the **Create** button. Choose the following options to create the time series environment.
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dedicated-overview.md
Title: Overview of dedicated event hubs - Azure Event Hubs | Microsoft Docs
+ Title: Overview of Azure Event Hubs dedicated tier
description: This article provides an overview of dedicated Azure Event Hubs, which offers single-tenant deployments of event hubs. Previously updated : 09/23/2021 Last updated : 01/26/2022 # Overview of Event Hubs Dedicated
For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas
## How to onboard
-The self-serve experience to [create an Event Hubs cluster](event-hubs-dedicated-cluster-create-portal.md) through the [Azure portal](https://aka.ms/eventhubsclusterquickstart) is now in Preview. If you have any questions or need help with onboarding to Event Hubs Dedicated, contact the [Event Hubs team](mailto:askeventhubs@microsoft.com).
+Event Hubs dedicated tier is generally available (GA). The self-serve experience to create an Event Hubs cluster through the [Azure portal](event-hubs-dedicated-cluster-create-portal.md) is currently in Preview. You can also request that the cluster be created by contacting the [Event Hubs team](mailto:askeventhubs@microsoft.com).
## FAQs
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-about-virtual-network-gateways.md
Each virtual network can have only one virtual network gateway per gateway type.
## <a name="gwsku"></a>Gateway SKUs [!INCLUDE [expressroute-gwsku-include](../../includes/expressroute-gwsku-include.md)]
-If you want to upgrade your gateway to a more powerful gateway SKU, in most cases you can use the 'Resize-AzVirtualNetworkGateway' PowerShell cmdlet. This will work for upgrades to Standard and HighPerformance SKUs. However, to upgrade a non Availability Zone (AZ) gateway to the UltraPerformance SKU, you will need to recreate the gateway. Recreating a gateway incurs downtime. You do not need to delete and recreate the gateway to upgrade an AZ-enabled SKU.
+If you want to upgrade your gateway to a more powerful gateway SKU, you can use the 'Resize-AzVirtualNetworkGateway' PowerShell cmdlet or perform the upgrade directly in the ExpressRoute virtual network gateway configuration blade in the Azure portal. The following upgrades are supported:
+
+- Standard to High Performance
+- Standard to Ultra Performance
+- High Performance to Ultra Performance
+- ErGw1Az to ErGw2Az
+- ErGw1Az to ErGw3Az
+- ErGw2Az to ErGw3Az
+- Default to Standard
+
+Additionally, you can downgrade the virtual network gateway SKU. The following downgrades are supported:
+- High Performance to Standard
+- ErGw2Az to ErGw1Az
+
+For all other downgrade scenarios, you will need to delete and recreate the gateway. Recreating a gateway incurs downtime.
+ ### <a name="gatewayfeaturesupport"></a>Feature support by gateway SKU The following table shows the features supported across each gateway type.
expressroute How To Configure Custom Bgp Communities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/how-to-configure-custom-bgp-communities.md
BGP communities are groupings of IP prefixes tagged with a community value. This
Name = 'myVirtualNetwork' ResourceGroupName = 'myERRG' }
- Get-AzVirtualNewtork @virtualnetwork
+ Get-AzVirtualNetwork @virtualnetwork
``` ## Applying or updating the custom BGP value for an existing virtual network
BGP communities are groupings of IP prefixes tagged with a community value. This
## Next steps - [Verify ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md).-- [Troubleshoot your network performance](expressroute-troubleshooting-network-performance.md)
+- [Troubleshoot your network performance](expressroute-troubleshooting-network-performance.md)
firewall-manager Fqdn Filtering Network Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/fqdn-filtering-network-rules.md
Title: Azure Firewall Manager filtering in network rules (preview)
+ Title: Azure Firewall Manager filtering in network rules
description: How to use FQDN filtering in network rules Previously updated : 07/30/2020 Last updated : 01/26/2022
-# FQDN filtering in network rules (preview)
+# FQDN filtering in network rules
-> [!IMPORTANT]
-> FQDN filtering in network rules is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-A fully qualified domain name (FQDN) represents a domain name of a host or IP address(es). You can use FQDNs in network rules based on DNS resolution in Azure Firewall and Firewall policy. This capability allows you to filter outbound traffic with any TCP/UDP protocol (including NTP, SSH, RDP, and more). You must enable DNS Proxy to use FQDNs in your network rules. For more information see [Azure Firewall policy DNS settings (preview)](dns-settings.md).
+A fully qualified domain name (FQDN) represents a domain name of a host or IP address(es). You can use FQDNs in network rules based on DNS resolution in Azure Firewall and Firewall policy. This capability allows you to filter outbound traffic with any TCP/UDP protocol (including NTP, SSH, RDP, and more). You must enable DNS Proxy to use FQDNs in your network rules. For more information, see [Azure Firewall policy DNS settings](dns-settings.md).
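Conceptually, the firewall allows outbound traffic when the destination IP address matches an address that an allowed FQDN resolves to. The following sketch illustrates that matching logic only; the resolver table and rule set are illustrative stand-ins, not Azure Firewall internals:

```python
# Stand-in for DNS resolution results (what DNS Proxy would return).
RESOLVER = {
    "time.windows.com": {"40.119.6.228"},
    "ssh.contoso.com": {"203.0.113.10", "203.0.113.11"},
}

# FQDNs permitted by a hypothetical network rule.
ALLOWED_FQDNS = {"time.windows.com"}

def is_outbound_allowed(dest_ip):
    """Allow the packet only if its destination IP resolves from an allowed FQDN."""
    for fqdn in ALLOWED_FQDNS:
        if dest_ip in RESOLVER.get(fqdn, set()):
            return True
    return False

print(is_outbound_allowed("40.119.6.228"))   # True: resolves from an allowed FQDN
print(is_outbound_allowed("203.0.113.10"))   # False: resolves, but FQDN not allowed
```

This also shows why the firewall and clients must share the same DNS resolution (hence the DNS Proxy requirement): if they resolved an FQDN to different addresses, the match would fail.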
## How it works
Once you define which DNS server your organization needs (Azure DNS or your own
What's the difference between using domain names in application rules compared to that of network rules? -- FQDN filtering in application rules for HTTP/S and MSSQL is based on an application level transparent proxy and the SNI header. As such, it can discern between two FQDNs that are resolved to the same IP address. This is not the case with FQDN filtering in network rules. Always use application rules when possible.
+- FQDN filtering in application rules for HTTP/S and MSSQL is based on an application level transparent proxy and the SNI header. As such, it can discern between two FQDNs that are resolved to the same IP address. This isn't the case with FQDN filtering in network rules. Always use application rules when possible.
- In application rules, you can use HTTP/S and MSSQL as your selected protocols. In network rules, you can use any TCP/UDP protocol with your destination FQDNs. ## Next steps
frontdoor Rules Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/rules-match-conditions.md
Previously updated : 11/09/2021 Last updated : 01/16/2022
+zone_pivot_groups: front-door-tiers
# Azure Front Door rules match conditions
-In Azure Front Door [Rules Engine](front-door-rules-engine.md) and Azure Front Door Standard/Premium [Rule Set](standard-premium/concept-rule-set.md), a rule consists of none or some match conditions and an action. This article provides detailed descriptions of match conditions you can use in Azure Front Door Rule Set or Rules Engine.
+
+In Azure Front Door Standard/Premium [rule sets](standard-premium/concept-rule-set.md), a rule consists of none or some match conditions and an action. This article provides detailed descriptions of match conditions you can use in Azure Front Door rule sets.
+++
+In Azure Front Door [rules engines](front-door-rules-engine.md), a rule consists of none or some match conditions and an action. This article provides detailed descriptions of match conditions you can use in Azure Front Door rules engines.
+ The first part of a rule is a match condition or set of match conditions. A rule can consist of up to 10 match conditions. A match condition identifies specific types of requests for which defined actions are done. If you use multiple match conditions, the match conditions are grouped together by using AND logic. For all match conditions that support multiple values, OR logic is used.
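The AND/OR semantics above can be sketched in a few lines. The field names and request shape here are illustrative, not the service's evaluation engine:

```python
def rule_matches(request, conditions):
    """A rule matches when ALL conditions match (AND logic);
    a single condition matches when ANY of its values matches (OR logic)."""
    return all(
        any(request.get(cond["field"]) == value for value in cond["values"])
        for cond in conditions
    )

conditions = [
    {"field": "protocol", "values": ["HTTPS"]},
    {"field": "device", "values": ["Mobile", "Desktop"]},
]
print(rule_matches({"protocol": "HTTPS", "device": "Mobile"}, conditions))  # True
print(rule_matches({"protocol": "HTTP", "device": "Mobile"}, conditions))   # False
```

In other words, every condition must be satisfied, but within one condition any listed value is enough.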
You can use a match condition to:
* Filter requests from request file name and file extension. * Filter requests from request URL, protocol, path, query string, post args, etc. + > [!IMPORTANT] > Azure Front Door Standard/Premium (Preview) is currently in public preview. > This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). + ## Device type
-Use the **device type** match condition to identify requests that have been made from a mobile device or desktop device.
+Use the **device type** match condition to identify requests that have been made from a mobile device or desktop device.
### Properties
In this example, we match all requests that have been detected as coming from a
+## HTTP version
+
+Use the **HTTP version** match condition to identify requests that have been made by using a specific version of the HTTP protocol.
+
+> [!NOTE]
+> The **HTTP version** match condition is only available on Azure Front Door Standard/Premium.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Operator | <ul><li>In the Azure portal: `Equal`, `Not Equal`</li><li>In ARM templates: `Equal`; use the `negateCondition` property to specify _Not Equal_</li></ul> |
+| Value | `2.0`, `1.1`, `1.0`, `0.9` |
+
+### Example
+
+In this example, we match all requests that have been sent by using the HTTP 2.0 protocol.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "HttpVersion",
+ "parameters": {
+ "operator": "Equal",
+ "negateCondition": false,
+ "matchValues": [
+ "2.0"
+ ],
+ "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleHttpVersionConditionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'HttpVersion'
+ parameters: {
+ operator: 'Equal'
+ negateCondition: false
+ matchValues: [
+ '2.0'
+ ]
+ '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleHttpVersionConditionParameters'
+ }
+}
+```
+++
+## Request cookies
+
+Use the **request cookies** match condition to identify requests that include a specific cookie.
+
+> [!NOTE]
+> The **request cookies** match condition is only available on Azure Front Door Standard/Premium.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Cookie name | A string value representing the name of the cookie. |
+| Operator | Any operator from the [standard operator list](#operator-list). |
+| Value | One or more string or integer values representing the value of the request header to match. If multiple values are specified, they're evaluated using OR logic. |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
+
+### Example
+
+In this example, we match all requests that include a cookie named `deploymentStampId` with a value of `1`.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "Cookies",
+ "parameters": {
+ "selector": "deploymentStampId",
+ "operator": "Equal",
+ "negateCondition": false,
+ "matchValues": [
+ "1"
+ ],
+ "transforms": [],
+ "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleCookiesConditionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'Cookies'
+ parameters: {
+ selector: 'deploymentStampId'
+ operator: 'Equal'
+ negateCondition: false
+ matchValues: [
+ '1'
+ ]
+ '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleCookiesConditionParameters'
+ }
+}
+```
+++ ## Post args Use the **post args** match condition to identify requests based on the arguments provided within a POST request's body. A single match condition matches a single argument from the POST request's body. You can specify multiple values to match, which will be combined using OR logic.
Use the **post args** match condition to identify requests based on the argument
| Post args | A string value representing the name of the POST argument. | | Operator | Any operator from the [standard operator list](#operator-list). | | Value | One or more string or integer values representing the value of the POST argument to match. If multiple values are specified, they're evaluated using OR logic. |
-| Case transform | `Lowercase`, `Uppercase` |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
### Example
Use the **query string** match condition to identify requests that contain a spe
|-|-| | Operator | Any operator from the [standard operator list](#operator-list). | | Query string | One or more string or integer values representing the value of the query string to match. Don't include the `?` at the start of the query string. If multiple values are specified, they're evaluated using OR logic. |
-| Case transform | `Lowercase`, `Uppercase` |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
### Example
The **request body** match condition identifies requests based on specific text
|-|-| | Operator | Any operator from the [standard operator list](#operator-list). | | Value | One or more string or integer values representing the value of the request body text to match. If multiple values are specified, they're evaluated using OR logic. |
-| Case transform | `Lowercase`, `Uppercase` |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
### Example
The **request file name** match condition identifies requests that include the s
|-|-| | Operator | Any operator from the [standard operator list](#operator-list). | | Value | One or more string or integer values representing the value of the request file name to match. If multiple values are specified, they're evaluated using OR logic. |
-| Case transform | `Lowercase`, `Uppercase` |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
### Example
The **request file extension** match condition identifies requests that include
|-|-| | Operator | Any operator from the [standard operator list](#operator-list). | | Value | One or more string or integer values representing the value of the request file extension to match. Don't include a leading period. If multiple values are specified, they're evaluated using OR logic. |
-| Case transform | `Lowercase`, `Uppercase` |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
### Example
The **request header** match condition identifies requests that include a specif
| Header name | A string value representing the name of the header. | | Operator | Any operator from the [standard operator list](#operator-list). | | Value | One or more string or integer values representing the value of the request header to match. If multiple values are specified, they're evaluated using OR logic. |
-| Case transform | `Lowercase`, `Uppercase` |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
### Example
The **request path** match condition identifies requests that include the specif
|-|-| | Operator | Any operator from the [standard operator list](#operator-list). | | Value | One or more string or integer values representing the value of the request path to match. Don't include the leading slash. If multiple values are specified, they're evaluated using OR logic. |
-| Case transform | `Lowercase`, `Uppercase` |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
### Example
Identifies requests that match the specified URL. The entire URL is evaluated, i
|-|-| | Operator | Any operator from the [standard operator list](#operator-list). | | Value | One or more string or integer values representing the value of the request URL to match. If multiple values are specified, they're evaluated using OR logic. |
-| Case transform | `Lowercase`, `Uppercase` |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
### Example
Regular expressions don't support the following operations:
* Callouts and embedded code. * Atomic grouping and possessive quantifiers.
+## String transform list
+
+For rules that can transform strings, the following transforms are valid:
+
+| Transform | Description | ARM template support |
+|-|-|-|
+| To lowercase | Converts the string to the lowercase representation. | `Lowercase` |
+| To uppercase | Converts the string to the uppercase representation. | `Uppercase` |
+| Trim | Trims leading and trailing whitespace from the string. | `Trim` |
+| Remove nulls | Removes null values from the string. | `RemoveNulls` |
+| URL encode | URL-encodes the string. | `UrlEncode` |
+| URL decode | URL-decodes the string. | `UrlDecode` |
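The transforms in the table behave like ordinary string operations, applied in the order listed on the condition. This sketch uses standard-library stand-ins to illustrate each one; it isn't Front Door's implementation:

```python
from urllib.parse import quote, unquote

# Illustrative equivalents of the ARM template transform names.
TRANSFORMS = {
    "Lowercase": str.lower,
    "Uppercase": str.upper,
    "Trim": str.strip,                              # leading/trailing whitespace
    "RemoveNulls": lambda s: s.replace("\x00", ""),  # strip null characters
    "UrlEncode": lambda s: quote(s, safe=""),
    "UrlDecode": unquote,
}

def apply_transforms(value, names):
    for name in names:
        value = TRANSFORMS[name](value)
    return value

print(apply_transforms("  Hello World  ", ["Trim", "Lowercase"]))  # "hello world"
print(apply_transforms("a b", ["UrlEncode"]))                      # "a%20b"
```

Chaining `Trim` before `Lowercase`, as above, is a common combination for case-insensitive header or query-string matching.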
+ ## Next steps
-Azure Front Door:
* Learn more about Azure Front Door [Rules Engine](front-door-rules-engine.md) * Learn how to [configure your first Rules Engine](front-door-tutorial-rules-engine.md). * Learn more about [Rules Engine actions](front-door-rules-engine-actions.md)
-Azure Front Door Standard/Premium:
+ * Learn more about Azure Front Door Standard/Premium [Rule Set](standard-premium/concept-rule-set.md). * Learn how to [configure your first Rule Set](standard-premium/how-to-configure-rule-set.md). * Learn more about [Rule Set actions](standard-premium/concept-rule-set-actions.md).+
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/definition-structure.md
_common_ properties used by Azure Policy and in built-ins. Each `metadata` prope
> `version` property or in another property as a **boolean**. For more information about the way > Azure Policy versions built-ins, see > [Built-in versioning](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md).
+> To learn more about what it means for a policy to be _deprecated_ or in _preview_, see [Preview and deprecated policies](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md#preview-and-deprecated-policies).
## Parameters
hpc-cache Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/configuration.md
Consider using a test cache to check and refine your DNS setup before you use it
### Refresh storage target DNS
-If your DNS server updates IP addresses, the associated NFS storage targets will become temporarily unavailable. Read how to update your custom DNS system IP addresses in [View and manage storage targets](manage-storage-targets.md#update-ip-address-specific-configurations-only).
+If your DNS server updates IP addresses, the associated NFS storage targets will become temporarily unavailable. Read how to update your custom DNS system IP addresses in [View and manage storage targets](manage-storage-targets.md#update-ip-address).
## View snapshots for blob storage targets
hpc-cache Hpc Cache Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-create.md
description: How to create an Azure HPC Cache instance
Previously updated : 01/19/2022 Last updated : 01/26/2022
In **Project Details**, select the subscription and resource group that will hos
In **Service Details**, set the cache name and these other attributes: * Location - Select one of the [supported regions](hpc-cache-overview.md#region-availability).+
+ If that region supports [availability zones](../availability-zones/az-overview.md), select the zone that will host your cache resources. Azure HPC Cache is a zonal service.
+ * Virtual network - You can select an existing one or create a new virtual network. * Subnet - Choose or create a subnet with at least 64 IP addresses (/24). This subnet must be used only for this Azure HPC Cache instance.
hpc-cache Manage Storage Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/manage-storage-targets.md
description: How to suspend, remove, force delete, and flush Azure HPC Cache sto
Previously updated : 01/06/2022 Last updated : 01/26/2022
These options are available:
* **Flush** - Write all cached changes to the back-end storage * **Suspend** - Temporarily stop the storage target from serving requests
-* **Resume** - Put a suspended storage target back into service
+* **Refresh DNS** - Update the storage target IP address from a custom DNS server or from an Azure Storage private endpoint
+* **Invalidate** - Discard cached files from this storage target (**Invalidate can cause data loss**)
* **Force remove** - Delete a storage target, skipping some safety steps (**Force remove can cause data loss**)
+* **Resume** - Put a suspended storage target back into service
* **Delete** - Permanently remove a storage target
-Some storage targets also have a **Refresh DNS** option on this menu, which updates the storage target IP address from a custom DNS server or from an Azure Storage private endpoint.
- Read the rest of this article for more detail about these options. ### Write cached files to the storage target
-The **Flush** option tells the cache to immediately copy any changed files stored in the cache to the back-end storage system. For example, if your client machines are updating a particular file repeatedly, it's held in the cache for quicker access and not written to the long-term storage system for a period ranging from several minutes to more than an hour.
+The **Flush** option tells the cache to immediately copy any changed files stored in the cache to the back-end storage system. For example, if your client machines are updating a particular file repeatedly, it's held in the cache for quicker access. File changes aren't written to the long-term storage system for a period ranging from several minutes to more than an hour.
The **Flush** action tells the cache to write all files to the storage system.
The suspend feature disables client access to a storage target, but doesn't perm
Use **Resume** to un-suspend a storage target.
+### Update IP address
+
+In some situations, you might need to update your storage target's IP address. This can happen in two scenarios:
+
+* Your cache uses a custom DNS system instead of the default setup, and the network infrastructure has changed.
+
+* Your storage target uses a private endpoint to access Azure Blob or NFS-mounted blob storage, and you have updated the endpoint's configuration. (You should suspend storage targets before modifying their private endpoints, as described in the [prerequisites article](hpc-cache-prerequisites.md#work-with-private-endpoints).)
+
+With a custom DNS system, it's possible for your NFS storage target's IP address to change because of back-end DNS changes. If your DNS server changes the back-end storage system's IP address, Azure HPC Cache can lose access to the storage system. Ideally, you should work with the manager of your cache's custom DNS system to plan for any updates, because these changes make storage unavailable.
+
+If you use a private endpoint for secure storage access, the endpoint's IP addresses can change if you modify its configuration. If you need to change your private endpoint configuration, you should suspend the storage target (or targets) that use the endpoint, then refresh their IP addresses when you re-activate them. Read [Work with private endpoints](hpc-cache-prerequisites.md#work-with-private-endpoints) for additional information.
+
+To update a storage target's IP address, use the **Refresh DNS** option. The cache queries the custom DNS server or private endpoint for a new IP address.
+
+If successful, the update should take less than two minutes. You can only refresh one storage target at a time; wait for the previous operation to complete before trying another.
+
+> [!NOTE]
+> The "Refresh DNS" option is disabled for NFS storage targets that use IP addresses instead of a DNS hostname.
+
+### Invalidate cache contents for a storage target
+
+The **Invalidate** option tells the HPC Cache to mark all cached files from this storage target as out of date. The next time a client requests these files, they will be fetched from the back-end storage system.
+
+You could use this option if you update files on the back-end storage system directly and want to make those changes immediately available to the clients connected to the HPC Cache.
+
+> [!NOTE]
+> If you use ***write caching*** for this storage target, invalidating its cache can possibly cause data loss. If a client has written a change to the cache, but it has not yet been copied to the back-end storage system, that change will be discarded.
+
+The amount of time between when a client write is saved to the cache and the time that file is written to the long-term storage system is variable. There's no way for HPC Cache to determine whether or not one particular file has been written back to its storage system before invalidating the cache.
+
+If you need to make sure all cached changes are saved to the back-end storage system, use a **Flush** command.
+
+Learn more about write caching and file write-back delay in [Understand cache usage models](cache-usage-models.md).
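The flush-versus-invalidate distinction comes down to how a write-back cache treats dirty entries. This minimal sketch (a generic write-back cache, not HPC Cache's implementation) shows why invalidating a write-caching target can lose data while flushing cannot:

```python
class WriteBackCache:
    """Toy write-back cache: writes land in the cache first and are only
    copied to long-term storage on flush()."""

    def __init__(self, backend):
        self.backend = backend   # long-term storage (a dict here)
        self.cached = {}         # cached file contents
        self.dirty = set()       # files changed but not yet written back

    def write(self, name, data):
        self.cached[name] = data
        self.dirty.add(name)

    def flush(self):
        # Persist every dirty entry before clearing the dirty set.
        for name in self.dirty:
            self.backend[name] = self.cached[name]
        self.dirty.clear()

    def invalidate(self):
        # Drop everything, including dirty entries: those changes are lost.
        self.cached.clear()
        self.dirty.clear()

backend = {}
cache = WriteBackCache(backend)
cache.write("report.txt", "v2")
cache.invalidate()
print(backend.get("report.txt"))  # None -- the v2 change was discarded
```

Calling `flush()` before `invalidate()` avoids the loss, which mirrors the guidance above.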
+ ### Force remove a storage target > [!NOTE]
$ az hpc-cache storage-target remove --resource-group cache-rg --cache-name doc-
-### Update IP address (specific configurations only)
-
-In some situations, you might need to update your storage target's IP address. This can happen in two scenarios:
-
-* Your cache uses a custom DNS system instead of the default setup, and the network infrastructure has changed.
-
-* Your storage target uses a private endpoint to access Azure Blob or NFS-mounted blob storage, and you have updated the endpoint's configuration. (You should suspend storage targets before modifying their private endpoints, as described in the [prerequisites article](hpc-cache-prerequisites.md#work-with-private-endpoints).)
-
-With a custom DNS system, it's possible for your NFS storage target's IP address to change because of back-end DNS changes. If your DNS server changes the back-end storage system's IP address, Azure HPC Cache can lose access to the storage system. Ideally, you should work with the manager of your cache's custom DNS system to plan for any updates, because these changes make storage unavailable.
-
-If you use a private endpoint for secure storage access, the endpoint's IP addresses can change if you modify its configuration. If you need to change your private endpoint configuration, you should suspend the storage target (or targets) that use the endpoint, then refresh their IP addresses when you re-activate them. Read [Work with private endpoints](hpc-cache-prerequisites.md#work-with-private-endpoints) for additional information.
-
-If you need to update a storage target's IP address, use the **Storage targets** page. Click the **...** symbol in the right column to open the context menu. Choose **Refresh DNS** to query the custom DNS server or private endpoint for a new IP address.
-
-![Screenshot of storage target list. For one storage target, the "..." menu in the far right column is open and these options appear: Flush, Suspend, Refresh DNS, Force remove, Resume (this option is disabled), and Delete.](media/refresh-dns.png)
-
-If successful, the update should take less than two minutes. You can only refresh one storage target at a time; wait for the previous operation to complete before trying another.
- ## Understand storage target state The storage target list shows two types of status: **State** and **Provisioning state**. * **State** indicates the operational state of the storage target. This value updates regularly and helps you understand whether the storage target is available for client requests, and which of the management options are available.
-* **Provisioning state** tells you whether the last action to add or edit the storage target was successful. This value is only updated if you edit the storage target.
+* **Provisioning state** tells you whether the last action to add or edit the storage target was successful. This value is only updated when you edit the storage target.
The **State** value affects which management options you can use. Here's a short explanation of the values and their effects.
iot-hub Iot Hub Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-distributed-tracing.md
Previously updated : 02/06/2019 Last updated : 01/26/2022
In this section, you configure an IoT Hub to log distributed tracing attributes
Once the logging is turned on, IoT Hub records a log when a message containing valid trace properties is encountered in any of the following situations: -- The messages arrives at IoT Hub's gateway.
+- The message arrives at IoT Hub's gateway.
- The message is processed by the IoT Hub. - The message is routed to custom endpoints. Routing must be enabled.
To control the percentage of messages containing this property, implement logic
## Update sampling options
-To change the percentage of messages to be traced from the cloud, you must update the device twin. You can accomplish this multiple ways including the JSON editor in portal and the IoT Hub service SDK. The following subsections provide examples.
+To change the percentage of messages to be traced from the cloud, you must update the device twin. You can accomplish this in multiple ways, including the JSON editor in the portal and the IoT Hub service SDK. The following subsections provide examples.
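On the device side, honoring a sampling percentage can be as simple as a per-message random draw against the configured rate. The function name below is hypothetical; it only illustrates the sampling decision, not the IoT SDK's API:

```python
import random

def should_trace(sampling_rate, rng=random):
    """Decide per message whether to attach the trace property.
    sampling_rate is a percentage (0-100), mirroring the device twin setting."""
    return rng.random() * 100 < sampling_rate

rng = random.Random(42)  # seeded so the demo is repeatable
traced = sum(should_trace(25, rng) for _ in range(10_000))
print(0.2 < traced / 10_000 < 0.3)  # roughly 25% of messages carry the property
```

A rate of `0` traces nothing and `100` traces every message; intermediate values trace approximately that fraction over time.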
### Update using the portal
Example logs as shown by Log Analytics:
To understand the different types of logs, see [Azure IoT Hub distributed tracing logs](monitor-iot-hub-reference.md#distributed-tracing-preview).
-### Application Map
-
-To visualize the flow of IoT messages, set up the Application Map sample app. The sample app sends the distributed tracing logs to [Application Map](../azure-monitor/app/app-map.md) using an Azure Function and an Event Hub.
-
-> [!div class="button"]
-> <a href="https://github.com/Azure-Samples/e2e-diagnostic-provision-cli" target="_blank">Get the sample on GitHub</a>
-
-This image below shows distributed tracing in App Map with three routing endpoints:
-
-![IoT distributed tracing in App Map](./media/iot-hub-distributed-tracing/app-map.png)
- ## Understand Azure IoT distributed tracing ### Context
Once enabled, distributed tracing support for IoT Hub will follow this flow:
- To learn more about the general distributed tracing pattern in microservices, see [Microservice architecture pattern: distributed tracing](https://microservices.io/patterns/observability/distributed-tracing.html). - To set up configuration to apply distributed tracing settings to a large number of devices, see [Configure and monitor IoT devices at scale](./iot-hub-automatic-device-management.md). - To learn more about Azure Monitor, see [What is Azure Monitor?](../azure-monitor/overview.md).-- To learn more about using Azure Monitor with IoT HUb, see [Monitor IoT Hub](monitor-iot-hub.md)
+- To learn more about using Azure Monitor with IoT Hub, see [Monitor IoT Hub](monitor-iot-hub.md)
lab-services Class Type Ethical Hacking Virtualbox https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/class-type-ethical-hacking-virtualbox.md
- Title: Set up an ethical hacking lab on VirtualBox with Azure Lab Services | Microsoft Docs
-description: Learn how to set up a lab using Azure Lab Services to teach ethical hacking with VirtualBox.
- Previously updated : 06/11/2021--
-# Set up a lab to teach ethical hacking class with VirtualBox
-
-This article shows you how to set up a class that focuses on the forensics side of ethical hacking. Penetration testing, a practice used by the ethical hacking community, occurs when someone attempts to gain access to a system or network to demonstrate vulnerabilities that a malicious attacker may exploit.
-
-In an ethical hacking class, students can learn modern techniques for defending against vulnerabilities. Each student gets a host virtual machine that has three nested virtual machines – two virtual machines with the [Seed](https://seedsecuritylabs.org/lab_env.html) image and another machine with the [Kali Linux](https://www.kali.org/) image. The Seed virtual machines are used for exploitation exercises, and the Kali virtual machine provides access to the tools needed to execute forensic tasks.
-
-This article has two main sections. The first section covers how to create the classroom lab. The second section covers how to create the template machine with nested virtualization enabled and with the tools and images needed. In this case, two Seed images and a Kali Linux image on a machine that has [VirtualBox](https://www.virtualbox.org/) enabled to host the images.
-
-## Lab configuration
-
-To set up this lab, you need an Azure subscription to get started. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. Once you get an Azure subscription, you can either create a new lab account in Azure Lab Services or use an existing account. See the following tutorial for creating a new lab account: [Tutorial to setup a lab account](tutorial-setup-lab-account.md).
-
-Follow [this tutorial](tutorial-setup-classroom-lab.md) to create a new lab and then apply the following settings:
-
-| Virtual machine size | Image |
-| -- | -- |
-| Medium (Nested Virtualization) | Windows Server 2019 Datacenter |
-| Medium (Nested Virtualization) | Windows 10 |
-
-## Template machine
-
-After the template machine is created, start the machine and connect to it to complete the following three major tasks.
-
-1. Set up the machine to use [VirtualBox](https://www.virtualbox.org/) for nested virtual machines.
-2. Set up the [Kali](https://www.kali.org/) Linux image. Kali is a Linux distribution that includes tools for penetration testing and security auditing.
-3. Set up the Seed image. For this example, the [Seed](https://seedsecuritylabs.org/lab_env.html) image will be used. This image is created specifically for security training.
-
-The rest of this article covers the manual steps to complete the tasks above.
-
-### Installing VirtualBox
-
-1. Download the [VirtualBox platform packages](https://www.virtualbox.org/wiki/Downloads) by selecting the Windows hosts option.
-2. Run the installation executable, and use the default options to complete the installation.
-
-### Set up a nested virtual machine with Kali Linux Image
-
-Kali is a Linux distribution that includes tools for penetration testing and security auditing.
-
-1. Download the .ova image from [Kali Linux VM VirtualBox images](https://www.kali.org/get-kali/#kali-virtual-machines). We recommend the 32-bit version; the 64-bit version loads with errors. Remember the default username and password noted on the download page.
-2. Open VirtualBox Manager and [import the .ova image](https://docs.oracle.com/cd/E26217_01/E26796/html/qs-import-vm.html). You'll need to review and accept the Kali licensing agreement to continue.
-
->[!Note]
>- The VirtualBox default RAM for the Kali VM is 2 GB (2048 MB). We recommend increasing the RAM to at least 4 GB (4096 MB), or more depending on your needs. Students can change this value on their VMs. Changing the RAM size within VirtualBox does not change the lab VM's size.
>-By default, the hard disk is set to an 80 GB limit but is dynamically allocated. Lab Services machines are limited to 128 GB of hard drive space, so be careful not to exceed this disk size.
>-The Kali image has USB 2.0 enabled, which requires the [Oracle VM VirtualBox Extension Pack](https://www.virtualbox.org/wiki/Downloads); alternatively, set the USB controller to 1.0 under the USB tab.
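The adjustments described in the note can also be made from the command line with VBoxManage, which ships with VirtualBox. A hedged sketch; the VM name `kali-linux` and file name are placeholders, so check `VBoxManage list vms` after import:

```shell
# Import the downloaded Kali appliance and adjust it per the note above.
# "kali-linux" is a placeholder VM name; run `VBoxManage list vms` to find
# the real name after import.
VBoxManage import kali-linux.ova

# Raise RAM from the 2048 MB default to 4096 MB.
VBoxManage modifyvm "kali-linux" --memory 4096

# If the Oracle VM VirtualBox Extension Pack is not installed, fall back to
# the USB 1.1 controller by disabling USB 2.0 (EHCI).
VBoxManage modifyvm "kali-linux" --usbehci off

VBoxManage startvm "kali-linux"
```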
-
-### Set up Seed lab images
-
-1. Download and extract the [SEED Labs VM image](https://seedsecuritylabs.org/labsetup.html).
-2. Follow the directions to [create a VM in VirtualBox](https://github.com/seed-labs/seed-labs/blob/master/manuals/vm/seedvm-manual.md).
- If you need multiple SEED VMs, make copies of the .iso for each machine; using the same .iso for different machines will not work properly.
-
->[!IMPORTANT]
->Make sure that all the nested virtual machines are powered off before publishing the template. Leaving them powered on has had unexpected side effects, including damaging the virtual machines.
-
-## Cost
-
-If you would like to estimate the cost of this lab, you can use the following example:
-
-For a class of 25 students with 20 hours of scheduled class time and 10 hours of quota for homework or assignments, the price for the lab would be:
-
-25 students \* (20 + 10) hours \* 55 Lab Units \* 0.01 USD per hour = 412.50 USD
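The estimate above can be reproduced as simple arithmetic (pricing figures are for example purposes only, per the note that follows):

```python
# Reproduce the lab cost estimate above. The 55 lab units and
# 0.01 USD per unit-hour rate come from the example, not current pricing.
students = 25
scheduled_hours = 20
quota_hours = 10
lab_units = 55
usd_per_unit_hour = 0.01

cost = students * (scheduled_hours + quota_hours) * lab_units * usd_per_unit_hour
print(f"{cost:.2f} USD")  # 412.50 USD
```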
-
->[!IMPORTANT]
->Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
-
-## Conclusion
-
-This article walked you through the steps to create a lab for an ethical hacking class. It includes steps to set up nested virtualization for creating two virtual machines inside the host virtual machine for penetration testing.
-
-## Next steps
-
-Next steps are common to setting up any lab:
--- [Add users](tutorial-setup-classroom-lab.md#add-users-to-the-lab)-- [Set quota](how-to-configure-student-usage.md#set-quotas-for-users)-- [Set a schedule](tutorial-setup-classroom-lab.md#set-a-schedule-for-the-lab)-- [Email registration links to students](how-to-configure-student-usage.md#send-invitations-to-users).
lab-services Class Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/class-types.md
In an ethical hacking class, students can learn modern techniques for defending
For detailed information on how to set up this type of lab, see [Set up a lab to teach ethical hacking class](class-type-ethical-hacking.md).
-## Ethical hacking with VirtualBox
-
-You can set up a lab for a class that focuses on the forensics side of ethical hacking. Penetration testing, a practice used by the ethical hacking community, occurs when someone attempts to gain access to a system or network to demonstrate vulnerabilities that a malicious attacker may exploit.
-
-In an ethical hacking class, students can learn modern techniques for defending against vulnerabilities. Each student gets a Windows Server host virtual machine that has two nested virtual machines – one virtual machine with the [SEED Labs](https://seedsecuritylabs.org/) image and another machine with the [Kali Linux](https://www.kali.org/) image. The SEED virtual machine is used for exploitation exercises. The Kali Linux virtual machine provides access to the tools needed to execute forensic tasks.
-
-For detailed information on how to set up this type of lab, see [Set up a lab to teach ethical hacking class](class-type-ethical-hacking-virtualbox.md).
- ## MATLAB [MATLAB](https://www.mathworks.com/products/matlab.html), which stands for Matrix laboratory, is a programming platform from [MathWorks](https://www.mathworks.com/). It combines computational power and visualization, making it a popular tool in the fields of math, engineering, physics, and chemistry.
See the following articles:
- [Set up a lab focused on deep learning in natural language processing using Azure Lab Services](class-type-deep-learning-natural-language-processing.md) - [Set up a lab to teach a networking class](class-type-networking-gns3.md) - [Set up a lab to teach ethical hacking class with Hyper-V](class-type-ethical-hacking.md)-- [Set up a lab to teach ethical hacking class with VirtualBox](class-type-ethical-hacking-virtualbox.md)
machine-learning Concept Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-differential-privacy.md
Title: Differential privacy in machine learning (preview) description: Learn what differential privacy is and how differentially private systems preserve data privacy. --++ Last updated 10/21/2021
machine-learning Concept Fairness Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-fairness-ml.md
--++ Last updated 10/21/2021 #Customer intent: As a data scientist, I want to learn about machine learning fairness and how to assess and mitigate unfairness in machine learning models.
# Machine learning fairness (preview)
-Learn about machine learning fairness and how the [Fairlearn](https://fairlearn.github.io/) open-source Python package can help you assess and mitigate unfairness issues in machine learning models.
+Learn about machine learning fairness and how the [Fairlearn](https://fairlearn.github.io/) open-source Python package can help you assess and mitigate unfairness issues in machine learning models.
## What is machine learning fairness?
machine-learning Concept Open Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-open-source.md
To learn how to use Ray RLLib with Azure Machine Learning, see the [how to train
Training a single or multiple models requires the visualization and inspection of desired metrics to make sure the model performs as expected. You can [use TensorBoard in Azure Machine Learning to track and visualize experiment metrics](./how-to-monitor-tensorboard.md)
-## Responsible ML: Privacy and fairness
+## Responsible AI: Privacy and fairness
### Preserve data privacy with differential privacy
machine-learning Concept Responsible Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-responsible-ml.md
Title: What is responsible machine learning (preview)
+ Title: What is responsible AI (preview)
-description: Learn what responsible machine learning is and how to use it with Azure Machine Learning to understand models, protect data and control the model lifecycle.
+description: Learn what responsible AI is and how to use it with Azure Machine Learning to understand models, protect data and control the model lifecycle.
--++ Last updated 10/21/2021
-#Customer intent: As a data scientist, I want to know learn what responsible machine learning is and how I can use it in Azure Machine Learning.
+#Customer intent: As a data scientist, I want to learn what responsible AI is and how I can use it in Azure Machine Learning.
-# What is responsible machine learning? (preview)
+# What is responsible AI? (preview)
-In this article, you'll learn what responsible machine learning (ML) is and ways you can put it into practice with Azure Machine Learning.
+In this article, you'll learn what responsible AI is and ways you can put it into practice with Azure Machine Learning.
-## Responsible machine learning principles
+## Responsible AI principles
-Throughout the development and use of AI systems, trust must be at the core. Trust in the platform, process, and models. At Microsoft, responsible machine learning encompasses the following values and principles:
+Throughout the development and use of AI systems, trust must be at the core: trust in the platform, process, and models. At Microsoft, responsible AI with regard to machine learning encompasses the following values and principles:
- Understand machine learning models - Interpret and explain model behavior
Throughout the development and use of AI systems, trust must be at the core. Tru
- Control the end-to-end machine learning process - Document the machine learning lifecycle with datasheets As artificial intelligence and autonomous systems integrate more into the fabric of society, it's important to proactively make an effort to anticipate and mitigate the unintended consequences of these technologies.
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
The hosts in the following tables are owned by Microsoft, and provide services r
| -- | -- | -- | -- | | Azure Machine Learning studio | ml.azure.com | TCP | 443 | | API |\*.azureml.ms | TCP | 443 |
+| API | \*.azureml.net | TCP | 443 |
+| Model management | \*.modelmanagement.azureml.net | TCP | 443 |
| Integrated notebook | \*.notebooks.azure.net | TCP | 443 | | Integrated notebook | \<storage\>.file.core.windows.net | TCP | 443, 445 | | Integrated notebook | \<storage\>.dfs.core.windows.net | TCP | 443 |
The hosts in the following tables are owned by Microsoft, and provide services r
| -- | -- | -- | -- | | Azure Machine Learning studio | ml.azure.us | TCP | 443 | | API | \*.ml.azure.us | TCP | 443 |
+| Model management | \*.modelmanagement.azureml.us | TCP | 443 |
| Integrated notebook | \*.notebooks.usgovcloudapi.net | TCP | 443 | | Integrated notebook | \<storage\>.file.core.usgovcloudapi.net | TCP | 443, 445 | | Integrated notebook | \<storage\>.dfs.core.usgovcloudapi.net | TCP | 443 |
The hosts in the following tables are owned by Microsoft, and provide services r
| -- | -- | -- | -- | | Azure Machine Learning studio | studio.ml.azure.cn | TCP | 443 | | API | \*.ml.azure.cn | TCP | 443 |
+| API | \*.azureml.cn | TCP | 443 |
+| Model management | \*.modelmanagement.ml.azure.cn | TCP | 443 |
| Integrated notebook | \*.notebooks.chinacloudapi.cn | TCP | 443 | | Integrated notebook | \<storage\>.file.core.chinacloudapi.cn | TCP | 443, 445 | | Integrated notebook | \<storage\>.dfs.core.chinacloudapi.cn | TCP | 443 |
To support logging of metrics and other monitoring information to Azure Monitor
* **dc.applicationinsights.azure.com** * **dc.applicationinsights.microsoft.com** * **dc.services.visualstudio.com**
-* **.in.applicationinsights.azure.com**
+* ***.in.applicationinsights.azure.com**
For a list of IP addresses for these hosts, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md).
This article is part of a series on securing an Azure Machine Learning workflow.
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
-For more information on configuring Azure Firewall, see [Tutorial: Deploy and configure Azure Firewall using the Azure portal](../firewall/tutorial-firewall-deploy-portal.md).
+For more information on configuring Azure Firewall, see [Tutorial: Deploy and configure Azure Firewall using the Azure portal](../firewall/tutorial-firewall-deploy-portal.md).
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
You can also configure forecasting tasks, which requires extra setup. See the [S
Automated machine learning tries different models and algorithms during the automation and tuning process. As a user, there is no need for you to specify the algorithm. The three different `task` parameter values determine the list of algorithms, or models, to apply. Use the `allowed_models` or `blocked_models` parameters to further modify iterations with the available models to include or exclude.
+The following table summarizes the supported models by task type.
-See the [SupportedModels reference documentation](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels) for a summarization of the supported models by task type.
+> [!NOTE]
+> If you plan to export your automated ML-created models to an [ONNX model](concept-onnx.md), only the algorithms indicated with an * (asterisk) can be converted to the ONNX format. Learn more about [converting models to ONNX](concept-automated-ml.md#use-with-onnx). <br> <br> Also note that ONNX currently supports only classification and regression tasks.
+>
+Classification | Regression | Time Series Forecasting
+|-- |-- |--
+[Logistic Regression](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#logisticregression-logisticregression-)* | [Elastic Net](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#elasticnet-elasticnet-)* | [AutoARIMA](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting#autoarima-autoarima-)
+[Light GBM](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#lightgbmclassifier-lightgbm-)* | [Light GBM](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#lightgbmregressor-lightgbm-)* | [Prophet](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#prophet-prophet-)
+[Gradient Boosting](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#gradientboosting-gradientboosting-)* | [Gradient Boosting](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#gradientboostingregressor-gradientboosting-)* | [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net)
+[Decision Tree](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#decisiontree-decisiontree-)* |[Decision Tree](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#decisiontreeregressor-decisiontree-)* |[Light GBM](https://lightgbm.readthedocs.io/en/latest/index.html)
+[K Nearest Neighbors](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#knearestneighborsclassifier-knn-)* |[K Nearest Neighbors](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#knearestneighborsregressor-knn-)* | [Gradient Boosting](https://scikit-learn.org/stable/modules/ensemble.html#regression)
+[Linear SVC](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#linearsupportvectormachine-linearsvm-)* |[LARS Lasso](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#lassolars-lassolars-)* | [Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression)
+[Support Vector Classification (SVC)](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#supportvectormachine-svm-)* |[Stochastic Gradient Descent (SGD)](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#sgdregressor-sgd-)* | [Arimax](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#arimax-arimax-)
+[Random Forest](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#randomforest-randomforest-)* | [Random Forest](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#randomforestregressor-randomforest-) | [LARS Lasso](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso)
+[Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)* | [Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)* | [Stochastic Gradient Descent (SGD)](https://scikit-learn.org/stable/modules/sgd.html#regression)
+[Xgboost](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#xgboostclassifier-xgboostclassifier-)* |[Xgboost](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#xgboostregressor-xgboostregressor-)* | [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests)
+[Averaged Perceptron Classifier](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#averagedperceptronclassifier-averagedperceptronclassifier-)| [Online Gradient Descent Regressor](/python/api/nimbusml/nimbusml.linear_model.onlinegradientdescentregressor?preserve-view=true&view=nimbusml-py-latest) | [Xgboost](https://xgboost.readthedocs.io/en/latest/parameter.html)
+[Naive Bayes](https://scikit-learn.org/stable/modules/naive_bayes.html#bernoulli-naive-bayes)* |[Fast Linear Regressor](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression#fastlinearregressor-fastlinearregressor-)| [ForecastTCN](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#tcnforecaster-tcnforecaster-)
+[Stochastic Gradient Descent (SGD)](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification#sgdclassifier-sgd-)* || Naive
+[Linear SVM Classifier](/python/api/nimbusml/nimbusml.linear_model.linearsvmbinaryclassifier?preserve-view=true&view=nimbusml-py-latest)* || SeasonalNaive
+||| Average
+||| SeasonalAverage
+||| [ExponentialSmoothing](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels.forecasting#exponentialsmoothing-exponentialsmoothing-)
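As a sketch of how the `allowed_models` parameter consumes the names above, the following illustrative snippet assumes the `azureml-train-automl` package and a `training_data` TabularDataset you have already prepared (both are assumptions here, and a workspace is still needed to actually submit the run):

```python
# Illustrative only: restrict an automated ML classification experiment to a
# subset of the models in the table above. Assumes the azureml-train-automl
# package; `training_data` is a TabularDataset from your workspace.
from azureml.train.automl import AutoMLConfig

def make_automl_config(training_data, label_column="target"):
    return AutoMLConfig(
        task="classification",
        training_data=training_data,
        label_column_name=label_column,
        primary_metric="accuracy",
        # Model names follow the SupportedModels constants linked in the table.
        allowed_models=["LogisticRegression", "LightGBM", "XGBoostClassifier"],
        experiment_timeout_hours=1,
    )
```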
### Primary metric
machine-learning How To Machine Learning Interpretability Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-aml.md
-+ Last updated 10/21/2021
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-machine-learning-cloud-parity.md
The information in the rest of this document provides information on what featur
| Labeler Portal | GA | YES | N/A | | Labeling using private workforce | GA | YES | N/A | | ML assisted labeling (Image classification and object detection) | Public Preview | YES | N/A |
-| **Responsible ML** | | | |
+| **Responsible AI** | | | |
| Explainability in UI | Public Preview | NO | N/A | | Differential privacy SmartNoise toolkit | OSS | NO | N/A | | custom tags in Azure Machine Learning to implement datasheets | GA | NO | N/A |
machine-learning Reference Yaml Component Command https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-component-command.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Compute Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-compute-aml.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-compute-instance.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Compute Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-compute-vm.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Dataset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-dataset.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values |
machine-learning Reference Yaml Datastore Blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-datastore-blob.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Datastore Data Lake Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-datastore-data-lake-gen1.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Datastore Data Lake Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-datastore-data-lake-gen2.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Datastore Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-datastore-files.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-deployment-batch.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-deployment-managed-online.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Endpoint Batch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-endpoint-batch.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Endpoint Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-endpoint-managed-online.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + > [!NOTE] > A fully specified sample YAML for managed online endpoints is available for [reference](https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.template.yaml)
Examples are available in the [examples GitHub repository](https://github.com/Az
- [Install and use the CLI (v2)](how-to-configure-cli.md) - Learn how to [deploy a model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md)-- [Troubleshooting managed online endpoints deployment and scoring (preview)](./how-to-troubleshoot-online-endpoints.md)
+- [Troubleshooting managed online endpoints deployment and scoring (preview)](./how-to-troubleshoot-online-endpoints.md)
machine-learning Reference Yaml Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-environment.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Job Command https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-job-command.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Job Component https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-job-component.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Job Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-job-pipeline.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Job Sweep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-job-sweep.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-model.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values |
machine-learning Reference Yaml Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-workspace.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] + ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Tutorial Pipeline Batch Scoring Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-pipeline-batch-scoring-classification.md
- Title: 'Tutorial: ML pipelines for batch scoring'-
-description: In this tutorial, you build a machine learning pipeline to perform batch scoring. Focus on machine learning instead of infrastructure and automation.
- Previously updated: 10/13/2020
-# Tutorial: Build an Azure Machine Learning pipeline for batch scoring
-
-In this advanced tutorial, you learn how to build an [Azure Machine Learning pipeline](concept-ml-pipelines.md) to run a batch scoring job. Machine learning pipelines optimize your workflow with speed, portability, and reuse, so you can focus on machine learning instead of infrastructure and automation. After you build and publish a pipeline, you configure a REST endpoint that you can use to trigger the pipeline from any HTTP library on any platform.
-
-The example uses a pretrained [Inception-V3](https://arxiv.org/abs/1512.00567) convolutional neural network model implemented in Tensorflow to classify unlabeled images.
-
-In this tutorial, you complete the following tasks:
-
-> [!div class="checklist"]
-> * Configure workspace
-> * Download and store sample data
-> * Create dataset objects to fetch and output data
-> * Download, prepare, and register the model in your workspace
-> * Provision compute targets and create a scoring script
-> * Use the `ParallelRunStep` class for async batch scoring
-> * Build, run, and publish a pipeline
-> * Enable a REST endpoint for the pipeline
-
-If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-
-## Prerequisites
-
-* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace or a compute instance.
-* After you complete the quickstart:
- 1. Select **Notebooks** in the studio.
- 1. Select the **Samples** tab.
- 1. Open the *tutorials/machine-learning-pipelines-advanced/tutorial-pipeline-batch-scoring-classification.ipynb* notebook.
-
-If you want to run the setup tutorial in your own [local environment](how-to-configure-environment.md#local), you can access the tutorial on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials). Run `pip install azureml-sdk[notebooks] azureml-pipeline-core azureml-pipeline-steps pandas requests` to get the required packages.
-
-## Configure workspace and create a datastore
-
-Create a workspace object from the existing Azure Machine Learning workspace.
-
-```python
-from azureml.core import Workspace
-ws = Workspace.from_config()
-```
-
-> [!IMPORTANT]
-> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information on creating a workspace, see [Create and manage Azure Machine Learning workspaces](how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md#workspace).
-
-## Create a datastore for sample images
-
-On the `pipelinedata` account, get the ImageNet evaluation public data sample from the `sampledata` public blob container. Call `register_azure_blob_container()` to make the data available to the workspace under the name `images_datastore`. Then, set the workspace default datastore as the output datastore. Use the output datastore to write scoring output in the pipeline.
-
-For more information on accessing data, see [How to access data](./how-to-access-data.md).
-
-```python
-from azureml.core.datastore import Datastore
-
-batchscore_blob = Datastore.register_azure_blob_container(ws,
- datastore_name="images_datastore",
- container_name="sampledata",
- account_name="pipelinedata",
- overwrite=True)
-
-def_data_store = ws.get_default_datastore()
-```
-
-## Create dataset objects
-
-When building pipelines, `Dataset` objects are used for reading data from workspace datastores, and `OutputFileDatasetConfig` objects are used for transferring intermediate data between pipeline steps.
-
-> [!Important]
-> The batch scoring example in this tutorial uses only one pipeline step. In use cases that have multiple steps, the typical flow will include these steps:
->
-> 1. Use `Dataset` objects as *inputs* to fetch raw data, perform some transformation, and then *output* with an `OutputFileDatasetConfig` object.
->
-> 2. Use the `OutputFileDatasetConfig` *output object* in the preceding step as an *input object*. Repeat it for subsequent steps.
-
-In this scenario, you create `Dataset` objects that correspond to the datastore directories for both the input images and the classification labels (y-test values). You also create an `OutputFileDatasetConfig` object for the batch scoring output data.
-
-```python
-from azureml.core.dataset import Dataset
-from azureml.data import OutputFileDatasetConfig
-
-input_images = Dataset.File.from_files((batchscore_blob, "batchscoring/images/"))
-label_ds = Dataset.File.from_files((batchscore_blob, "batchscoring/labels/"))
-output_dir = OutputFileDatasetConfig(name="scores")
-```
-
-Register the datasets to the workspace if you want to reuse them later. This step is optional.
-
-```python
-
-input_images = input_images.register(workspace = ws, name = "input_images")
-label_ds = label_ds.register(workspace = ws, name = "label_ds")
-```
-
-## Download and register the model
-
-Download the pretrained Tensorflow model to use it for batch scoring in a pipeline. First, create a local directory where you store the model. Then, download and extract the model.
-
-```python
-import os
-import tarfile
-import urllib.request
-
-if not os.path.isdir("models"):
- os.mkdir("models")
-
-response = urllib.request.urlretrieve("http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz", "model.tar.gz")
-tar = tarfile.open("model.tar.gz", "r:gz")
-tar.extractall("models")
-```
-
-Next, register the model to your workspace, so you can easily retrieve the model in the pipeline process. In the `register()` static function, the `model_name` parameter is the key you use to locate your model throughout the SDK.
-
-```python
-from azureml.core.model import Model
-
-model = Model.register(model_path="models/inception_v3.ckpt",
- model_name="inception",
- tags={"pretrained": "inception"},
- description="Imagenet trained tensorflow inception",
- workspace=ws)
-```
-
-## Create and attach the remote compute target
-
-Machine learning pipelines can't be run locally, so you run them on cloud resources or *remote compute targets*. A remote compute target is a reusable virtual compute environment where you run experiments and machine learning workflows.
-
-Run the following code to create a GPU-enabled [`AmlCompute`](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute) target, and then attach it to your workspace. For more information about compute targets, see the [conceptual article](./concept-compute-target.md).
--
-```python
-from azureml.core.compute import AmlCompute, ComputeTarget
-from azureml.exceptions import ComputeTargetException
-compute_name = "gpu-cluster"
-
-# checks to see if compute target already exists in workspace, else create it
-try:
- compute_target = ComputeTarget(workspace=ws, name=compute_name)
-except ComputeTargetException:
- config = AmlCompute.provisioning_configuration(vm_size="STANDARD_NC6",
- vm_priority="lowpriority",
- min_nodes=0,
- max_nodes=1)
-
- compute_target = ComputeTarget.create(workspace=ws, name=compute_name, provisioning_configuration=config)
- compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
-```
-
-## Write a scoring script
-
-To do the scoring, create a batch scoring script called `batch_scoring.py`, and then write it to the current directory. The script takes input images, applies the classification model, and then outputs the predictions to a results file.
-
-The `batch_scoring.py` script takes the following parameters, which get passed from the `ParallelRunStep` you create later:
-
-- `--model_name`: The name of the model being used.
-- `--labels_dir`: The location of the `labels.txt` file.
-
-The pipeline infrastructure uses the `ArgumentParser` class to pass parameters into pipeline steps. For example, in the following code, the first argument `--model_name` is given the property identifier `model_name`. In the `init()` function, `Model.get_model_path(args.model_name)` is used to access this property.
--
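As a standalone illustration of this pattern, the following sketch (with hypothetical argument values) shows how `dest` maps a flag to an attribute name and how `parse_known_args()` tolerates extra arguments that the pipeline runtime may pass:

```python
import argparse

# `dest` controls the attribute name under which a parsed value is stored.
parser = argparse.ArgumentParser(description="Start a tensorflow model serving")
parser.add_argument("--model_name", dest="model_name", required=True)
parser.add_argument("--labels_dir", dest="labels_dir", required=True)

# parse_known_args() returns (namespace, leftovers) instead of raising an
# error on arguments the script doesn't declare.
args, unknown = parser.parse_known_args(
    ["--model_name", "inception", "--labels_dir", "/tmp/labels", "--other", "x"]
)
print(args.model_name)  # inception
print(unknown)          # ['--other', 'x']
```

Because `ParallelRunStep` may inject its own runtime arguments, the script uses `parse_known_args()` rather than `parse_args()`.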
-```python
-%%writefile batch_scoring.py
-
-import os
-import argparse
-import datetime
-import time
-import tensorflow as tf
-from math import ceil
-import numpy as np
-import shutil
-from tensorflow.contrib.slim.python.slim.nets import inception_v3
-
-from azureml.core import Run
-from azureml.core.model import Model
-from azureml.core.dataset import Dataset
-
-slim = tf.contrib.slim
-
-image_size = 299
-num_channel = 3
--
-def get_class_label_dict(labels_dir):
- label = []
- labels_path = os.path.join(labels_dir, 'labels.txt')
- proto_as_ascii_lines = tf.gfile.GFile(labels_path).readlines()
- for l in proto_as_ascii_lines:
- label.append(l.rstrip())
- return label
--
-def init():
- global g_tf_sess, probabilities, label_dict, input_images
-
- parser = argparse.ArgumentParser(description="Start a tensorflow model serving")
- parser.add_argument('--model_name', dest="model_name", required=True)
- parser.add_argument('--labels_dir', dest="labels_dir", required=True)
- args, _ = parser.parse_known_args()
-
- label_dict = get_class_label_dict(args.labels_dir)
- classes_num = len(label_dict)
-
- with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
- input_images = tf.placeholder(tf.float32, [1, image_size, image_size, num_channel])
- logits, _ = inception_v3.inception_v3(input_images,
- num_classes=classes_num,
- is_training=False)
- probabilities = tf.argmax(logits, 1)
-
- config = tf.ConfigProto()
- config.gpu_options.allow_growth = True
- g_tf_sess = tf.Session(config=config)
- g_tf_sess.run(tf.global_variables_initializer())
- g_tf_sess.run(tf.local_variables_initializer())
-
- model_path = Model.get_model_path(args.model_name)
- saver = tf.train.Saver()
- saver.restore(g_tf_sess, model_path)
--
-def file_to_tensor(file_path):
- image_string = tf.read_file(file_path)
- image = tf.image.decode_image(image_string, channels=3)
-
- image.set_shape([None, None, None])
- image = tf.image.resize_images(image, [image_size, image_size])
- image = tf.divide(tf.subtract(image, [0]), [255])
- image.set_shape([image_size, image_size, num_channel])
- return image
--
-def run(mini_batch):
- result_list = []
- for file_path in mini_batch:
- test_image = file_to_tensor(file_path)
- out = g_tf_sess.run(test_image)
- result = g_tf_sess.run(probabilities, feed_dict={input_images: [out]})
- result_list.append(os.path.basename(file_path) + ": " + label_dict[result[0]])
- return result_list
-```
-
-> [!TIP]
-> The pipeline in this tutorial has only one step, and it writes the output to a file. For multi-step pipelines, you also use `ArgumentParser` to define a directory to write output data for input to subsequent steps. For an example of passing data between multiple pipeline steps by using the `ArgumentParser` design pattern, see the [notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb).
-
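A minimal local sketch of that multi-step pattern, using illustrative argument names (`--input_dir`, `--output_dir`) rather than the notebook's actual ones:

```python
import argparse
import os
import tempfile

def run_step(argv):
    # Each step declares a directory to read from and a directory to write to;
    # a pipeline chains steps by wiring one step's --output_dir to the next
    # step's --input_dir.
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_dir", required=True)
    parser.add_argument("--output_dir", required=True)
    args, _ = parser.parse_known_args(argv)

    os.makedirs(args.output_dir, exist_ok=True)
    out_file = os.path.join(args.output_dir, "step_output.txt")
    with open(out_file, "w") as fh:
        fh.write(f"processed data from {args.input_dir}\n")
    return out_file

# Simulate two chained steps locally.
step1_dir = tempfile.mkdtemp()
step2_dir = tempfile.mkdtemp()
run_step(["--input_dir", "raw-data", "--output_dir", step1_dir])
result = run_step(["--input_dir", step1_dir, "--output_dir", step2_dir])
print(open(result).read())
```

In a real pipeline, the directory paths come from dataset mounts rather than `tempfile`, but the argument-wiring idea is the same.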
-## Build the pipeline
-
-Before you run the pipeline, create an object that defines the Python environment and the dependencies that your `batch_scoring.py` script requires. The main dependency required is Tensorflow, but you also install `azureml-core` and `azureml-dataprep[fuse]`, which are required by `ParallelRunStep`. Also, specify Docker and Docker-GPU support.
-
-```python
-from azureml.core import Environment
-from azureml.core.conda_dependencies import CondaDependencies
-from azureml.core.runconfig import DEFAULT_GPU_IMAGE
-
-cd = CondaDependencies.create(pip_packages=["tensorflow-gpu==1.15.2",
- "azureml-core", "azureml-dataprep[fuse]"])
-env = Environment(name="parallelenv")
-env.python.conda_dependencies = cd
-env.docker.base_image = DEFAULT_GPU_IMAGE
-```
-
-### Create the configuration to wrap the script
-
-Create the pipeline step using the script, environment configuration, and parameters. Specify the compute target you already attached to your workspace.
-
-```python
-from azureml.pipeline.steps import ParallelRunConfig
-
-parallel_run_config = ParallelRunConfig(
- environment=env,
- entry_script="batch_scoring.py",
- source_directory=".",
- output_action="append_row",
- mini_batch_size="20",
- error_threshold=1,
- compute_target=compute_target,
- process_count_per_node=2,
- node_count=1
-)
-```
-
-### Create the pipeline step
-
-A pipeline step is an object that encapsulates everything you need to run a pipeline, including:
-
-* Environment and dependency settings
-* The compute resource to run the pipeline on
-* Input and output data, and any custom parameters
-* Reference to a script or SDK logic to run during the step
-
-Multiple classes inherit from the parent class [`PipelineStep`](/python/api/azureml-pipeline-core/azureml.pipeline.core.builder.pipelinestep). You can choose classes to use specific frameworks or stacks to build a step. In this example, you use the `ParallelRunStep` class to define your step logic by using a custom Python script. If an argument to your script is either an input to the step or an output of the step, the argument must be defined *both* in the `arguments` array *and* in either the `input` or the `output` parameter, respectively.
-
-In scenarios where there is more than one step, an object reference in the `outputs` array becomes available as an *input* for a subsequent pipeline step.
-
-```python
-from azureml.pipeline.steps import ParallelRunStep
-from datetime import datetime
-
-parallel_step_name = "batchscoring-" + datetime.now().strftime("%Y%m%d%H%M")
-
-label_config = label_ds.as_named_input("labels_input")
-
-batch_score_step = ParallelRunStep(
- name=parallel_step_name,
- inputs=[input_images.as_named_input("input_images")],
- output=output_dir,
- arguments=["--model_name", "inception",
- "--labels_dir", label_config],
- side_inputs=[label_config],
- parallel_run_config=parallel_run_config,
- allow_reuse=False
-)
-```
-
-For a list of all the classes you can use for different step types, see the [steps package](/python/api/azureml-pipeline-steps/azureml.pipeline.steps).
-
-## Submit the pipeline
-
-Now, run the pipeline. First, create a `Pipeline` object by using your workspace reference and the pipeline step you created. The `steps` parameter is an array of steps. In this case, there's only one step for batch scoring. To build pipelines that have multiple steps, place the steps in order in this array.
-
-Next, use the `Experiment.submit()` function to submit the pipeline for execution. The `wait_for_completion` function outputs logs during the pipeline build process. You can use the logs to see current progress.
-
-> [!IMPORTANT]
-> The first pipeline run takes roughly *15 minutes*. All dependencies must be downloaded, a Docker image is created, and the Python environment is provisioned and created. Running the pipeline again takes significantly less time because those resources are reused instead of created. However, total run time for the pipeline depends on the workload of your scripts and the processes that are running in each pipeline step.
-
-```python
-from azureml.core import Experiment
-from azureml.pipeline.core import Pipeline
-
-pipeline = Pipeline(workspace=ws, steps=[batch_score_step])
-pipeline_run = Experiment(ws, 'Tutorial-Batch-Scoring').submit(pipeline)
-pipeline_run.wait_for_completion(show_output=True)
-```
-
-### Download and review output
-
-Run the following code to download the output file that's created from the `batch_scoring.py` script. Then, explore the scoring results.
-
-```python
-import pandas as pd
-
-batch_run = next(pipeline_run.get_children())
-batch_output = batch_run.get_output_data("scores")
-batch_output.download(local_path="inception_results")
-
-for root, dirs, files in os.walk("inception_results"):
- for file in files:
- if file.endswith("parallel_run_step.txt"):
- result_file = os.path.join(root, file)
-
-df = pd.read_csv(result_file, delimiter=":", header=None)
-df.columns = ["Filename", "Prediction"]
-print("Prediction has ", df.shape[0], " rows")
-df.head(10)
-```
-
-## Publish and run from a REST endpoint
-
-Run the following code to publish the pipeline to your workspace. In your workspace in Azure Machine Learning studio, you can see metadata for the pipeline, including run history and durations. You can also run the pipeline manually from the studio.
-
-Publishing the pipeline enables a REST endpoint that you can use to run the pipeline from any HTTP library on any platform.
-
-```python
-published_pipeline = pipeline_run.publish_pipeline(
- name="Inception_v3_scoring", description="Batch scoring using Inception v3 model", version="1.0")
-
-published_pipeline
-```
-
-To run the pipeline from the REST endpoint, you need an OAuth2 Bearer-type authentication header. The following example uses interactive authentication (for illustration purposes), but for most production scenarios that require automated or headless authentication, use service principal authentication as [described in this article](how-to-setup-authentication.md).
-
-Service principal authentication involves creating an *App Registration* in *Azure Active Directory*. First, you generate a client secret, and then you grant your service principal *role access* to your machine learning workspace. Use the [`ServicePrincipalAuthentication`](/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication) class to manage your authentication flow.
-
-Both [`InteractiveLoginAuthentication`](/python/api/azureml-core/azureml.core.authentication.interactiveloginauthentication) and `ServicePrincipalAuthentication` inherit from `AbstractAuthentication`. In both cases, use the [`get_authentication_header()`](/python/api/azureml-core/azureml.core.authentication.abstractauthentication#get-authentication-header--) function in the same way to fetch the header:
-
-```python
-from azureml.core.authentication import InteractiveLoginAuthentication
-
-interactive_auth = InteractiveLoginAuthentication()
-auth_header = interactive_auth.get_authentication_header()
-```
-
-Get the REST URL from the `endpoint` property of the published pipeline object. You can also find the REST URL in your workspace in Azure Machine Learning studio.
-
-Build an HTTP POST request to the endpoint. Specify your authentication header in the request. Add a JSON payload object that has the experiment name.
-
-Make the request to trigger the run. Include code to access the `Id` key from the response dictionary to get the value of the run ID.
-
-```python
-import requests
-
-rest_endpoint = published_pipeline.endpoint
-response = requests.post(rest_endpoint,
- headers=auth_header,
- json={"ExperimentName": "Tutorial-Batch-Scoring",
- "ParameterAssignments": {"process_count_per_node": 6}})
-run_id = response.json()["Id"]
-```
-
-Use the run ID to monitor the status of the new run. The new run takes another 10 to 15 minutes to finish.
-
-The new run will look similar to the pipeline you ran earlier in the tutorial. You can choose not to view the full output.
-
-```python
-from azureml.pipeline.core.run import PipelineRun
-from azureml.widgets import RunDetails
-
-published_pipeline_run = PipelineRun(ws.experiments["Tutorial-Batch-Scoring"], run_id)
-RunDetails(published_pipeline_run).show()
-```
-
-## Clean up resources
-
-Don't complete this section if you plan to run other Azure Machine Learning tutorials.
-
-### Stop the compute instance
--
-### Delete everything
-
-If you don't plan to use the resources you created, delete them, so you don't incur any charges:
-
-1. In the Azure portal, in the left menu, select **Resource groups**.
-1. In the list of resource groups, select the resource group you created.
-1. Select **Delete resource group**.
-1. Enter the resource group name. Then, select **Delete**.
-
-You can also keep the resource group but delete a single workspace. Display the workspace properties, and then select **Delete**.
-
-## Next steps
-
-In this machine learning pipelines tutorial, you did the following tasks:
-
-> [!div class="checklist"]
-> * Built a pipeline with environment dependencies to run on a remote GPU compute resource.
-> * Created a scoring script to run batch predictions by using a pretrained Tensorflow model.
-> * Published a pipeline and enabled it to be run from a REST endpoint.
-
-For more examples of how to build pipelines by using the machine learning SDK, see the [notebook repository](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines).
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-pipeline-python-sdk.md
+
+ Title: 'Tutorial: ML pipelines for training'
+
+description: In this tutorial, you build a machine learning pipeline for image classification. Focus on machine learning instead of infrastructure and automation.
+ Last updated: 01/28/2022
+# Tutorial: Build an Azure Machine Learning pipeline for image classification
+
+In this tutorial, you learn how to build an [Azure Machine Learning pipeline](concept-ml-pipelines.md) to prepare data and train a machine learning model. Machine learning pipelines optimize your workflow with speed, portability, and reuse, so you can focus on machine learning instead of infrastructure and automation.
+
+The example trains a small [Keras](https://keras.io/) convolutional neural network to classify images in the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
+
+In this tutorial, you complete the following tasks:
+
+> [!div class="checklist"]
+> * Configure workspace
+> * Create an Experiment to hold your work
+> * Provision a ComputeTarget to do the work
+> * Create a Dataset in which to store compressed data
+> * Create a pipeline step to prepare the data for training
+> * Define a runtime Environment in which to perform training
+> * Create a pipeline step to define the neural network and perform the training
+> * Compose a Pipeline from the pipeline steps
+> * Run the pipeline in the experiment
+> * Review the output of the steps and the trained neural network
+> * Register the model for further use
+
+If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+## Prerequisites
+
+* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace.
+* A Python environment in which you've installed both the `azureml-core` and `azureml-pipelines` packages. This environment is for defining and controlling your Azure Machine Learning resources and is separate from the environment used at runtime for training.
+
+> [!Important]
+> Currently, the most recent Python release compatible with `azureml-pipelines` is Python 3.8. If you have difficulty installing the `azureml-pipelines` package, ensure that `python --version` reports a compatible release. Consult the documentation of your Python virtual environment manager (`venv`, `conda`, and so on) for instructions.
+
+## Start an interactive Python session
+
+This tutorial uses the Python SDK for Azure ML to create and control an Azure Machine Learning pipeline. The tutorial assumes that you'll be running the code snippets interactively in either a Python REPL environment or a Jupyter notebook.
+
+* This tutorial is based on the `image-classification.ipynb` notebook found in the `python-sdk/tutorial/using-pipelines` directory of the [AzureML Examples](https://github.com/azure/azureml-examples) repository. The source code for the steps themselves is in the `keras-mnist-fashion` subdirectory.
++
+## Import types
+
+Import all the Azure Machine Learning types that you'll need for this tutorial:
+
+```python
+import os
+import azureml.core
+from azureml.core import (
+ Workspace,
+ Experiment,
+ Dataset,
+ Datastore,
+ ComputeTarget,
+ Environment,
+ ScriptRunConfig
+)
+from azureml.data import OutputFileDatasetConfig
+from azureml.core.compute import AmlCompute
+from azureml.core.compute_target import ComputeTargetException
+from azureml.pipeline.steps import PythonScriptStep
+from azureml.pipeline.core import Pipeline
+
+# check core SDK version number
+print("Azure ML SDK Version: ", azureml.core.VERSION)
+```
+
+The Azure ML SDK version should be 1.37 or greater. If it isn't, upgrade with `pip install --upgrade azureml-core`.
+
+## Configure workspace
+
+Create a workspace object from the existing Azure Machine Learning workspace.
+
+```python
+workspace = Workspace.from_config()
+```
+
+> [!IMPORTANT]
+> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information on creating a workspace, see [Create and manage Azure Machine Learning workspaces](how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md#workspace).
+
+## Create the infrastructure for your pipeline
+
+Create an `Experiment` object to hold the results of your pipeline runs:
+
+```python
+exp = Experiment(workspace=workspace, name="keras-mnist-fashion")
+```
+
+Create a `ComputeTarget` that represents the machine resource on which your pipeline will run. The simple neural network used in this tutorial trains in just a few minutes even on a CPU-based machine. If you wish to use a GPU for training, set `use_gpu` to `True`. Provisioning a compute target generally takes about five minutes.
+
+```python
+use_gpu = False
+
+# choose a name for your cluster
+cluster_name = "gpu-cluster" if use_gpu else "cpu-cluster"
+
+found = False
+# Check if this compute target already exists in the workspace.
+cts = workspace.compute_targets
+if cluster_name in cts and cts[cluster_name].type == "AmlCompute":
+ found = True
+ print("Found existing compute target.")
+ compute_target = cts[cluster_name]
+if not found:
+ print("Creating a new compute target...")
+ compute_config = AmlCompute.provisioning_configuration(
+ vm_size="STANDARD_NC6" if use_gpu else "STANDARD_D2_V2",
+ # vm_priority = 'lowpriority', # optional
+ max_nodes=4,
+ )
+
+ # Create the cluster.
+ compute_target = ComputeTarget.create(workspace, cluster_name, compute_config)
+
+ # Can poll for a minimum number of nodes and for a specific timeout.
+ # If no min_node_count is provided, it will use the scale settings for the cluster.
+ compute_target.wait_for_completion(
+ show_output=True, min_node_count=None, timeout_in_minutes=10
+ )
+# For a more detailed view of current AmlCompute status, use get_status():
+# print(compute_target.get_status().serialize())
+```
+
+> [!Note]
+> GPU availability depends on the quota of your Azure subscription and upon Azure capacity. See [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md).
+
+### Create a dataset for the Azure-stored data
+
+Fashion-MNIST is a dataset of fashion images divided into 10 classes. Each image is a 28x28 grayscale image, and there are 60,000 training and 10,000 test images. As an image classification problem, Fashion-MNIST is harder than the classic MNIST handwritten digit database. It's distributed in the same compressed binary form as the original [handwritten digit database](http://yann.lecun.com/exdb/mnist/).
+
+To create a `Dataset` that references the Web-based data, run:
+
+```python
+data_urls = ["https://data4mldemo6150520719.blob.core.windows.net/demo/mnist-fashion"]
+fashion_ds = Dataset.File.from_files(data_urls)
+
+# list the files referenced by fashion_ds
+print(fashion_ds.to_path())
+```
+
+This code completes quickly. The underlying data remains in the Azure storage resource specified in the `data_urls` array.
+
+## Create the data-preparation pipeline step
+
+The first step in this pipeline will convert the compressed data files of `fashion_ds` into a dataset in your own workspace consisting of CSV files ready for use in training. Once registered with the workspace, your collaborators can access this data for their own analysis, training, and so on.
+
+```python
+datastore = workspace.get_default_datastore()
+prepared_fashion_ds = OutputFileDatasetConfig(
+ destination=(datastore, "outputdataset/{run-id}")
+).register_on_complete(name="prepared_fashion_ds")
+```
+
+The above code specifies a dataset that is based on the output of a pipeline step. The underlying processed files will be put in the workspace's default datastore's blob storage at the path specified in `destination`. The dataset will be registered in the workspace with the name `prepared_fashion_ds`.
+
+### Create the pipeline step's source
+
+The code that you've executed so far has created and controlled Azure resources. Now it's time to write the code that does the first step in the problem domain.
+
+If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`.
+
+If you're working from scratch, create a subdirectory called `keras-mnist-fashion/`. Create a new file, add the following code to it, and name the file `prepare.py`.
+
+```python
+# prepare.py
+# Converts MNIST-formatted files at the passed-in input path to a passed-in output path
+import os
+import sys
+
+# Conversion routine for MNIST binary format
+def convert(imgf, labelf, outf, n):
+ f = open(imgf, "rb")
+ l = open(labelf, "rb")
+ o = open(outf, "w")
+
+ f.read(16)
+ l.read(8)
+ images = []
+
+ for i in range(n):
+ image = [ord(l.read(1))]
+ for j in range(28 * 28):
+ image.append(ord(f.read(1)))
+ images.append(image)
+
+ for image in images:
+ o.write(",".join(str(pix) for pix in image) + "\n")
+ f.close()
+ o.close()
+ l.close()
+
+# The MNIST-formatted source
+mounted_input_path = sys.argv[1]
+# The output directory at which the outputs will be written
+mounted_output_path = sys.argv[2]
+
+# Create the output directory
+os.makedirs(mounted_output_path, exist_ok=True)
+
+# Convert the training data
+convert(
+    os.path.join(mounted_input_path, "mnist-fashion/train-images-idx3-ubyte"),
+    os.path.join(mounted_input_path, "mnist-fashion/train-labels-idx1-ubyte"),
+    os.path.join(mounted_output_path, "mnist_train.csv"),
+    60000,
+)
+
+# Convert the test data
+convert(
+    os.path.join(mounted_input_path, "mnist-fashion/t10k-images-idx3-ubyte"),
+    os.path.join(mounted_input_path, "mnist-fashion/t10k-labels-idx1-ubyte"),
+    os.path.join(mounted_output_path, "mnist_test.csv"),
+    10000,
+)
+```
+
+The code in `prepare.py` takes two command-line arguments: the first is assigned to `mounted_input_path` and the second to `mounted_output_path`. If the output directory doesn't exist, the call to `os.makedirs` creates it. Then, the program converts the training and testing data and outputs the comma-separated files to the `mounted_output_path`.
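To see the conversion logic in isolation, the following self-contained sketch runs the same parsing steps that `convert` performs (skip the 16-byte image header and 8-byte label header, then read one label byte plus 28×28 pixel bytes per record) against tiny fabricated in-memory data. The helper name `convert_streams` and all byte values here are invented for illustration; the real script reads the mounted Fashion-MNIST files.

```python
import io

def convert_streams(f, l, o, n):
    # Same parsing steps as prepare.py's convert(), but on file-like objects
    f.read(16)  # skip the IDX image-file header
    l.read(8)   # skip the IDX label-file header
    for _ in range(n):
        row = [ord(l.read(1))]                             # label byte first
        row.extend(ord(f.read(1)) for _ in range(28 * 28))  # then 784 pixels
        o.write(",".join(str(pix) for pix in row) + "\n")

# Fabricated inputs: dummy headers, then 2 records
images = io.BytesIO(b"\x00" * 16 + b"\x07" * (28 * 28 * 2))  # every pixel = 7
labels = io.BytesIO(b"\x00" * 8 + bytes([3, 5]))             # labels 3 and 5
out = io.StringIO()

convert_streams(images, labels, out, 2)
rows = out.getvalue().splitlines()
# Each CSV row is a label followed by 784 pixel values
```

Each output row therefore has 785 comma-separated values, which is the shape the training step later reads back as delimited files.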
+
+### Specify the pipeline step
+
+Back in the Python environment you're using to specify the pipeline, run this code to create a `PythonScriptStep` for your preparation code:
+
+```python
+script_folder = "./keras-mnist-fashion"
+
+prep_step = PythonScriptStep(
+    name="prepare step",
+    script_name="prepare.py",
+    # On the compute target, mount fashion_ds dataset as input, prepared_fashion_ds as output
+    arguments=[fashion_ds.as_named_input("fashion_ds").as_mount(), prepared_fashion_ds],
+    source_directory=script_folder,
+    compute_target=compute_target,
+    allow_reuse=True,
+)
+```
+
+The call to `PythonScriptStep` specifies that, when the pipeline step is run:
+
+* All the files in the `script_folder` directory are uploaded to the `compute_target`
+* Among those uploaded source files, the file `prepare.py` will be run
+* The `fashion_ds` and `prepared_fashion_ds` datasets will be mounted on the `compute_target` and appear as directories
+* The path to the `fashion_ds` files will be the first argument to `prepare.py`. In `prepare.py`, this argument is assigned to `mounted_input_path`
+* The path to the `prepared_fashion_ds` will be the second argument to `prepare.py`. In `prepare.py`, this argument is assigned to `mounted_output_path`
+* Because `allow_reuse` is `True`, the step won't be rerun until its source files or inputs change
+* This `PythonScriptStep` will be named `prepare step`
+
+Modularity and reuse are key benefits of pipelines. Azure Machine Learning automatically detects source code and Dataset changes. If `allow_reuse` is `True`, the output of a step that isn't affected by a change is reused without rerunning the step. If a step relies on a data source external to Azure Machine Learning that may change (for instance, a URL that contains sales data), set `allow_reuse` to `False` and the pipeline step will run every time the pipeline is run.
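Conceptually, the reuse check behaves like a fingerprint comparison: if a fingerprint of a step's source files and input identifiers matches a previous run, the cached output can be served. The sketch below illustrates that idea only; it is not Azure Machine Learning's actual implementation, and every name in it (`fingerprint`, `run_step`, the cache) is invented.

```python
# Illustration of output reuse via fingerprinting (NOT the real Azure ML logic).
import hashlib

def fingerprint(source_files: dict, input_ids: list) -> str:
    # Hash source file contents plus input dataset identifiers
    h = hashlib.sha256()
    for name in sorted(source_files):
        h.update(name.encode())
        h.update(source_files[name].encode())
    for ds in sorted(input_ids):
        h.update(ds.encode())
    return h.hexdigest()

cache = {}

def run_step(source_files, input_ids, allow_reuse=True):
    key = fingerprint(source_files, input_ids)
    if allow_reuse and key in cache:
        return cache[key], True           # (cached output, reused)
    output = "output-for-" + key[:8]      # stand-in for real computation
    cache[key] = output
    return output, False

src = {"prepare.py": "convert(...)"}
out1, reused1 = run_step(src, ["fashion_ds"])   # first run: computed
out2, reused2 = run_step(src, ["fashion_ds"])   # nothing changed: reused
out3, reused3 = run_step({"prepare.py": "convert(...)  # edited"}, ["fashion_ds"])
```

Editing the source (the third call) changes the fingerprint, so the step runs again, which matches the behavior described above.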
+
+## Create the training step
+
+Once the data has been converted from the compressed format to CSV files, it can be used for training a convolutional neural network.
+
+### Create the training step's source
+
+With larger pipelines, it's a good practice to put each step's source code in a separate directory (`src/prepare/`, `src/train/`, and so on) but for this tutorial, just use or create the file `train.py` in the same `keras-mnist-fashion/` source directory.
+
+Most of this code should be familiar to ML developers:
+
+* The data is partitioned into train and validation sets for training, and a separate test subset for final scoring
+* The input shape is 28x28x1 (only 1 because the input is grayscale), there will be 256 inputs in a batch, and there are 10 classes
+* The number of training epochs will be 10
+* The model has three convolutional layers, with max pooling and dropout, followed by a dense layer and softmax head
+* The model is fitted for 10 epochs and then evaluated
+* The model architecture is written to `outputs/model/model.json` and the weights to `outputs/model/model.h5`
+
+Some of the code, though, is specific to Azure Machine Learning. `run = Run.get_context()` retrieves a [`Run`](/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py&preserve-view=True) object, which contains the current service context. The `train.py` source uses this `run` object to retrieve the input dataset via its name (an alternative to the code in `prepare.py` that retrieved the dataset via the `argv` array of script arguments).
+
+The `run` object is also used to log the training progress at the end of every epoch and, at the end of training, to log the graph of loss and accuracy over time.
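The per-epoch logging pattern can be shown without Azure or TensorFlow. In the sketch below, `FakeRun` is an invented stand-in for the real `Run` object, and the training loop with its loss values is a placeholder for `model.fit`; the point is only the shape of the calls that `train.py` makes (`run.log(...)` once per epoch, plus summary values at the end).

```python
# Stand-in for azureml.core.Run to illustrate the logging calls in train.py.
# FakeRun and the loss/accuracy values below are fabricated for demonstration.
class FakeRun:
    def __init__(self):
        self.metrics = {}

    def log(self, name, value):
        # The real Run.log sends the metric to the Azure ML service;
        # here we just record it locally.
        self.metrics.setdefault(name, []).append(value)

run = FakeRun()  # real code: run = Run.get_context()

epochs = 10
for epoch in range(epochs):
    loss = 1.0 / (epoch + 1)      # placeholder for values from model.fit()
    acc = 1.0 - loss / 2
    run.log("Loss", loss)         # logged at the end of every epoch
    run.log("Accuracy", acc)

# At the end of training, log a final summary value
run.log("Final test loss", run.metrics["Loss"][-1])
```

These are the metrics that `run.find_step_run("train step")[0].get_metrics()` retrieves after the pipeline completes.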
+
+### Create the training pipeline step
+
+The training step has a slightly more complex configuration than the preparation step. The preparation step used only standard Python libraries. More commonly, you'll need to modify the runtime environment in which your source code runs.
+
+Create a file `conda_dependencies.yml` with the following contents:
+
+```yml
+dependencies:
+- python=3.6.2
+- pip:
+  - azureml-core
+  - azureml-dataset-runtime
+  - keras==2.4.3
+  - tensorflow==2.4.3
+  - numpy
+  - scikit-learn
+  - pandas
+  - matplotlib
+```
+
+The `Environment` class represents the runtime environment in which a machine learning task runs. Associate the above specification with the training code:
+
+```python
+keras_env = Environment.from_conda_specification(
+    name="keras-env", file_path="./conda_dependencies.yml"
+)
+
+train_cfg = ScriptRunConfig(
+    source_directory=script_folder,
+    script="train.py",
+    compute_target=compute_target,
+    environment=keras_env,
+)
+```
+
+Creating the training step itself uses code similar to the code used to create the preparation step:
+
+```python
+train_step = PythonScriptStep(
+    name="train step",
+    arguments=[
+        prepared_fashion_ds.read_delimited_files().as_input(name="prepared_fashion_ds")
+    ],
+    source_directory=train_cfg.source_directory,
+    script_name=train_cfg.script,
+    runconfig=train_cfg.run_config,
+)
+```
+
+## Create and run the pipeline
+
+Now that you've specified data inputs and outputs and created your pipeline's steps, you can compose them into a pipeline and run it:
+
+```python
+pipeline = Pipeline(workspace, steps=[prep_step, train_step])
+run = exp.submit(pipeline)
+```
+
+The `Pipeline` object you create runs in your `workspace` and is composed of the preparation and training steps you've specified.
+
+> [!Note]
+> This pipeline has a simple dependency graph: the training step relies on the preparation step and the preparation step relies on the `fashion_ds` dataset. Production pipelines will often have much more complex dependencies. Steps may rely on multiple upstream steps, a source code change in an early step may have far-reaching consequences, and so on. Azure Machine Learning tracks these concerns for you. You need only pass in the array of `steps` and Azure Machine Learning takes care of calculating the execution graph.
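The execution-graph calculation described in the note can be illustrated with a plain topological sort. This sketch is conceptual only; the dependency mapping below is invented to mirror this tutorial's two steps, and it is not how Azure Machine Learning is implemented internally.

```python
# Illustration: deriving an execution order from step dependencies,
# the kind of graph calculation performed when a pipeline is submitted.
from graphlib import TopologicalSorter

# Each step maps to the set of steps it depends on (names invented)
dependencies = {
    "prepare step": set(),            # depends only on the input dataset
    "train step": {"prepare step"},   # needs the prepared data
}

order = list(TopologicalSorter(dependencies).static_order())
# "prepare step" is guaranteed to come before "train step"
```

With more steps, the same sort yields a valid order for any acyclic dependency graph, which is why you only need to pass the `steps` array.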
+
+The call to `submit` completes quickly and produces output similar to:
+
+```text
+Submitted PipelineRun 5968530a-abcd-1234-9cc1-46168951b5eb
+Link to Azure Machine Learning Portal: https://ml.azure.com/runs/abc-xyz...
+```
+
+You can monitor the pipeline run by opening the link or you can block until it completes by running:
+
+```python
+run.wait_for_completion(show_output=True)
+```
+
+> [!IMPORTANT]
+> The first pipeline run takes roughly *15 minutes*. All dependencies must be downloaded, a Docker image is created, and the Python environment is provisioned and created. Running the pipeline again takes significantly less time because those resources are reused instead of created. However, total run time for the pipeline depends on the workload of your scripts and the processes that are running in each pipeline step.
+
+Once the pipeline completes, you can retrieve the metrics you logged in the training step:
+
+```python
+run.find_step_run("train step")[0].get_metrics()
+```
+
+If you're satisfied with the metrics, you can register the model in your workspace:
+
+```python
+run.find_step_run("train step")[0].register_model(
+    model_name="keras-model",
+    model_path="outputs/model/",
+    datasets=[("train test data", fashion_ds)],
+)
+```
+
+## Clean up resources
+
+Don't complete this section if you plan to run other Azure Machine Learning tutorials.
+
+### Stop the compute instance
+
+### Delete everything
+
+If you don't plan to use the resources you created, delete them, so you don't incur any charges:
+
+1. In the Azure portal, in the left menu, select **Resource groups**.
+1. In the list of resource groups, select the resource group you created.
+1. Select **Delete resource group**.
+1. Enter the resource group name. Then, select **Delete**.
+
+You can also keep the resource group but delete a single workspace. Display the workspace properties, and then select **Delete**.
+
+## Next steps
+
+In this tutorial, you used the following types:
+
+* The `Workspace` represents your Azure Machine Learning workspace. It contains:
+ * The `Experiment` that contains the results of training runs of your pipeline
+ * The `Dataset` that lazily loaded the data held in the Fashion-MNIST datastore
+ * The `ComputeTarget` that represents the machine(s) on which the pipeline steps run
+ * The `Environment` that is the runtime environment in which the pipeline steps run
+ * The `Pipeline` that composes the `PythonScriptStep` steps into a whole
+ * The `Model` that you registered after being satisfied with the training process
+
+The `Workspace` object contains references to other resources (notebooks, endpoints, and so on) that weren't used in this tutorial. For more, see [What is an Azure Machine Learning workspace?](concept-workspace.md).
+
+The `OutputFileDatasetConfig` promotes the output of a run to a file-based dataset. For more information on datasets and working with data, see [How to access data](./how-to-access-data.md).
+
+For more on compute targets and environments, see [What are compute targets in Azure Machine Learning?](concept-compute-target.md) and [What are Azure Machine Learning environments?](concept-environments.md)
+
+The `ScriptRunConfig` associates a `ComputeTarget` and `Environment` with Python source files. A `PythonScriptStep` takes that `ScriptRunConfig` and defines its inputs and outputs, which in this pipeline was the file dataset built by the `OutputFileDatasetConfig`.
+
+For more examples of how to build pipelines by using the machine learning SDK, see the [example repository](https://github.com/Azure/azureml-examples).
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-managed.md
Title: Configure a managed application plan description: Configure a managed application plan for your Azure application offer in Partner Center (Azure Marketplace).
marketplace Azure App Marketing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-marketing.md
Title: Sell your Azure application offer description: Learn about the co-sell with Microsoft and resell through Cloud Solution Providers (CSP) program options for an Azure application offer in the Microsoft commercial marketplace (Azure Marketplace).
marketplace Azure App Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-offer-listing.md
Title: Configure your Azure application offer listing details description: Configure the listing details for your Azure application offer in Partner Center (Azure Marketplace).
marketplace Azure App Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-offer-setup.md
Title: Create an Azure application offer in Azure Marketplace description: Create an Azure application offer for listing or selling in Azure Marketplace, or through the Cloud Solution Provider (CSP) program using the commercial marketplace portal.
marketplace Azure App Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-plans.md
Title: Create plans for an Azure application offer description: Create plans for an Azure application offer in Partner Center (Azure Marketplace).
marketplace Azure App Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-preview.md
Title: Add a preview audience for an Azure Application offer description: Add a preview audience for an Azure application offer in Partner Center.
marketplace Azure App Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-properties.md
Title: How to configure your Azure Application offer properties description: Learn how to configure the properties for your Azure application offer in Partner Center (Azure Marketplace).
marketplace Azure App Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-solution.md
Title: Configure a solution template plan description: Configure a solution template plan for your Azure application offer in Partner Center (Azure Marketplace).
marketplace Azure App Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-technical-configuration.md
Title: Add technical details for an Azure application offer description: Add technical details for an Azure application offer in Partner Center (Azure Marketplace).
marketplace Azure App Test Publish https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-test-publish.md
Title: Test and publish an Azure application offer description: Submit your Azure application offer to preview, preview your offer, test, and publish it to Azure Marketplace.
marketplace Marketplace Managed Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-managed-apps.md
- Title: Azure applications managed application offer publishing guide - Azure Marketplace
-description: This article describes the requirements for publishing a managed application in Azure Marketplace.
-Previously updated: 11/11/2021
-# Publishing guide for Azure managed applications
-
-An Azure *managed application* offer is one way to publish an Azure application in Azure Marketplace. Managed applications are transact offers that are deployed and billed through Azure Marketplace. The listing option that a user sees is *Get It Now*.
-
-This article explains the requirements for the managed application offer type.
-
-Use the managed application offer type under the following conditions:
-- You're deploying a subscription-based solution for your customer by using either a virtual machine (VM) or an entire infrastructure as a service (IaaS)-based solution.
-- You or your customer requires the solution to be managed by a partner.
->[!NOTE]
->For example, a partner can be a systems integrator or a managed service provider (MSP).
-
-## Managed application offer requirements
-
-|Requirements |Details |
-|||
-|An Azure subscription | Managed applications must be deployed to a customer's subscription, but they can be managed by a third party. |
-|Billing and metering | The resources are provided in a customer's Azure subscription. Azure Resources that use the pay-as-you-go payment model are transacted with the customer via Microsoft and billed via the customer's Azure subscription. <br><br> For bring-your-own-license Azure Resources, Microsoft bills any infrastructure costs that are incurred in the customer subscription, but you transact software licensing fees with the customer directly. |
-|An Azure Managed Application package | The configured Azure Resource Manager Template and Create UI Definition that will be used to deploy your application to the customer's subscription.<br><br>For more information about creating a Managed Application, see [Managed Application Overview](../azure-resource-manager/managed-applications/publish-service-catalog-app.md).|
---
-> [!NOTE]
-> Managed applications must be deployable through Azure Marketplace. If customer communication is a concern, reach out to interested customers after you've enabled lead sharing.
-
-> [!Note]
-> A Cloud Solution Provider (CSP) partner channel opt-in is now available. For more information about marketing your offer through the Microsoft CSP partner channels, see [Cloud Solution Providers](./cloud-solution-providers.md).
-
-## Next steps
-
-If you haven't already done so, learn how to [Grow your cloud business with Azure Marketplace](https://azuremarketplace.microsoft.com/sell).
-
-To register for and start working in Partner Center:
-- [Sign in to Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership) to create or complete your offer.
-- See [Create an Azure application offer](azure-app-offer-setup.md) for more information.
marketplace Marketplace Solution Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-solution-templates.md
- Title: Publishing guide for Azure applications solution template offers - Azure Marketplace
-description: This article describes the requirements for publishing solution templates on Azure Marketplace.
-Previously updated: 07/01/2021
-# Publishing guide for Azure applications solution template offers
-
-This article explains the requirements for publishing solution template offers, which is one way to publish Azure application offers in Azure Marketplace. The solution template offer type requires an [Azure Resource Manager template (ARM template)](../azure-resource-manager/templates/overview.md) to automatically deploy your solution infrastructure.
-
-Use the Azure application *solution template* offer type under the following conditions:
-- Your solution requires additional deployment and configuration automation beyond a single virtual machine (VM), such as a combination of VMs, networking, and storage resources.
-- Your customers are going to manage the solution themselves.
-The listing option that a customer sees for this offer type is *Get It Now*.
-
-## Requirements for solution template offers
-
-| **Requirements** | **Details** |
-| | -- |
-|Billing and metering | Solution template offers are not transaction offers, but they can be used to deploy paid VM offers that are billed through the Microsoft commercial marketplace. The resources that the solution's ARM template deploys are set up in the customer's Azure subscription. Pay-as-you-go virtual machines are transacted with the customer via Microsoft and billed via the customer's Azure subscription.<br/> For bring-your-own-license (BYOL) billing, although Microsoft bills infrastructure costs that are incurred in the customer subscription, you transact your software licensing fees with the customer directly. |
-|Azure-compatible virtual hard disk (VHD) | VMs must be built on Windows or Linux. For more information, see: <ul> <li>[Create an Azure application offer](azure-app-offer-setup.md) (for Windows VHDs).</li><li>[Linux distributions endorsed on Azure](../virtual-machines/linux/endorsed-distros.md) (for Linux VHDs).</li></ul> |
-| Customer usage attribution | Enabling customer usage attribution is required on all solution templates that are published on Azure Marketplace. For more information about customer usage attribution and how to enable it, see [Azure partner customer usage attribution](./azure-partner-customer-usage-attribution.md). |
-| Use managed disks | [Managed disks](../virtual-machines/managed-disks-overview.md) is the default option for persisted disks of infrastructure as a service (IaaS) VMs in Azure. You must use managed disks in solution templates. <ul><li>To update your solution templates, follow the guidance in [Use managed disks in Azure Resource Manager templates](../virtual-machines/using-managed-disks-template-deployments.md), and use the provided [samples](https://github.com/Azure/azure-quickstart-templates).<br><br> </li><li>To publish the VHD as an image in Azure Marketplace, import the underlying VHD of the managed disks to a storage account by using either of the following methods:<ul><li>[Azure PowerShell](/previous-versions/azure/virtual-machines/scripts/virtual-machines-powershell-sample-copy-managed-disks-vhd) </li> <li> [The Azure CLI](/previous-versions/azure/virtual-machines/scripts/virtual-machines-cli-sample-copy-managed-disks-vhd) </li> </ul></ul> |
-
-## Next steps
-
-If you haven't already done so, learn how to [Grow your cloud business with Azure Marketplace](https://azuremarketplace.microsoft.com/sell).
-
-To register for and start working in Partner Center:
-- [Sign in to Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership) to create or complete your offer.
-- See [Create an Azure application offer](./azure-app-offer-setup.md) for more information.
marketplace Plan Azure App Managed App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-azure-app-managed-app.md
Title: Plan an Azure managed application for an Azure application offer description: Learn what is required to create a managed application plan for a new Azure application offer using the commercial marketplace portal in Microsoft Partner Center.
marketplace Plan Azure App Solution Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-azure-app-solution-template.md
Title: Plan a solution template for an Azure application offer description: Learn what is required to create a solution template plan for a new Azure application offer using the commercial marketplace portal in Microsoft Partner Center.
marketplace Plan Azure Application Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-azure-application-offer.md
Title: Plan an Azure Application offer for the commercial marketplace description: Plan an Azure application offer for Azure Marketplace using Partner Center.
marketplace Supported Html Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/supported-html-tags.md
Previously updated: 01/25/2021 Last updated: 01/26/2021

# HTML tags supported in commercial marketplace offer descriptions
mysql Concept Servers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concept-servers.md
When the server is in the **Stopped** state, the server's compute is not billed.
While the server is stopped, no management operations can be performed on the server. To change any configuration settings on the server, you will need to [start the server](how-to-stop-start-server-portal.md). Refer to the [stop/start limitations](./concepts-limitations.md#stopstart-operation).
+> [!NOTE]
+> Operations on servers that are in a [Stop](concept-servers.md#stopstart-an-azure-database-for-mysql-flexible-server) state are disabled and show as inactive in the Azure portal. Operations that are not supported on stopped servers include changing the pricing tier, number of vCores, storage size or IOPS, backup retention day, server tag, the server password, server parameters, storage auto-grow, GEO backup, HA, and user identity.
+
## How do I manage a server?

You can manage the creation, deletion, server parameter configuration (my.cnf), scaling, networking, security, high availability, backup & restore, and monitoring of your Azure Database for MySQL Flexible Server by using the [Azure portal](./quickstart-create-server-portal.md) or the [Azure CLI](./quickstart-create-server-cli.md). In addition, the following stored procedures are available in Azure Database for MySQL to perform certain database administration tasks, because SUPER user privilege is not supported on the server.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/whats-new.md
Last updated 10/12/2021
This article summarizes new releases and features in Azure Database for MySQL - Flexible Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.

## January 2022
+- **All Operations are disabled on Stopped Azure Database for MySQL - Flexible Server**
+ Operations on servers that are in a [Stop](concept-servers.md#stopstart-an-azure-database-for-mysql-flexible-server) state are disabled and show as inactive in the Azure portal. Operations that are not supported on stopped servers include changing the pricing tier, number of vCores, storage size or IOPS, backup retention day, server tag, the server password, server parameters, storage auto-grow, GEO backup, HA, and user identity.
+
+- **Bug fixes** The issue where the restart workflow got stuck on servers with HA and the geo-redundant backup option enabled is fixed.
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-high-availability.md
For flexible servers configured with high availability, these maintenance activi
Unplanned outages include software bugs or infrastructure component failures that impact the availability of the database. If the primary server becomes unavailable, it is detected by the monitoring system, which initiates a failover process. The process includes a few seconds of wait time to make sure it is not a false positive. Replication to the standby replica is severed and the standby replica is activated to be the primary database server. That includes the standby recovering any residual WAL files. Once it is fully recovered, DNS for the same endpoint is updated with the standby server's IP address. Clients can then retry connecting to the database server using the same connection string and resume their operations.

>[!NOTE]
-> Flexible servers configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss.The recovery tome objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer.
+> Flexible servers configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss). The recovery time objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer.
After the failover, while a new standby server is being provisioned, applications can still connect to the primary server and proceed with their read/write operations. Once the standby server is established, it will start recovering the logs that were generated after the failover.
purview Concept Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-network.md
Previously updated: 01/21/2022 Last updated: 01/26/2022

# Azure Purview network architecture and best practices
For performance and cost optimization, we highly recommended deploying one or mo
:::image type="content" source="media/concept-best-practices/network-pe-multi-region.png" alt-text="Screenshot that shows Azure Purview with private endpoints in a scenario of multiple virtual networks and multiple regions."lightbox="media/concept-best-practices/network-pe-multi-region.png":::
+### DNS configuration with private endpoints
+
+#### Name resolution for single Azure Purview account
+
+If you have one Azure Purview account in your tenant, and you have enabled private endpoints for account, portal and ingestion, you can use any of [the supported scenarios](catalog-private-link-name-resolution.md#deployment-options) for name resolution in your network.
+
+#### Name resolution for multiple Azure Purview accounts
+
+Follow these recommendations if your organization needs to deploy and maintain multiple Azure Purview accounts using private endpoints:
+
+1. Deploy at least one _account_ private endpoint for each Azure Purview account.
+2. Deploy at least one _ingestion_ private endpoint for each Azure Purview account.
+3. Deploy one _portal_ private endpoint for one of the Azure Purview accounts in your Azure environment. Create one DNS A record for the _portal_ private endpoint to resolve `web.purview.azure.com`.
+
+
+> [!NOTE]
+> The _portal_ private endpoint mainly renders static assets related to Azure Purview Studio and is independent of any specific Azure Purview account, so only one _portal_ private endpoint is needed to visit all Azure Purview accounts in the Azure environment.
+> You may need to deploy separate _portal_ private endpoints for each Azure Purview account in scenarios where Azure Purview accounts are deployed in isolated network segments.
+> The Azure Purview _portal_ serves static content for all customers and contains no customer information. Optionally, you can use the public network to reach `web.purview.azure.com` if your end users are allowed to access the Internet.
+
## Option 3: Use both private and public endpoints

You might choose an option in which a subset of your data sources uses private endpoints, and at the same time, you need to scan either of the following:
purview Concept Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-search.md
- Title: Understand search features in Azure Purview
-description: This article explains how Azure Purview enables data discovery through search features.
-Previously updated: 09/27/2021
-# Understand search features in Azure Purview
-
-This article provides an overview of the search experience in Azure Purview. Search is a core platform capability of Azure Purview, that powers the data discovery and data use governance experiences in an organization.
-
-## Search
-
-The Azure Purview search experience is powered by a managed search index. After a data source is registered with Azure Purview, its metadata is indexed by the search service to allow easy discovery. The index provides search relevance capabilities and completes search requests by querying millions of metadata assets. Search helps you to discover, understand, and use the data to get the most value out of it.
-
-The search experience in Azure Purview is a three stage process:
-
-1. The search box shows the history containing recently used keywords and assets.
-1. When you begin typing the keystrokes, the search suggests the matching keywords and assets.
-1. The search result page is shown with assets matching the keyword entered.
-
-## Reduce the time to discover data
-
-Data discovery is the first step for a data analytics or data governance workload for data consumers. Data discovery can be time consuming, because you may not know where to find the data that you want. Even after finding the data, you may have doubts about whether or not you can trust the data and take a dependency on it.
-
-The goal of search in Azure Purview is to speed up the process of data discovery by providing gestures to quickly find the data that matters.
-
-## Recent search and suggestions
-
-Many times, you may be working on multiple projects at the same time. To make it easier to resume previous projects, Azure Purview search provides the ability to see recent search keywords and suggestions. Also, you can manage the recent search history by selecting **View all** from the search box drop-down.
-
-## Filters
-
-Filters (also known as *facets*) are designed to complement searching. Filters are shown in the search result page. The filters in the search result page include classification, sensitivity label, data source, and owners. Users can select specific values in a filter to see only matching data assets, and restrict the search result to the matching assets.
-
-## Hit highlighting
-
-Matching keywords in the search result page are highlighted to make it easier to identify why a data asset was returned by search. The keyword match can occur in multiple fields such as data asset name, description, and owner.
-
-It may not be obvious why a data asset is included in search, even with hit highlighting enabled. All properties are searched by default. Therefore, a data asset might be returned because of a match on a column-level property. And because multiple users can annotate the data assets with their own classifications and descriptions, not all metadata is displayed in the list of search results.
-
-## Sort
-
-Users have two options to sort the search results. They can sort by the name of the asset or by default search relevance.
-
-## Search relevance
-
-Relevance is the default sort order in the search result page. Search relevance finds assets that include some or all of the search keywords. Assets that contain many instances of the search keywords are ranked higher. Also, custom scoring mechanisms are constantly tuned to improve search relevance.
-
-## Next steps
-
-* [Quickstart: Create an Azure Purview account in the Azure portal](create-catalog-portal.md)
-* [Quickstart: Create an Azure Purview account using Azure PowerShell/Azure CLI](create-catalog-powershell.md)
-* [Use the Azure Purview Studio](use-purview-studio.md)
purview How To Access Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-access-policies-storage.md
- Title: Data access policy provisioning for Azure Storage
-description: Step-by-step guide on how to integrate Azure Storage with Azure Purview to enable data owners to create access policies.
- Previously updated : 1/14/2022
-# Dataset provisioning by data owner for Azure Storage (preview)
-
-This guide describes how a data owner can enable access to data stored in Azure Storage from Azure Purview. Azure Purview policy authoring supports the following capabilities:
-- Allow access to data stored in Blob and Azure Data Lake Storage (ADLS) Gen2.
-> [!Note]
-> These capabilities are currently in preview. This preview version is provided without a service level agreement, and should not be used for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure
-Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Prerequisites
->[!IMPORTANT]
-> The access policy feature is only available on **new** Azure Purview and Azure Storage accounts.
-- Create a new or use an existing isolated test subscription. You can [follow this guide to create one](../cost-management-billing/manage/create-subscription.md).
-- Create a new or use an existing Azure Purview account. You can [follow our quick-start guide to create one](create-catalog-portal.md).
-- Create a new Azure Storage account in one of the regions listed below. You can [follow this guide to create one](../storage/common/storage-account-create.md). Only Storage account versions >= 81.x.x support policy enforcement.
-## Configuration
-
-### Register Azure Purview as a resource provider in other subscriptions
-Execute this step only if the Storage and Azure Purview accounts are in different subscriptions. Register Azure Purview as a resource provider in the subscription for the Azure Storage account by following this guide: [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md)
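If you prefer scripting this step, it can be sketched with the Azure CLI. This is an assumption-laden sketch, not the only route: the linked guide also covers the portal flow, and the subscription ID below is a placeholder.

```shell
# Select the subscription that contains the Azure Storage account
# ("<storage-account-subscription-id>" is a placeholder).
az account set --subscription "<storage-account-subscription-id>"

# Register the Microsoft.Purview resource provider in that subscription.
az provider register --namespace Microsoft.Purview

# Check progress; the state reads "Registered" once complete.
az provider show --namespace Microsoft.Purview --query registrationState -o tsv
```

Registration can take a few minutes; re-run the `az provider show` check until the state reads `Registered`.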
-
-### Configure permissions for policy management actions
-#### Storage account permissions
-A user needs **one of these** role combinations in the Azure Storage account to be able to register it for *Data use governance* in Azure Purview:
-- IAM *Owner*
-- Both IAM *Contributor* and IAM *User Access Administrator*
-
-You can follow this [guide to configure Azure RBAC permissions](../role-based-access-control/check-access.md).
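As a quick sketch of that check with the Azure CLI, you can list the roles a user currently holds on the Storage account. The assignee and scope values below are hypothetical placeholders.

```shell
# List the roles a user holds on a specific Storage account.
# The assignee and scope are placeholders — substitute your own values.
az role assignment list \
  --assignee "dataowner@contoso.com" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
  --query "[].roleDefinitionName" \
  --output tsv
```

The output should contain either `Owner`, or both `Contributor` and `User Access Administrator`.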
-
-#### Azure Purview account permissions
->[!IMPORTANT]
-> - Policy operations are only supported at **root collection level** and not child collection level.
-- User needs the Azure Purview *Data source admins* role at the root collection level to:
- - Register a source for *Data use governance*.
- - Publish a policy.
-- User needs the Azure Purview *Policy authors* role at the root collection level to create or edit policies.
-Check the section on managing Azure Purview role assignments in this [guide](how-to-create-and-manage-collections.md).
-
->[!WARNING]
-> **Known issues** related to permissions
-> - In addition to Azure Purview *Policy authors* role, user requires *Directory Reader* permission in Azure Active Directory to create data owner policy. Learn more about permissions for [Azure AD Directory Reader](../active-directory/roles/permissions-reference.md#directory-readers)
-> - The Azure Purview *Policy author* role is not sufficient to create policies. The user also needs the Azure Purview *Data source admin* role.
-
-### Register and scan data sources in Azure Purview
-Register and scan each data source with Azure Purview to later define access policies. You can follow these guides:
-- [Register and scan Azure Storage Blob - Azure Purview](register-scan-azure-blob-storage-source.md)
-- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Azure Purview](register-scan-adls-gen2.md)
->[!Important]
-> Make sure you write down the **Name** you use when registering a source in Azure Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name (i.e. the Storage account name).
-
-To use a data source for creating access policies in Azure Purview, enable it for access policy through the **Data use governance** toggle, as shown in the following image.
-
-![Image shows how to register a data source for policy.](./media/how-to-access-policies-storage/register-data-source-for-policy-storage.png)
-
->[!Note]
-> - To disable a source for *Data use governance*, first remove it from any policy to which it is bound (that is, published).
-> - While a user needs both the Azure Storage *Owner* and Azure Purview *Data source admin* roles to enable a source for *Data use governance*, either role can independently disable it.
-> - Disabling *Data use governance* for a subscription also disables it for all assets registered in that subscription.
-
-> [!WARNING]
-> **Known issues** related to source registration
-> - Moving data sources to a different resource group or subscription is not yet supported. If you want to move a data source, de-register it in Azure Purview before the move, and then register it again afterward.
-
-### Data use governance best practices
-- We highly encourage registering data sources for *Data use governance* and managing all associated access policies in a single Azure Purview account.
-- Should you have multiple Azure Purview accounts, be aware that **all** data sources belonging to a subscription must be registered for *Data use governance* in a single Azure Purview account. That Azure Purview account can be in any subscription in the tenant. The *Data use governance* toggle will become greyed out when there are invalid configurations. Some examples of valid and invalid configurations follow in the diagram below:
- - **Case 1** shows a valid configuration where a Storage account is registered in an Azure Purview account in the same subscription.
- - **Case 2** shows a valid configuration where a Storage account is registered in an Azure Purview account in a different subscription.
- - **Case 3** shows an invalid configuration arising because Storage accounts S3SA1 and S3SA2 both belong to Subscription 3, but are registered to different Azure Purview accounts. In that case, the *Data use governance* toggle will only work in the Azure Purview account that first registers a data source in that subscription. The toggle will then be greyed out for the other data source.
-
-![Diagram shows valid and invalid configurations when using multiple Azure Purview accounts to manage policies.](./media/how-to-access-policies-storage/valid-and-invalid-configurations.png)
-
-## Policy authoring
-
-This section describes the steps for creating, updating, and publishing Azure Purview access policies.
-
-### Create a new policy
-
-This section describes the steps to create a new policy in Azure Purview.
-
-1. Sign in to the Azure Purview portal.
-
-1. Navigate to the **Policy management** app using the left side panel.
-
-1. Select the **New Policy** button in the policy page.
-
- ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to create policies.](./media/how-to-access-policies-storage/policy-onboard-guide-1.png)
-
-1. The new policy page will appear. Enter the policy **Name** and **Description**.
-
-1. To add policy statements to the new policy, select the **New policy statement** button. This will bring up the policy statement builder.
-
-   ![Image shows how a data owner can create a new policy statement.](./media/how-to-access-policies-storage/create-new-policy.png)
-
-1. Select the **Effect** button and choose *Allow* from the drop-down list.
-
-1. Select the **Action** button and choose *Read* or *Modify* from the drop-down list.
-
-1. Select the **Data Resources** button to bring up the options to provide the data asset path, which will open on the right.
-
-1. Use the **Assets** box if you scanned the data source; otherwise, use the **Data sources** box above. Assuming the former, in the **Assets** box enter the **Data Source Type** and select the **Name** of a previously registered data source.
-
- ![Image shows how a data owner can select a Data Resource when editing a policy statement.](./media/how-to-access-policies-storage/select-data-source-type.png)
-
-1. Select the **Continue** button and traverse the hierarchy to select the folder or file. Then select the **Add** button. This will take you back to the policy editor.
-
-   ![Image shows how a data owner can select the asset when creating or editing a policy statement.](./media/how-to-access-policies-storage/select-asset.png)
-
-1. Select the **Subjects** button and enter the subject identity as a principal, group, or MSI. Then select the **OK** button. This will take you back to the policy editor.
-
- ![Image shows how a data owner can select the subject when creating or editing a policy statement.](./media/how-to-access-policies-storage/select-subject.png)
-
-1. Repeat steps 5 through 11 to enter any more policy statements.
-
-1. Select the **Save** button to save the policy.
-
-> [!Note]
-> - Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the application making the access will need to provide a fully qualified name (that is, a direct absolute path) to the data object. The following documents show examples of how to do that:
-> - [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster)
-> - [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob)
-> - Creating a policy at Storage account level will enable the Subjects to access system containers e.g., *$logs*. If this is undesired, first scan the data source and then create the policy at container or sub-container level.
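As an illustration of such a fully qualified name (the storage account, container, and file path below are hypothetical), an ADLS Gen2 direct absolute path in the *abfs* scheme is assembled like this:

```shell
# Hypothetical names used for illustration only.
STORAGE_ACCOUNT="contosodata"
CONTAINER="finance"
BLOB_PATH="reports/2022/q1.csv"

# Direct absolute path in the abfs scheme used by ADLS Gen2 clients:
ABFS_URI="abfs://${CONTAINER}@${STORAGE_ACCOUNT}.dfs.core.windows.net/${BLOB_PATH}"
echo "${ABFS_URI}"

# For Blob Storage, the same object addressed via the Azure CLI would look like:
#   az storage blob download --account-name "${STORAGE_ACCOUNT}" \
#     --container-name "${CONTAINER}" --name "${BLOB_PATH}" --file "q1.csv"
```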
-
-> [!WARNING]
-> **Known issues** related to Policy creation
-> - Do not create policy statements based on Azure Purview resource sets. Even if displayed in Azure Purview policy authoring UI, they are not yet enforced. Learn more about [resource sets](concept-resource-sets.md).
-> - Once a subscription is disabled for *Data use governance*, any underlying assets that are enabled for *Data use governance* will be disabled as well, which is the correct behavior. However, policy statements based on those assets will still be allowed after that.
-
-### Update or delete a policy
-
-The steps to update or delete a policy in Azure Purview are as follows.
-
-1. Sign in to the Azure Purview portal.
-
-1. Navigate to the Azure Purview **Policy management** app using the left side panel.
-
- ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to update a policy.](./media/how-to-access-policies-storage/policy-onboard-guide-2.png)
-
-1. The Policy portal will present the list of existing policies in Azure Purview. Select the policy that needs to be updated.
-
-1. The policy details page will appear, including Edit and Delete options. Select the **Edit** button, which brings up the policy statement builder for the statements in this policy. Now, any parts of the statements in this policy can be updated. To delete the policy, use the **Delete** button.
-
- ![Image shows how a data owner can edit or delete a policy statement.](./media/how-to-access-policies-storage/edit-policy.png)
-
-### Publish the policy
-
-A newly created policy is in the draft state. The process of publishing associates the new policy with one or more data sources under governance. This is called "binding" a policy to a data source.
-
-The steps to publish a policy are as follows.
-
-1. Sign in to the Azure Purview portal.
-
-1. Navigate to the Azure Purview Policy management app using the left side panel.
-
- ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to publish a policy.](./media/how-to-access-policies-storage/policy-onboard-guide-2.png)
-
-1. The Policy portal will present the list of existing policies in Azure Purview. Locate the policy that needs to be published. Select the **Publish** button in the top right corner of the page.
-
- ![Image shows how a data owner can publish a policy.](./media/how-to-access-policies-storage/publish-policy.png)
-
-1. A list of data sources is displayed. You can enter a name to filter the list. Select each data source where this policy is to be published, and then select the **Publish** button.
-
- ![Image shows how a data owner can select the data source where the policy will be published.](./media/how-to-access-policies-storage/select-data-sources-publish-policy.png)
-
->[!Important]
-> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in the data source.
-> - A policy doesn't need to be published again to take effect if the data resource remains the same.
-
-## Additional information
-
-### Limits
-The limit for Azure Purview policies that can be enforced by Storage accounts is 100 MB per subscription, which roughly equates to 5,000 policies.
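As a quick sanity check of those numbers (plain arithmetic on the documented figures, not an official sizing rule), 100 MB spread across roughly 5,000 policies works out to about 20 KB per policy on average:

```shell
# 100 MB policy budget per subscription, divided by ~5000 policies.
LIMIT_BYTES=$((100 * 1024 * 1024))   # 104857600 bytes
POLICY_COUNT=5000
BYTES_PER_POLICY=$((LIMIT_BYTES / POLICY_COUNT))
echo "${BYTES_PER_POLICY}"           # roughly 20 KB per policy
```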
-
-### Policy action mapping
-
-This section contains a reference of how actions in Azure Purview data policies map to specific actions in Azure Storage.
-
-| **Azure Purview policy action** | **Data source specific actions** |
-|---------------------------------|----------------------------------|
-| *Read* | Microsoft.Storage/storageAccounts/blobServices/containers/read |
-| | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
-| *Modify* | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
-| | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write |
-| | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action |
-| | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action |
-| | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete |
-| | Microsoft.Storage/storageAccounts/blobServices/containers/read |
-| | Microsoft.Storage/storageAccounts/blobServices/containers/write |
-| | Microsoft.Storage/storageAccounts/blobServices/containers/delete |
--
-## Next steps
-Check the blog and demo related to the capabilities mentioned in this how-to guide:
-
-* [What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
-* [Demo of access policy for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
purview How To Bulk Edit Assets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-bulk-edit-assets.md
Previously updated : 10/15/2021 Last updated : 01/25/2022 # How to bulk edit assets to annotate classifications, glossary terms and modify contacts
-This article describes how to tag multiple glossary terms, classifications, owners and experts to a list of selected assets in a single action.
+This article describes how to apply glossary terms, classifications, owners, and experts to multiple assets in bulk.
-## Add Assets to View selected list using search
+## Select assets to bulk edit
-1. Search on the data asset you want to add to the list for bulk editing.
+1. Use Azure Purview search or browse to discover assets you wish to edit.
-1. In the search result page, hover on the asset you want to add to the bulk edit **View selected** list to see a checkbox.
+1. In the search results, if you focus on an asset, a checkbox appears.
- :::image type="content" source="media/how-to-bulk-edit-assets/asset-checkbox.png" alt-text="Screenshot of the checkbox.":::
+ :::image type="content" source="media/how-to-bulk-edit-assets/asset-checkbox.png" alt-text="Screenshot of the bulk edit checkbox.":::
-1. Select the checkbox to add it to the bulk edit **View selected** list. Once added, you will see the selected items icon at the bottom of the page.
+1. You can also add an asset to the bulk edit list from the asset detail page by selecting **Select for bulk edit**.
- :::image type="content" source="media/how-to-bulk-edit-assets/selected-list.png" alt-text="Screenshot of the list.":::
-
-1. Repeat the above steps to add all the data assets to the list.
+ :::image type="content" source="media/how-to-bulk-edit-assets/asset-list.png" alt-text="Screenshot of the asset.":::
-## Add Assets to View selected list from asset detail page
+1. Select the checkbox to add it to the bulk edit list. You can see the selected assets by clicking **View selected**.
-You can also add an asset to the bulk edit list in the asset detail page. Select the checkbox at the top right corner to add the asset to the bulk edit list.
+   :::image type="content" source="media/how-to-bulk-edit-assets/selected-list.png" alt-text="Screenshot of the list.":::
- :::image type="content" source="media/how-to-bulk-edit-assets/asset-list.png" alt-text="Screenshot of the asset.":::
+## How to bulk edit assets
-## Bulk edit assets in the View selected list to add, replace, or remove glossary terms.
-
-1. When you're done with the identification of all the data assets which needs to be bulk-edited, Select **View selected** list from search results page or asset details page.
+1. When all assets have been chosen, select **View selected** to pull up the selected assets.
:::image type="content" source="media/how-to-bulk-edit-assets/view-list.png" alt-text="Screenshot of the view.":::
You can also add an asset to the bulk edit list in the asset detail page. Select
:::image type="content" source="media/how-to-bulk-edit-assets/remove-list.png" alt-text="Screenshot with the Deselect button highlighted.":::
-1. Select **Bulk edit** to add, remove or replace glossary terms for all the selected assets.
+1. Select **Bulk edit** to add, remove or replace an annotation for all the selected assets. You can edit the glossary terms, classifications, owners or experts of an asset.
:::image type="content" source="media/how-to-bulk-edit-assets/bulk-edit.png" alt-text="Screenshot with the bulk edit button highlighted.":::
-1. To add glossary terms, select Operation as **Add**. Select all the glossary terms you want to add in the New value. Select Apply when complete.
- - Add operation will append New value to the list of glossary terms already tagged to data assets.
+1. For each attribute selected, you can choose which edit operation to apply:
+ 1. **Add** will append a new annotation to the selected data assets.
+ 1. **Replace with** will replace all of the annotations for the selected data assets with the annotation selected.
+ 1. **Remove** will remove all annotations for selected data assets.
:::image type="content" source="media/how-to-bulk-edit-assets/add-list.png" alt-text="Screenshot of the add.":::
-1. To replace glossary terms select Operation as **Replace with**. Select all the glossary terms you want to replace in the New value. Select Apply when complete.
- - Replace operation will replace all the glossary terms for selected data assets with the terms selected in New value.
-
-1. To remove glossary terms select Operation as **Remove**. Select Apply when complete.
- - Remove operation will remove all the glossary terms for selected data assets.
-
- :::image type="content" source="media/how-to-bulk-edit-assets/replace-list.png" alt-text="Screenshot of the remove terms.":::
-
-1. Repeat the above for classifications, owners and experts.
-
- :::image type="content" source="media/how-to-bulk-edit-assets/all-list.png" alt-text="Screenshot of the classifications and contacts.":::
-
-1. Once complete close the bulk edit blade by selecting **Close** or **Remove all and close**. Close will not remove the selected assets whereas remove all and close will remove all the selected assets.
+1. Once complete, close the bulk edit blade by selecting **Close** or **Remove all and close**. **Close** won't clear your selection, whereas **Remove all and close** will deselect all the selected assets.
:::image type="content" source="media/how-to-bulk-edit-assets/close-list.png" alt-text="Screenshot of the close.":::
- > [!Important]
- > The recommended number of assets for bulk edit are 25. Selecting more than 25 might cause performance issues.
- > The **View Selected** box will be visible only if there is at least one asset selected.
+> [!Important]
+> The recommended number of assets for bulk edit is 25. Selecting more than 25 might cause performance issues.
+> The **View Selected** box will be visible only if there is at least one asset selected.
## Next steps
purview How To Create And Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-create-and-manage-collections.md
Previously updated : 11/04/2021 Last updated : 01/24/2022 # Create and manage collections in Azure Purview
-Collections in Azure Purview can be used to organize assets and sources by your business's flow, but they are also the tool used to manage access across Azure Purview. This guide will take you through the creation and management of these collections, as well as cover steps about how to register sources and add assets into your collections.
+Collections in Azure Purview can be used to organize assets and sources by your business's flow. They are also the tool used to manage access across Azure Purview. This guide will take you through the creation and management of these collections, as well as cover steps about how to register sources and add assets into your collections.
## Prerequisites
Collections in Azure Purview can be used to organize assets and sources by your
### Check permissions
-In order to create and manage collections in Azure Purview, you will need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](https://web.purview.azure.com/resource/). You can find the studio by going to your Azure Purview resource in the [Azure portal](https://portal.azure.com), and selecting the Open Azure Purview Studio tile on the overview page.
+In order to create and manage collections in Azure Purview, you will need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](https://web.purview.azure.com/resource/). You can find Studio in the overview page of the Azure Purview resource in [Azure portal](https://portal.azure.com).
1. Select Data Map > Collections from the left pane to open collection management page. :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
-1. Select your root collection. This is the top collection in your collection list and will have the same name as your Azure Purview resource. In our example below, it is called Contoso Azure Purview. Alternatively, if collections already exist you can select any collection where you want to create a subcollection.
+1. Select your root collection. This is the top collection in your collection list and will have the same name as your Azure Purview resource. In the following example, it's called Contoso Azure Purview. Alternatively, if collections already exist you can select any collection where you want to create a subcollection.
:::image type="content" source="./media/how-to-create-and-manage-collections/select-root-collection.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
In order to create and manage collections in Azure Purview, you will need to be
:::image type="content" source="./media/how-to-create-and-manage-collections/role-assignments.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
-1. To create a collection, you will need to be in the collection admin list under role assignments. If you created the Azure Purview resource, you should be listed as a collection admin under the root collection already. If not, you will need to contact the collection admin to grant you permission.
+1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Azure Purview resource, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
:::image type="content" source="./media/how-to-create-and-manage-collections/collection-admins.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
In order to create and manage collections in Azure Purview, you will need to be
### Create a collection
-You will need to be a collection admin in order to create a collection. If you are not sure, follow the [guide above](#check-permissions) to check permissions.
+You'll need to be a collection admin in order to create a collection. If you aren't sure, follow the [guide above](#check-permissions) to check permissions.
1. Select Data Map > Collections from the left pane to open collection management page.
You will need to be a collection admin in order to create a collection. If you a
### Edit a collection
-1. Select **Edit** either from the collection detail page, or from the collection's drop down menu.
+1. Select **Edit** either from the collection detail page, or from the collection's dropdown menu.
:::image type="content" source="./media/how-to-create-and-manage-collections/edit-collection.png" alt-text="Screenshot of Azure Purview studio window, open to collection window, with the 'edit' button highlighted both in the selected collection window, and under the ellipsis button next to the name of the collection." border="true":::
You will need to be a collection admin in order to create a collection. If you a
### Delete a collection
-You will need to be a collection admin in order to delete a collection. If you are not sure, follow the guide above to check permissions. Collection can be deleted only if no child collections, assets, data sources or scans are associated with it.
+You'll need to be a collection admin in order to delete a collection. If you aren't sure, follow the guide above to check permissions. A collection can be deleted only if no child collections, assets, data sources, or scans are associated with it.
1. Select **Delete** from the collection detail page.
You will need to be a collection admin in order to delete a collection. If you a
Since permissions are managed through collections in Azure Purview, it is important to understand the roles and what permissions they will give your users. A user granted permissions on a collection will have access to sources and assets associated with that collection, as well as inherit permissions to subcollections. Inheritance [can be restricted](#restrict-inheritance), but is allowed by default.
-The guide below will discuss the roles, how to manage them, and permissions inheritance.
+The following guide will discuss the roles, how to manage them, and permissions inheritance.
### Roles All assigned roles apply to sources, assets, and other objects within the collection where the role is applied.
-* **Collection admins** - can edit the collection, its details, and add subcollections. They can also add data curators, data readers, and other Azure Purview roles to a collection scope. Collection admins that are automatically inherited from a parent collection can't be removed.
-* **Data source admins** - can manage data sources and data scans.
-* **Data curators** - can perform create, read, modify, and delete actions on catalog data objects and establish relationships between objects.
-* **Data readers** - can access but not modify catalog data objects.
+* **Collection admins** can edit the collection, its details, and add subcollections. They can also add data curators, data readers, and other Azure Purview roles to a collection scope. Collection admins that are automatically inherited from a parent collection can't be removed.
+* **Data source admins** can manage data sources and data scans. They can also enter the policy management app to view and publish policies.
+* **Data curators** can perform create, read, modify, and delete actions on catalog data objects and establish relationships between objects. They can also enter the policy management app to view policies.
+* **Data readers** can access but not modify catalog data objects.
+* **Policy Authors** can enter the policy management app and create/edit policy statements.
### Add role assignments
purview How To Search Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-search-catalog.md
Previously updated : 10/01/2021 Last updated : 01/25/2022 # Search the Azure Purview Data Catalog After data is scanned and ingested into the Azure Purview data map, data consumers need to easily find the data needed for their analytics or governance workloads. Data discovery can be time consuming because you may not know where to find the data that you want. Even after finding the data, you may have doubts about whether you can trust the data and take a dependency on it.
-The goal of search in Azure Purview is to speed up the process of data discovery to quickly find the data that matters. This article outlines how to search the Azure Purview data catalog to quickly find the data you are looking for.
+The goal of search in Azure Purview is to speed up the process of data discovery to quickly find the data that matters. This article outlines how to search the Azure Purview data catalog to quickly find the data you're looking for.
## Search the catalog for assets
The search bar can be quickly accessed from the top bar of the Azure Purview Stu
:::image type="content" source="./media/how-to-search-catalog/purview-search-bar.png" alt-text="Screenshot showing the location of the Azure Purview search bar" border="true":::
-Once you click on the search bar, you will be presented with your search history and the assets recently accessed in the data catalog. This allows you to quickly pick up from previous data exploration that was already done.
+Once you click on the search bar, you'll be presented with your search history and the assets recently accessed in the data catalog. This allows you to quickly pick up from previous data exploration that was already done.
:::image type="content" source="./media/how-to-search-catalog/search-no-keywords.png" alt-text="Screenshot showing the search bar before any keywords have been entered" border="true":::
Enter keywords that help identify your asset such as its name, data type, cla
Once you enter your search, Azure Purview returns a list of data assets that the user is a data reader for and that match the keywords entered.
-The Azure Purview relevance engine sorts through all the matches and ranks them based on what it believes their usefulness is to a user. For example, a table that matches on multiple keywords that a data steward has assigned glossary terms and given a description is likely going to be more interesting to a data consumer than a folder which has been unannotated. A large set of factors determine an asset's relevance score and the Azure Purview search team is constantly tuning the relevance engine to ensure the top search results have value to you.
+The Azure Purview relevance engine sorts through all the matches and ranks them based on their estimated usefulness to a user. For example, a data consumer is likely more interested in a table curated by a data steward that matches on multiple keywords than in an unannotated folder. Many factors determine an asset's relevance score, and the Azure Purview search team is constantly tuning the relevance engine to ensure the top search results have value to you.
-If the top results don't include the assets you are looking for, you can use the facets on the left-hand side to filter down by business metadata such glossary terms, classifications and the containing collection. If you are interested in a particular data source type such as Azure Data Lake Storage Gen2 or Azure SQL Database, you can use the source type pill filter to narrow down your search.
+If the top results don't include the assets you're looking for, you can use the facets on the left-hand side to filter down by business metadata such as glossary terms, classifications, and the containing collection. If you're interested in a particular data source type such as Azure Data Lake Storage Gen2 or Azure SQL Database, you can use a pill filter to narrow down your search.
> [!NOTE]
-> Search will only return assets in collections you are a data reader or curator for. For more information, see [create and manage Collections](how-to-create-and-manage-collections.md).
+> Search will only return assets in collections you're a data reader or curator for. For more information, see [create and manage Collections](how-to-create-and-manage-collections.md).
:::image type="content" source="./media/how-to-search-catalog/search-results.png" alt-text="Screenshot showing the results of a search" border="true":::
For certain annotations, you can click on the ellipses to choose between an AND
:::image type="content" source="./media/how-to-search-catalog/search-and-or-choice.png" alt-text="Screenshot showing how to choose between an AND or OR condition" border="true":::
-Once you find the asset you are looking for, you can select it to view details such as schema, lineage, and a detailed classification list. To learn more about the asset details page, see [Manage catalog assets](catalog-asset-details.md).
+From the search results page, you can select an asset to view details such as schema, lineage, and classifications. To learn more about the asset details page, see [Manage catalog assets](catalog-asset-details.md).
:::image type="content" source="./media/how-to-search-catalog/search-view-asset.png" alt-text="Screenshot showing the asset details page" border="true":::
+## Searching Azure Purview in connected services
+
+Once you register your Azure Purview instance to an Azure Data Factory or an Azure Synapse Analytics workspace, you can search the Azure Purview data catalog directly from those services. To learn more, see [Discover data in ADF using Azure Purview](../data-factory/how-to-discover-explore-purview-data.md) and [Discover data in Synapse using Azure Purview](../synapse-analytics/catalog-and-governance/how-to-discover-connect-analyze-azure-purview.md).
+ ## Bulk edit search results
-If you are looking to make changes to multiple assets returned by search, Azure Purview lets you modify glossary terms, classifications, and contacts in bulk. To learn more, see the [bulk edit assets](how-to-bulk-edit-assets.md) guide.
+If you're looking to make changes to multiple assets returned by search, Azure Purview lets you modify glossary terms, classifications, and contacts in bulk. To learn more, see the [bulk edit assets](how-to-bulk-edit-assets.md) guide.
## Browse the data catalog
-While searching is great if you know what you are looking for, there are times where data consumers wish to explore the data available to them. The Azure Purview data catalog offers a browse experience that enables users to explore what data is available to them either by collection or through traversing the hierarchy of each data source in the catalog. For more information, see [browse the data catalog](how-to-browse-catalog.md).
+While searching is great if you know what you're looking for, there are times where data consumers wish to explore the data available to them. The Azure Purview data catalog offers a browse experience that enables users to explore what data is available to them either by collection or through traversing the hierarchy of each data source in the catalog. For more information, see [browse the data catalog](how-to-browse-catalog.md).
## Search query syntax
-All search queries consist of keywords and operators. A keyword is a something that would be part of an asset's properties. Potential keywords can be a classification, glossary term, asset description, or an asset name. A keyword can be just a part of the property you are looking to match to. Use keywords and the operators listed below to ensure Azure Purview returns the assets you are looking for.
+All search queries consist of keywords and operators. A keyword is something that would be part of an asset's properties. Potential keywords can be a classification, glossary term, asset description, or an asset name. A keyword can be just a part of the property you're looking to match to. Use keywords and operators to ensure Azure Purview returns the assets you're looking for.
Certain characters including spaces, dashes, and commas are interpreted as delimiters. Searching a string like `hive-database` is the same as searching two keywords `hive database`.
-Below are the operators that can be used to compose a search query. Operators can be combined as many times as need in a single query.
+The following table contains the operators that can be used to compose a search query. Operators can be combined as many times as needed in a single query.
| Operator | Definition | Example |
| -- | - | - |
| OR | Specifies that an asset must have at least one of the two keywords. Must be in all caps. A white space is also an OR operator. | The query `hive OR database` returns assets that contain 'hive' or 'database' or both. |
| AND | Specifies that an asset must have both keywords. Must be in all caps. | The query `hive AND database` returns assets that contain both 'hive' and 'database'. |
-| NOT | Specifies that an asset can't contain the keyword to the right of the NOT clause | The query `hive NOT database` returns assets that contain 'hive', but not 'database'. |
+| NOT | Specifies that an asset can't contain the keyword to the right of the NOT clause. Must be in all caps. | The query `hive NOT database` returns assets that contain 'hive', but not 'database'. |
| () | Groups a set of keywords and operators together. When combining multiple operators, parentheses specify the order of operations. | The query `hive AND (database OR warehouse)` returns assets that contain 'hive' and either 'database' or 'warehouse', or both. |
| "" | Specifies exact content in a phrase that the query must match to. | The query `"hive database"` returns assets that contain the phrase "hive database" in their properties. |
-| * | A wildcard that matches on one to many characters. Can't be the first character in a keyword. | The query `dat*` returns assets that have properties that start with 'dat' such as 'data' or 'database'. |
-| ? | A wildcard that matches on a single character. Can't be the first character in a keyword | The query `dat?` returns assets that have properties that start with 'dat' and are four letters such as 'date' or 'data'. |
+| field:keyword | Searches the keyword in a specific attribute of an asset. Field search is case insensitive and is limited to the following fields at this time: <ul><li>name</li><li>description</li><li>entityType</li><li>assetType</li><li>classification</li><li>term</li><li>contact</li></ul> | The query `description: German` returns all assets that contain the word "German" in the description.<br><br>The query `term:Customer` will return all assets with glossary terms that include "Customer". |
+
+> [!TIP]
+> Searching "*" will return all the assets in the catalog.
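The operator syntax above also applies when searching programmatically rather than through the Studio search bar. As a rough sketch (the endpoint path, API version, and helper function name below are illustrative assumptions, not the documented contract; check the current Purview REST reference before use), a keyword query with operators could be composed like this:

```python
# Hypothetical sketch of composing a catalog search request for the Purview
# discovery (search) REST API. Endpoint path and api-version are assumptions.

def build_search_request(account_name: str, keywords: str, limit: int = 50):
    """Return a (url, body) pair for a catalog keyword search."""
    url = (f"https://{account_name}.purview.azure.com"
           "/catalog/api/search/query?api-version=2021-05-01-preview")
    # Operators (AND, OR, NOT, parentheses, quotes, field:keyword) are passed
    # through verbatim inside the keywords string, so the table above applies.
    body = {"keywords": keywords, "limit": limit}
    return url, body

url, body = build_search_request("contoso-purview",
                                 "hive AND (database OR warehouse)")
```

Sending the request would additionally require an Azure AD bearer token with data reader permissions on the relevant collections.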
+
+### Known limitations
-> [!Note]
-> Always specify Boolean operators (**AND**, **OR**, **NOT**) in all caps. Otherwise, case doesn't matter, nor do extra spaces.
+* Searching for classifications only matches on the formal classification name. For example, the keywords "World Cities" don't match classification "MICROSOFT.GOVERNMENT.CITY_NAME".
+* Grouping isn't supported within a field search. Customers should use operators to connect field searches. For example, `name:(alice AND bob)` is invalid search syntax, but `name:alice AND name:bob` is supported.
## Next steps
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/overview.md
For more information, see our [introduction to Data Map](concept-elastic-data-ma
## Data Catalog

With the Azure Purview Data Catalog, business and technical users alike can quickly and easily find relevant data using a search experience with filters based on various lenses like glossary terms, classifications, sensitivity labels, and more. For subject matter experts, data stewards, and officers, the Azure Purview Data Catalog provides data curation features like business glossary management and the ability to automate tagging of data assets with glossary terms. Data consumers and producers can also visually trace the lineage of data assets, starting from the operational systems on-premises, through movement, transformation, and enrichment with various data storage and processing systems in the cloud, to consumption in an analytics system like Power BI.
-For more information, see our [introduction to search using Data Catalog](concept-search.md).
+For more information, see our [introduction to search using Data Catalog](how-to-search-catalog.md).
## Data Insights

With Azure Purview data insights, data officers and security officers can get a bird's-eye view and at a glance understand what data is actively scanned, where sensitive data is, and how it moves.
purview Reference Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/reference-purview-glossary.md
Last updated 08/16/2021
Below is a glossary of terminology used throughout Azure Purview.

## Annotation
-Information that is associated with data assets in Azure Azure Purview, for example, glossary terms and classifications. After they are applied, annotations can be used within Search to aid in the discovery of the data assets. 
+Information that is associated with data assets in Azure Purview, for example, glossary terms and classifications. After they are applied, annotations can be used within Search to aid in the discovery of the data assets. 
## Approved

The state given to any request that has been accepted as satisfactory by the designated individual or group who has authority to change the state of the request.

## Asset
-Any single object that is stored within an Azure Azure Purview data catalog.
+Any single object that is stored within an Azure Purview data catalog.
> [!NOTE]
> A single object in the catalog could potentially represent many objects in storage. For example, a resource set is an asset, but it's made up of many partition files in storage.

## Azure Information Protection
An individual or group in charge of managing a data asset.
## Pattern rule

A configuration that overrides how Azure Purview groups assets as resource sets and displays them within the catalog.

## Azure Purview instance
-A single Azure Azure Purview resource. 
+A single Azure Purview resource. 
## Registered source

A source that has been added to an Azure Purview instance and is now managed as a part of the Data catalog.

## Related terms
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-adls-gen2.md
Previously updated : 11/10/2021 Last updated : 01/24/2022 # Connect to Azure Data Lake Gen2 in Azure Purview
It is important to give your service principal the permission to scan the ADLS G
[!INCLUDE [view and manage scans](includes/view-and-manage-scans.md)] ## Access policy+
+Follow these configuration guides to:
+- [Configure from Azure Purview data owner access policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
+- [Configure from Azure Purview data owner access policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
-Follow this configuration guide to [enable access policies on an Azure Storage account](./how-to-access-policies-storage.md)
## Next steps
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-blob-storage-source.md
Previously updated : 11/10/2021 Last updated : 01/24/2022
Scans can be managed or run again on completion
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-full-inc-scan.png" alt-text="full or incremental scan"::: ## Access policy+
+Follow these configuration guides to:
+- [Configure from Azure Purview data owner access policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
+- [Configure from Azure Purview data owner access policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
-Follow this configuration guide to [enable access policies on an Azure Storage account](./how-to-access-policies-storage.md)
## Next steps
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-owner-policies-resource-group.md
+
+ Title: Access provisioning by data owner to resource groups or subscriptions
+description: Step-by-step guide showing how a data owner can create policies on resource groups or subscriptions.
+++++ Last updated : 1/25/2022+++
+# Access provisioning by data owner to resource groups or subscriptions (preview)
+
+This guide describes how a data owner can use Azure Purview to enable access to ALL data sources in a subscription or a resource group. This can be achieved through a single policy statement that covers all existing data sources, as well as data sources created afterwards. However, at this point, only the following data sources are supported:
+- Blob storage
+- Azure Data Lake Storage (ADLS) Gen2
+
+> [!Note]
+> These capabilities are currently in preview. This preview version is provided without a service level agreement, and should not be used for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure
+Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
++
+## Configuration
+
+### Register the subscription or resource group in Azure Purview
+The subscription or resource group needs to be registered with Azure Purview to later define access policies. You can follow this guide:
+
+- [Register multiple sources - Azure Purview](register-scan-azure-multiple-sources.md)
+
+Enable the resource group or subscription for access policies in Azure Purview by setting the **Data use governance** toggle to enable, as shown in the picture.
+
+![Image shows how to register a data source for policy.](./media/tutorial-access-policies-resource-group/register-resource-group-for-policy.png)
++
+## Policy authoring
+
+## Additional information
+
+### Limits
+The limit for Azure Purview policies that can be enforced by Storage accounts is 100 MB per subscription, which roughly equates to 5000 policies.
+
+>[!Important]
+> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in the data source.
+
+## Next steps
+Check the blog and demo related to the capabilities mentioned in this how-to guide
+
+* [What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
+* [Demo of access policy for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
+* [Enable Azure Purview data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-owner-policies-storage.md
+
+ Title: Access provisioning by data owner to Azure Storage datasets
+description: Step-by-step guide on how to integrate Azure Storage with Azure Purview to enable data owners to create access policies.
+++++ Last updated : 1/25/2022+++
+# Access provisioning by data owner to Azure Storage datasets (preview)
+
+This guide describes how a data owner can use Azure Purview to enable access to datasets in Azure Storage. At this point, only the following data sources are supported:
+- Blob storage
+- Azure Data Lake Storage (ADLS) Gen2
+
+> [!Note]
+> These capabilities are currently in preview. This preview version is provided without a service level agreement, and should not be used for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure
+Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
++
+## Configuration
+
+### Register and scan data sources in Azure Purview
+Register and scan each data source with Azure Purview to later define access policies. You can follow these guides:
+
+- [Register and scan Azure Storage Blob - Azure Purview](register-scan-azure-blob-storage-source.md)
+
+- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Azure Purview](register-scan-adls-gen2.md)
+
+Enable the data source for access policies in Azure Purview by setting the **Data use governance** toggle to enable, as shown in the picture.
+
+![Image shows how to register a data source for policy.](./media/how-to-access-policies-storage/register-data-source-for-policy-storage.png)
+++
+## Policy authoring
+
+## Additional information
+>[!Important]
+> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s).
+
+- Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the application that executes the access will need to provide a fully qualified name (i.e., a direct absolute path) to the data object. The following documents show examples of how to do that:
+ - [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster)
+ - [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob)
+ - Creating a policy at Storage account level will enable the Subjects to access system containers, e.g., *$logs*. If this is undesired, first scan the data source and then create the policy at container or sub-container level.
+- The limit for Azure Purview policies that can be enforced by Storage accounts is 100 MB per subscription, which roughly equates to 5000 policies.
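To illustrate the fully qualified name requirement above, here is a small sketch that builds an *abfs* URI of the form ADLS Gen2 clients accept. The account, container, and path values are placeholders, and the helper function is hypothetical:

```python
# Sketch: constructing the fully qualified (absolute) path a client must
# supply when access is granted below container level. The abfs URI scheme
# is the one used by HDInsight/ADLS Gen2 drivers; values are placeholders.

def abfs_uri(account: str, container: str, path: str) -> str:
    return f"abfs://{container}@{account}.dfs.core.windows.net/{path.lstrip('/')}"

uri = abfs_uri("contosoaccount", "data", "/sales/2022/january.csv")
# -> "abfs://data@contosoaccount.dfs.core.windows.net/sales/2022/january.csv"
```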
+
+### Known issues
+
+> [!Warning]
+> **Known issues** related to Policy creation
+> - Do not create policy statements based on Azure Purview resource sets. Even if displayed in the Azure Purview policy authoring UI, they are not yet enforced. Learn more about [resource sets](concept-resource-sets.md).
+> - Once a subscription is disabled for *Data use governance*, any underlying assets that are enabled for *Data use governance* will be disabled, which is the expected behavior. However, policy statements based on those assets will still be allowed after that.
+
+### Policy action mapping
+
+This section contains a reference of how actions in Azure Purview data policies map to specific actions in Azure Storage.
+
+| **Azure Purview policy action** | **Data source specific actions** |
+||--|
+|||
+| *Read* |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/read |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
+|||
+| *Modify* |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/read |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/write |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/delete |
+|||
++
+## Next steps
+Check the blog and demo related to the capabilities mentioned in this how-to guide
+
+* [What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
+* [Demo of access policy for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
+* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
+
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-api-keys.md
Previously updated : 06/25/2021 Last updated : 01/26/2022 # Use API keys for Azure Cognitive Search authentication
-Cognitive Search uses API keys as its primary authentication methodology. For inbound requests to the search services, such as requests that create or query an index, API keys are the only authentication option you have. A few outbound request scenarios, particularly those involving indexers, can use Azure Active Directory identities and roles.
-
-API keys are generated when the service created. Passing a valid API key on the request is considered proof that the request is from an authorized client. There are two kinds of keys. *Admin keys* convey write permissions on the service and also grant rights to query system information. *Query keys* convey read permissions and can be used by apps to query a specific index.
+Cognitive Search uses key-based authentication as its primary authentication methodology. For inbound requests to a search service endpoint, such as requests that create or query an index, API keys are the only generally available authentication option you have. A few outbound request scenarios, particularly those involving indexers, can use Azure Active Directory identities and roles.
> [!NOTE]
-> Authorization for data plane operations using Azure role-based access control (RBAC) is now in preview. You can use this preview capability to supplement or replace API keys [with Azure roles for Search](search-security-rbac.md).
+> [Authorization for data plane operations](search-security-rbac.md) using Azure role-based access control (RBAC) is now in preview. You can use this preview capability to supplement or replace API keys on search index requests.
## Using API keys in search
+API keys are generated when the service is created. Passing a valid API key on the request is considered proof that the request is from an authorized client. There are two kinds of keys. *Admin keys* convey write permissions on the service and also grant rights to query system information. *Query keys* convey read permissions and can be used by apps to query a specific index.
+ When connecting to a search service, all requests must include an API key that was generated specifically for your service. + In [REST solutions](search-get-started-rest.md), the API key is typically specified in a request header
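As a minimal sketch of the REST pattern described above (the service name, index name, and key value are placeholders, and the helper function is illustrative; the API version shown is one GA version and may not be the latest), a query request with the `api-key` header could be assembled like this:

```python
# Illustrative sketch: building a search query request that authenticates
# with the "api-key" request header. Names and key values are placeholders.

def build_query_request(service: str, index: str, search_text: str):
    """Return (url, headers, body) for a REST query against an index."""
    url = (f"https://{service}.search.windows.net"
           f"/indexes/{index}/docs/search?api-version=2020-06-30")
    headers = {
        "Content-Type": "application/json",
        # A query key suffices for read-only queries; never hard-code real
        # keys in application source.
        "api-key": "<your-query-or-admin-key>",
    }
    body = {"search": search_text}
    return url, headers, body

url, headers, body = build_query_request("my-service", "hotels", "wifi")
```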
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-manage-encryption-keys.md
description: Supplement server-side encryption over indexes and synonym maps in Azure Cognitive Search using keys that you create and manage in Azure Key Vault. --++ Previously updated : 07/02/2021 Last updated : 01/25/2022
Azure Cognitive Search automatically encrypts content with [service-managed keys](../security/fundamentals/encryption-atrest.md#azure-encryption-at-rest-components). If more protection is needed, you can supplement default encryption with an additional encryption layer using keys that you create and manage in Azure Key Vault. Objects that can be encrypted include indexes, synonym lists, indexers, data sources, and skillsets.
-This article walks you through the steps of setting up customer-managed key encryption. Here are some points to keep in mind:
+This article walks you through the steps of setting up customer-managed key (CMK) encryption. Here are some points to keep in mind:
-+ Customer-managed key encryption depends on [Azure Key Vault](../key-vault/general/overview.md). You can create your own encryption keys and store them in a key vault, or you can use Azure Key Vault APIs to generate encryption keys.
++ CMK encryption depends on [Azure Key Vault](../key-vault/general/overview.md). You can create your own encryption keys and store them in a key vault, or you can use Azure Key Vault APIs to generate encryption keys.
-+ Encryption with customer-managed keys is enabled when objects are created, on a per object basis. You cannot encrypt content that already exists.
++ CMK encryption occurs when an object is created. You can't encrypt objects that already exist.
-Encryption is computationally expensive to decrypt so only sensitive content is encrypted. This includes all content within indexes and synonym lists. For indexers, data sources, and skillsets, only those fields that store connection strings, descriptions, keys, and user inputs are encrypted. For example, skillsets have Cognitive Services keys, and some skills accept user inputs, such as custom entities. Keys and user inputs into skills are encrypted.
+Because decryption is computationally expensive, only sensitive content is encrypted. This includes all content within indexes and synonym lists. For indexers, data sources, and skillsets, only those fields that store connection strings, descriptions, keys, and user inputs are encrypted. For example, skillsets have Cognitive Services keys, and some skills accept user inputs, such as custom entities. In both cases, keys and user inputs into skills are encrypted.
## Double encryption
-Double encryption is an extension of customer-managed key (CMK) encryption. CMK encryption applies to long-term storage that is written to a data disk. The term *double encryption* refers to the additional encryption of short-term storage (of content written to temporary disks). There is no configuration required. When you apply CMK to objects, double encryption is invoked automatically.
+Double encryption is an extension of customer-managed key (CMK) encryption. CMK encryption applies to long-term storage that is written to a data disk. The term *double encryption* refers to the additional encryption of short-term storage (of content written to temporary disks). No configuration is required. When you apply CMK to objects, double encryption is invoked automatically.
-Although double encryption is available in all regions, support was rolled out in two phases. The first roll out was in August 2020 and included the five regions listed below. The second roll out in May 2021 extended double encryption to all remaining regions. If you are using CMK on an older service and want double encryption, you will need to create a new search service in your region of choice.
+Although double encryption is available in all regions, support was rolled out in two phases. The first rollout was in August 2020 and included the five regions listed below. The second rollout in May 2021 extended double encryption to all remaining regions. If you're using CMK on an older service and want double encryption, you'll need to create a new search service in your region of choice.
| Region | Service creation date | |--|--|
Although double encryption is available in all regions, support was rolled out i
The following tools and services are used in this scenario. + [Azure Cognitive Search](search-create-service-portal.md) on a [billable tier](search-sku-tier.md#tier-descriptions) (Basic or above, in any region).
-+ [Azure Key Vault](../key-vault/general/overview.md), you can create key vault using [Azure portal](../key-vault//general/quick-create-portal.md), [Azure CLI](../key-vault//general/quick-create-cli.md), or [Azure PowerShell](../key-vault//general/quick-create-powershell.md). in the same subscription as Azure Cognitive Search. The key vault must have **soft-delete** and **purge protection** enabled.
+++ [Azure Key Vault](../key-vault/general/overview.md), you can create a key vault using [Azure portal](../key-vault//general/quick-create-portal.md), [Azure CLI](../key-vault//general/quick-create-cli.md), or [Azure PowerShell](../key-vault//general/quick-create-powershell.md). Create the resource in the same subscription as Azure Cognitive Search. The key vault must have **soft-delete** and **purge protection** enabled.+ + [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md). If you don't have one, [set up a new tenant](../active-directory/develop/quickstart-create-new-tenant.md).
-You should have a search application that can create the encrypted object. Into this code, you'll reference a key vault key and Active Directory registration information. This code could be a working app, or prototype code such as the [C# code sample DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK).
+You should have a search client that can create the encrypted object. Into this code, you'll reference a key vault key and Active Directory registration information. This code could be a working app, or prototype code such as the [C# code sample DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK).
> [!TIP]
-> You can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or [Azure PowerShell](search-get-started-powershell.md), to call REST APIs that create indexes and synonym maps that include an encryption key parameter. You can also use Azure SDKs. Portal support for adding a key to indexes or synonym maps is not supported.
+> You can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or [Azure PowerShell](search-get-started-powershell.md) to call REST APIs that create indexes and synonym maps that include an encryption key parameter. You can also use Azure SDKs. Adding a key to indexes or synonym maps isn't supported in the portal.
## Key Vault tips
-If you are new to Azure Key Vault, review this quickstart to learn about basic tasks: [Set and retrieve a secret from Azure Key Vault using PowerShell](../key-vault/secrets/quick-create-powershell.md). Here are some tips for using Key Vault:
+If you're new to Azure Key Vault, review this quickstart to learn about basic tasks: [Set and retrieve a secret from Azure Key Vault using PowerShell](../key-vault/secrets/quick-create-powershell.md). Here are some tips for using Key Vault:
-+ Use as many key vaults as you need. Managed keys can be in different key vaults. A search service can have multiple encrypted objects, each one encrypted with a different customer-managed encryption keys, stored in different key vaults.
++ Use as many key vaults as you need. Managed keys can be in different key vaults. A search service can have multiple encrypted objects, each one encrypted with a different customer-managed encryption key, stored in different key vaults.
+
++ [Enable logging](../key-vault/general/logging.md) on Key Vault so that you can monitor key usage.
-+ Remember to follow strict procedures during routine rotation of key vault keys and Active Directory application secrets and registration. Always update all [encrypted content](search-security-get-encryption-keys.md) to use new secrets and keys before deleting the old ones. If you miss this step, your content cannot be decrypted.
++ Remember to follow strict procedures during routine rotation of key vault keys and Active Directory application secrets and registration. Always update all [encrypted content](search-security-get-encryption-keys.md) to use new secrets and keys before deleting the old ones. If you miss this step, your content can't be decrypted.
+
+## 1 - Enable purge protection
-As a first step, make sure [soft-delete](../key-vault/general/soft-delete-overview.md) and [purge protection](../key-vault/general/soft-delete-overview.md#purge-protection) are enabled on the key vault. Due to the nature of encryption with customer-managed keys, no one can retrieve your data if your Azure Key vault key is deleted.
+As a first step, make sure [soft-delete](../key-vault/general/soft-delete-overview.md) and [purge protection](../key-vault/general/soft-delete-overview.md#purge-protection) are enabled on the key vault. Due to the nature of encryption with customer-managed keys, no one can retrieve your data if your Azure Key Vault key is deleted.
-To prevent data loss caused by accidental Key Vault key deletions, soft-delete and purge protection must be enabled on the key vault. Soft-delete is enabled by default, so you will only encounter issues if you purposely disabled it. Purge protection is not enabled by default, but it is required for customer-managed key encryption in Cognitive Search.
+To prevent data loss caused by accidental Key Vault key deletions, soft-delete and purge protection must be enabled on the key vault. Soft-delete is enabled by default, so you'll only encounter issues if you purposely disabled it. Purge protection isn't enabled by default, but it is required for customer-managed key encryption in Cognitive Search.
You can set both properties using the portal, PowerShell, or Azure CLI commands.
-### Using Azure portal
+### [**Azure portal**](#tab/portal-pp)
1. [Sign in to Azure portal](https://portal.azure.com) and open your key vault overview page.

1. On the **Overview** page under **Essentials**, enable **Soft-delete** and **Purge protection**.
-### Using PowerShell
+### [**PowerShell**](#tab/ps-pp)
1. Run `Connect-AzAccount` to set up your Azure credentials.
You can set both properties using the portal, PowerShell, or Azure CLI commands.
Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties
```
-### Using Azure CLI
+### [**Azure CLI**](#tab/cli-pp)
+ If you have an [installation of Azure CLI](/cli/azure/install-azure-cli), you can run the following command to enable the required properties.
You can set both properties using the portal, PowerShell, or Azure CLI commands.
az keyvault update -n <vault_name> -g <resource_group> --enable-soft-delete --enable-purge-protection
```
+
+
## 2 - Create a key in Key Vault
-Skip key generation if you already have a key in Azure Key Vault that you want to use, but collect the Key Identifier. You will need this information when creating an encrypted object.
+Skip key generation if you already have a key in Azure Key Vault that you want to use, but collect the key identifier. You will need this information when creating an encrypted object.
1. [Sign in to Azure portal](https://portal.azure.com) and open your key vault overview page.
Skip key generation if you already have a key in Azure Key Vault that you want t
1. Select **Create** to start the deployment.
-1. Make a note of the Key Identifier ΓÇô it's composed of the **key value Uri**, the **key name**, and the **key version**. You will need the identifier to define an encrypted index in Azure Cognitive Search.
+1. Select the key, select the current version, and then make a note of the key identifier. It's composed of the **key vault URI**, the **key name**, and the **key version**. You will need the identifier to define an encrypted index in Azure Cognitive Search.
+
+ :::image type="content" source="media/search-manage-encryption-keys/cmk-key-identifier.png" alt-text="Create a new key vault key" border="true":::
+
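The key identifier bundles the three values that an "encryptionKey" definition expects. A minimal Python sketch, using hypothetical vault, key, and version values, showing how the identifier URI decomposes into those properties:

```python
# Sketch only: splitting an Azure Key Vault key identifier into the three
# "encryptionKey" properties used by Azure Cognitive Search.
# The vault name, key name, and version below are hypothetical examples.
from urllib.parse import urlparse

def split_key_identifier(key_id: str) -> dict:
    parsed = urlparse(key_id)
    # The path of a key identifier looks like /keys/<key name>/<key version>
    _, _keys, name, version = parsed.path.split("/")
    return {
        "keyVaultUri": f"{parsed.scheme}://{parsed.netloc}",
        "keyVaultKeyName": name,
        "keyVaultKeyVersion": version,
    }

props = split_key_identifier(
    "https://demokeyvault.vault.azure.net/keys/myEncryptionKey/eaab6a663d59439ebb95ce2fe7d5f660"
)
```

If you copied the full identifier from the portal, this split gives you each property value without retyping them.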
+## 3 - Create a security principal
+
+You have several options for accessing the encryption key at run time. The simplest approach is to retrieve the key using the managed identity and permissions of your search service. You can use either a system-managed or user-managed identity. Doing so allows you to omit the steps for application registration and application secrets, and simplifies the encryption key definition.
+
+Alternatively, you can create and register an Azure Active Directory application. The search service will provide the application ID on requests.
+
+A managed identity enables your search service to authenticate to Azure Key Vault without storing credentials (ApplicationID or ApplicationSecret) in code. The lifecycle of this type of managed identity is tied to the lifecycle of your search service, which can only have one managed identity. For more information about how managed identities work, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+### [**System-managed identity**](#tab/managed-id-sys)
+
+1. Make your search service a trusted service.
+
+ ![Turn on system assigned managed identity](./media/search-managed-identities/turn-on-system-assigned-identity.png "Turn on system assigned managed identity")
+
+Conditions that will prevent you from adopting this approach include:
+
++ You can't directly grant your search service access permissions to the key vault (for example, if the search service is in a different Active Directory tenant than the Azure Key Vault).
+
++ A single search service is required to host multiple encrypted indexes or synonym maps, each using a different key from a different key vault, where each key vault must use **a different identity** for authentication. Because a search service can only have one managed identity, a requirement for multiple identities will disqualify the simplified approach for your scenario.
+
+### [**User-managed identity (preview)**](#tab/managed-id-user)
+
+> [!IMPORTANT]
+> User-managed identity support is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> The REST API version 2021-04-30-Preview and [Management REST API 2021-04-01-Preview](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) provide this feature.
+
+1. [Sign into Azure portal](https://portal.azure.com/).
+
+1. Select **+ Create a new resource**.
+
+1. In the "Search services and marketplace" search bar, search for "User Assigned Managed Identity" and then select **Create**.
+
+1. Give the identity a descriptive name.
+
+1. Next, assign the user-managed identity to the search service. This can be done using the [2021-04-01-preview](/rest/api/searchmanagement/management-api-versions) management API.
+
+ The identity property takes a type and one or more fully qualified user-assigned identities:
+
+ * **type** is the type of identity used for the resource. The type 'SystemAssigned, UserAssigned' includes both an identity created by the system and a set of user assigned identities. The type 'None' will remove all identities from the service.
+ * **userAssignedIdentities** includes the details of the user-managed identity.
+ * User-managed identity format:
+ * /subscriptions/**subscription ID**/resourcegroups/**resource group name**/providers/Microsoft.ManagedIdentity/userAssignedIdentities/**managed identity name**
+
+ Example of how to assign a user-managed identity to a search service:
+
+ ```http
+ PUT https://management.azure.com/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Search/searchServices/[search service name]?api-version=2021-04-01-preview
+ Content-Type: application/json
+
+ {
+ "location": "[region]",
+ "sku": {
+ "name": "[sku]"
+ },
+ "properties": {
+ "replicaCount": [replica count],
+ "partitionCount": [partition count],
+ "hostingMode": "default"
+ },
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/[subscription ID]/resourcegroups/[resource group name]/providers/Microsoft.ManagedIdentity/userAssignedIdentities/[managed identity name]": {}
+ }
+ }
+ }
+ ```
+
+1. Use a simplified construction of the "encryptionKey" that omits the Active Directory properties and add an identity property. Make sure to use the 2021-04-30-preview REST API version.
- :::image type="content" source="media/search-manage-encryption-keys/cmk-key-identifier.png" alt-text="Create a new key vault key":::
+ ```json
+ {
+ "encryptionKey": {
+ "keyVaultUri": "https://[key vault name].vault.azure.net",
+ "keyVaultKeyName": "[key vault key name]",
+ "keyVaultKeyVersion": "[key vault key version]",
+ "identity" : {
+ "@odata.type": "#Microsoft.Azure.Search.DataUserAssignedIdentity",
+ "userAssignedIdentity" : "/subscriptions/[subscription ID]/resourceGroups/[resource group name]/providers/Microsoft.ManagedIdentity/userAssignedIdentities/[managed identity name]"
+ }
+ }
+ }
+ ```
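The fully qualified identity path used in both snippets above follows a fixed pattern. As a quick sketch (the subscription, resource group, and identity names are hypothetical), it can be assembled like this:

```python
# Sketch only: assembling the fully qualified user-assigned identity ID from
# its parts, following the format shown above. All names are hypothetical.
def identity_resource_id(subscription_id: str, resource_group: str, identity_name: str) -> str:
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourcegroups/{resource_group}"
        f"/providers/Microsoft.ManagedIdentity"
        f"/userAssignedIdentities/{identity_name}"
    )

rid = identity_resource_id(
    "00000000-0000-0000-0000-000000000000", "demo-rg", "demo-search-identity"
)
```

The same string goes into both the service's "userAssignedIdentities" map and the "userAssignedIdentity" property of the encryption key definition.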
-## 3 - Register an app
+### [**Register an app**](#tab/register-app)
1. In [Azure portal](https://portal.azure.com), find the Azure Active Directory resource for your subscription.
Skip key generation if you already have a key in Azure Key Vault that you want t
1. Once the app registration is created, copy the Application ID. You will need to provide this string to your application.
- If you are stepping through the [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK), paste this value into the **appsettings.json** file.
+ If you're stepping through the [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK), paste this value into the **appsettings.json** file.
:::image type="content" source="media/search-manage-encryption-keys/cmk-application-id.png" alt-text="Application ID in the Essentials section":::
Skip key generation if you already have a key in Azure Key Vault that you want t
1. Select **New client secret**. Give the secret a display name and select **Add**.
-1. Copy the application secret. If you are stepping through the sample, paste this value into the **appsettings.json** file.
+1. Copy the application secret. If you're stepping through the sample, paste this value into the **appsettings.json** file.
:::image type="content" source="media/search-manage-encryption-keys/cmk-application-secret.png" alt-text="Application secret":::
+
+
## 4 - Grant permissions
-In this step, you will create an access policy in Key Vault. This policy gives the application you registered with Active Directory permission to use your customer-managed key.
+In this step, you'll create an access policy in Key Vault. This policy gives the application you registered with Active Directory permission to use your customer-managed key.
-Access permissions could be revoked at any given time. Once revoked, any search service index or synonym map that uses that key vault will become unusable. Restoring Key vault access permissions at a later time will restore index\synonym map access. For more information, see [Secure access to a key vault](../key-vault/general/security-features.md).
+Access permissions can be revoked at any time. Once revoked, any search service index or synonym map that uses that key vault becomes unusable. Restoring key vault access permissions at a later time restores index and synonym map access. For more information, see [Secure access to a key vault](../key-vault/general/security-features.md).
1. Still in the Azure portal, open your key vault **Overview** page.
-1. Select the **Access policies** on the left, and select **+ Add Access Policy**.
-
- :::image type="content" source="media/search-manage-encryption-keys/cmk-add-access-policy.png" alt-text="Add new key vault access policy":::
+1. Select the **Access policies** on the left, and select **+ Create**.
-1. Choose **Select principal** and select the application you registered with Active Directory. You can search for it by name.
+ :::image type="content" source="media/search-manage-encryption-keys/cmk-add-access-policy.png" alt-text="Create an access policy." border="true":::
- :::image type="content" source="media/search-manage-encryption-keys/cmk-access-policy-permissions.png" alt-text="Select key vault access policy principal":::
+1. On the **Permissions** page, select *Get* for **Key permissions**, **Secret permissions**, and **Certificate permissions**. Select *Unwrap Key* and *Wrap Key* for cryptographic operations on the key.
-1. In **Key permissions**, choose *Get*, *Unwrap Key* and *Wrap Key*.
+ :::image type="content" source="media/search-manage-encryption-keys/cmk-access-policy-permissions.png" alt-text="Select permissions in the Permissions page." border="true":::
-1. In **Secret Permissions**, select *Get*.
+1. Select **Next**.
-1. In **Certificate Permissions**, select *Get*.
+1. On the **Principal** page, find and select the security principal used by the search service to access the encryption key. This will either be the system-managed or user-managed identity of the search service, or the registered application.
-1. Select **Add** and then **Save**.
+1. Select **Next** and **Create**.
> [!Important]
-> Encrypted content in Azure Cognitive Search is configured to use a specific Azure Key Vault key with a specific **version**. If you change the key or version, the index or synonym map must be updated to use the new key\version **before** deleting the previous key\version.
-> Failing to do so will render the index or synonym map unusable, at you won't be able to decrypt the content once key access is lost.
+> Encrypted content in Azure Cognitive Search is configured to use a specific Azure Key Vault key with a specific **version**. If you change the key or version, the index or synonym map must be updated to use it **before** you delete the previous one.
+> Failing to do so will render the index or synonym map unusable. You won't be able to decrypt the content if the key is lost.
<a name="encrypt-content"></a>

## 5 - Encrypt content
-To add a customer-managed key on an index, synonym map, indexer, data source, or skillset, use the [Search REST API](/rest/api/searchservice/) or an Azure SDK to create an object that has encryption enabled. The portal does not expose synonym maps or encryption properties.
+Encryption keys are added when you create an object. To add a customer-managed key on an index, synonym map, indexer, data source, or skillset, use the [Search REST API](/rest/api/searchservice/) or an Azure SDK to create an object that has encryption enabled. The portal doesn't support setting encryption properties during object creation.
1. Call the Create APIs to specify the **encryptionKey** property:
To add a customer-managed key on an index, synonym map, indexer, data source, or
+ [Create Data Source](/rest/api/searchservice/create-data-source) + [Create Skillset](/rest/api/searchservice/create-skillset).
-1. Insert the encryptionKey construct into the object definition. This property is a first-level property, on the same level as name and description. The [examples below](#rest-examples) show property placement. If you are using the same key vault, key, and version, you can paste in the same encryptionKey construct into each object for which you are enabling encryption.
+1. Insert the encryptionKey construct into the object definition. This property is a first-level property, on the same level as name and description. The [REST examples below](#rest-examples) show property placement. If you're using the same vault, key, and version, you can paste in the same "encryptionKey" construct into each object definition.
- The following JSON example shows an encryptionKey, with placeholder values for Azure Key Vault and application registration in Azure Active Directory:
+ The first example shows an "encryptionKey" for a search service that connects using a managed identity:
+
+ ```json
+ {
+ "encryptionKey": {
+ "keyVaultUri": "https://demokeyvault.vault.azure.net",
+ "keyVaultKeyName": "myEncryptionKey",
+ "keyVaultKeyVersion": "eaab6a663d59439ebb95ce2fe7d5f660"
+ }
+ }
+ ```
+
+ The second example includes "accessCredentials", necessary if you registered an application in Azure AD:
```json
{
To add a customer-managed key on an index, synonym map, indexer, data source, or
Once you create the encrypted object on the search service, you can use it as you would any other object of its type. Encryption is transparent to the user and developer. > [!Note]
-> None of these key vault details are considered secret and could be easily retrieved by browsing to the relevant Azure Key Vault key page in Azure portal.
+> None of these key vault details are considered secret and could be easily retrieved by browsing to the relevant Azure Key Vault page in Azure portal.
## REST examples
-This section shows the JSON for several objects so that you can see where to locate `encryptionKey` in an object definition.
+This section shows the JSON for several objects so that you can see where to locate "encryptionKey" in an object definition.
### Index encryption
You can now send the index creation request, and then start using the index norm
### Synonym map encryption
-Create an encrypted synonym map using the [Create Synonym Map Azure Cognitive Search REST API](/rest/api/searchservice/create-synonym-map). Use the `encryptionKey` property to specify which encryption key to use.
+Create an encrypted synonym map using the [Create Synonym Map Azure Cognitive Search REST API](/rest/api/searchservice/create-synonym-map). Use the "encryptionKey" property to specify which encryption key to use.
```json
{
You can now send the synonym map creation request, and then start using it norma
### Data source encryption
-Create an encrypted data source using the [Create Data Source (REST API)](/rest/api/searchservice/create-data-source). Use the `encryptionKey` property to specify which encryption key to use.
+Create an encrypted data source using the [Create Data Source (REST API)](/rest/api/searchservice/create-data-source). Use the "encryptionKey" property to specify which encryption key to use.
```json
{
You can now send the data source creation request, and then start using it norma
### Skillset encryption
-Create an encrypted skillset using the [Create Skillset REST API](/rest/api/searchservice/create-skillset). Use the `encryptionKey` property to specify which encryption key to use.
+Create an encrypted skillset using the [Create Skillset REST API](/rest/api/searchservice/create-skillset). Use the "encryptionKey" property to specify which encryption key to use.
```json
{
You can now send the skillset creation request, and then start using it normally
### Indexer encryption
-Create an encrypted indexer using the [Create Indexer REST API](/rest/api/searchservice/create-indexer). Use the `encryptionKey` property to specify which encryption key to use.
+Create an encrypted indexer using the [Create Indexer REST API](/rest/api/searchservice/create-indexer). Use the "encryptionKey" property to specify which encryption key to use.
```json
{
Create an encrypted indexer using the [Create Indexer REST API](/rest/api/search
You can now send the indexer creation request, and then start using it normally. >[!Important]
-> While `encryptionKey` cannot be added to existing search indexes or synonym maps, it may be updated by providing different values for any of the three key vault details (for example, updating the key version).
+> While "encryptionKey" can't be added to existing search indexes or synonym maps, it may be updated by providing different values for any of the three key vault details (for example, updating the key version).
> When changing to a new Key Vault key or a new key version, any search index or synonym map that uses the key must first be updated to use the new key\version **before** deleting the previous key\version.
-> Failing to do so will render the index or synonym map unusable, as it won't be able to decrypt the content once key access is lost. Although restoring Key vault access permissions at a later time will restore content access.
-
-## Simpler alternative: Trusted service
-
-Depending on tenant configuration and authentication requirements, you might be able to implement a simpler approach for accessing a key vault key. Instead of creating and using an Active Directory application, you can either make a search service a trusted service by enabling a system-managed identity for it or assign a user-assigned managed identity to your search service. You would then either use the trusted search service or user-assigned managed identity as a security principle, rather than an AD-registered application, to access the key vault key.
-
-Both of these approaches allow you to omit the steps for application registration and application secrets, and simplifies the encryption key definition.
-
-### System-assigned managed identity
-
-In general, a managed identity enables your search service to authenticate to Azure Key Vault without storing credentials (ApplicationID or ApplicationSecret) in code. The lifecycle of this type of managed identity is tied to the lifecycle of your search service, which can only have one managed identity. For more information about how managed identities work, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
-
-1. Make your search service a trusted service.
- ![Turn on system assigned managed identity](./media/search-managed-identities/turn-on-system-assigned-identity.png "Turn on system assigned managed identity")
-
-1. When setting up an access policy in Azure Key Vault, choose the trusted search service as the principle (instead of the AD-registered application). Assign the same permissions (multiple GETs, WRAP, UNWRAP) as instructed in the grant access key permissions step.
-
-1. Use a simplified construction of the `encryptionKey` that omits the Active Directory properties.
-
- ```json
- {
- "encryptionKey": {
- "keyVaultUri": "https://demokeyvault.vault.azure.net",
- "keyVaultKeyName": "myEncryptionKey",
- "keyVaultKeyVersion": "eaab6a663d59439ebb95ce2fe7d5f660"
- }
- }
- ```
-
-Conditions that will prevent you from adopting this simplified approach include:
-
-+ You cannot directly grant your search service access permissions to the Key vault (for example, if the search service is in a different Active Directory tenant than the Azure Key Vault).
-
-+ A single search service is required to host multiple encrypted indexes\synonym maps, each using a different key from a different Key vault, where each key vault must use **a different identity** for authentication. Because a search service can only have one managed identity, a requirement for multiple identities will disqualify the simplified approach for your scenario.
-
-### User-assigned managed identity (preview)
-
-> [!IMPORTANT]
-> User-assigned managed identity support is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> The REST API version 2021-04-30-Preview and [Management REST API 2021-04-01-Preview](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) provide this feature.
-
-Assigning a user-assigned managed identity to your search service will enable your search service to authenticate to Azure Key Vault without storing credentials (ApplicationID or ApplicationSecret) in code. The lifecycle of this type of managed identity is independent to the lifecycle of your search service. For more information about how managed identities work, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
-
-1. If you don't already have a user-assigned managed identity created, you'll need to create one. To create a user-assigned managed identity follow the below steps:
-
- 1. Sign into the [Azure portal](https://portal.azure.com/).
- 1. Select **+ Create a new resource**.
- 1. In the "Search services and marketplace" search bar, search for "User Assigned Managed Identity" and then select **Create**.
- 1. Give the identity a descriptive name.
-
-1. Next, assign the user-assigned managed identity to the search service. This can be done using the [2021-04-01-preview](/rest/api/searchmanagement/management-api-versions) management API.
-
- The identity property takes a type and one or more fully-qualified user-assigned identities:
-
- * **type** is the type of identity used for the resource. The type 'SystemAssigned, UserAssigned' includes both an identity created by the system and a set of user assigned identities. The type 'None' will remove all identities from the service.
- * **userAssignedIdentities** includes the details of the user assigned managed identity.
- * User-assigned managed identity format:
- * /subscriptions/**subscription ID**/resourcegroups/**resource group name**/providers/Microsoft.ManagedIdentity/userAssignedIdentities/**managed identity name**
-
- Example of how to assign a user-assigned managed identity to a search service:
-
- ```http
- PUT https://management.azure.com/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Search/searchServices/[search service name]?api-version=2021-04-01-preview
- Content-Type: application/json
-
- {
- "location": "[region]",
- "sku": {
- "name": "[sku]"
- },
- "properties": {
- "replicaCount": [replica count],
- "partitionCount": [partition count],
- "hostingMode": "default"
- },
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/[subscription ID]/resourcegroups/[resource group name]/providers/Microsoft.ManagedIdentity/userAssignedIdentities/[managed identity name]": {}
- }
- }
- }
- ```
-
-1. When setting up an access policy in Azure Key Vault, choose the user-assigned managed identity as the principle (instead of the AD-registered application). Assign the same permissions (multiple GETs, WRAP, UNWRAP) as instructed in the grant access key permissions step.
-
-1. Use a simplified construction of the `encryptionKey` that omits the Active Directory properties and add an identity property. Make sure to use the 2021-04-30-preview REST API version.
-
- ```json
- {
- "encryptionKey": {
- "keyVaultUri": "https://[key vault name].vault.azure.net",
- "keyVaultKeyName": "[key vault key name]",
- "keyVaultKeyVersion": "[key vault key version]",
- "identity" : {
- "@odata.type": "#Microsoft.Azure.Search.DataUserAssignedIdentity",
- "userAssignedIdentity" : "/subscriptions/[subscription ID]/resourceGroups/[resource group name]/providers/Microsoft.ManagedIdentity/userAssignedIdentities/[managed identity name]"
- }
- }
- }
- ```
+> Failing to do so will render the index or synonym map unusable, as it won't be able to decrypt the content once key access is lost. However, restoring key vault access permissions at a later time will restore content access.
## Work with encrypted content
-With customer-managed key encryption, you will notice latency for both indexing and queries due to the extra encrypt/decrypt work. Azure Cognitive Search does not log encryption activity, but you can monitor key access through key vault logging. We recommend that you [enable logging](../key-vault/general/logging.md) as part of key vault configuration.
+With customer-managed key encryption, you'll notice latency for both indexing and queries due to the extra encrypt/decrypt work. Azure Cognitive Search does not log encryption activity, but you can monitor key access through key vault logging. We recommend that you [enable logging](../key-vault/general/logging.md) as part of key vault configuration.
Key rotation is expected to occur over time. Whenever you rotate keys, it's important to follow this sequence:
Key rotation is expected to occur over time. Whenever you rotate keys, it's impo
1. [Update the encryptionKey properties](/rest/api/searchservice/update-index) on an index or synonym map to use the new values. Only objects that were originally created with this property can be updated to use a different value.

1. Disable or delete the previous key in the key vault. Monitor key access to verify the new key is being used.
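The update-before-delete sequence amounts to swapping the key version inside the object's "encryptionKey" and pushing the updated definition before retiring the old version. A minimal sketch, where the index name and version strings are hypothetical:

```python
# Sketch only: producing an updated index definition that points at a new key
# version, before the previous version is disabled or deleted.
# The index name and both version strings are hypothetical.
def rotate_key_version(definition: dict, new_version: str) -> dict:
    updated = dict(definition)
    # Replace only the version; vault URI and key name stay the same.
    updated["encryptionKey"] = {**definition["encryptionKey"], "keyVaultKeyVersion": new_version}
    return updated

index_definition = {
    "name": "my-index",
    "encryptionKey": {
        "keyVaultUri": "https://demokeyvault.vault.azure.net",
        "keyVaultKeyName": "myEncryptionKey",
        "keyVaultKeyVersion": "eaab6a663d59439ebb95ce2fe7d5f660",
    },
}

rotated = rotate_key_version(index_definition, "1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f")
# Send "rotated" in an Update Index request, then retire the old key version.
```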
-For performance reasons, the search service caches the key for up to several hours. If you disable or delete the key without providing a new one, queries will continue to work on a temporary basis until the cache expires. However, once the search service cannot decrypt content, you will get this message: "Access forbidden. The query key used might have been revoked - please retry."
+For performance reasons, the search service caches the key for up to several hours. If you disable or delete the key without providing a new one, queries will continue to work on a temporary basis until the cache expires. However, once the search service can no longer decrypt content, you'll get this message: "Access forbidden. The query key used might have been revoked - please retry."
## Next steps
-If you are unfamiliar with Azure security architecture, review the [Azure Security documentation](../security/index.yml), and in particular, this article:
+If you're unfamiliar with Azure security architecture, review the [Azure Security documentation](../security/index.yml), and in particular, this article:
> [!div class="nextstepaction"]
> [Data encryption-at-rest](../security/fundamentals/encryption-atrest.md)
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-overview.md
This article describes the security features in Azure Cognitive Search that protect data and operations.
-## Data flow and points of entry
+## Data flow (network traffic patterns)
A search service is hosted on Azure and is typically accessed by client applications using public network connections. Understanding the search service's points of entry and network traffic patterns is useful background for setting up development and production environments.
Cognitive Search has three basic network traffic patterns:
+ Outbound requests issued by the search service to other services on Azure and elsewhere
+ Internal service-to-service requests over the secure Microsoft backbone network
-Inbound requests range from creating objects, loading data, and querying. For inbound access to data and operations, you can implement a progression of security measures, starting with API keys on the request. You can then supplement with either inbound rules in an IP firewall, or create private endpoints that fully shield your service from the public internet. You can also use Azure Active Directory and role-based access control for data plane operations (currently in preview).
+### Inbound traffic
-Outbound requests can include both read and write operations. The primary agent of an outbound call is an indexer and constituent skillsets. For indexers, read operations include [document cracking](search-indexer-overview.md#document-cracking) and data ingestion. An indexer can also write to Azure Storage when creating knowledge stores, persisting cached enrichments, and persisting debug sessions. Finally, a skillset can also include custom skills that run external code, for example in Azure Functions or in a web app.
+Inbound requests that target a search service endpoint consist of creating objects, processing data, and querying an index.
-Internal requests include service-to-service calls for tasks like diagnostic logging, encryption, authentication and authorization through Azure Active Directory, private endpoint connections, and requests made to Cognitive Services for built-in skills.
+For inbound access to data and operations on your search service, you can implement a progression of security measures, starting with API keys on the request. You can also use Azure Active Directory and role-based access control for data plane operations (currently in preview). You can then supplement with [network security features](#service-access-and-authentication), either inbound rules in an IP firewall, or private endpoints that fully shield your service from the public internet.
-## Network security
+### Outbound traffic
+
+Outbound requests from a search service to other applications are typically made by indexers. Outbound requests include both read and write operations:
+
++ Indexers connect to external data sources to read data for indexing.
++ Indexers can also write to Azure Storage when creating knowledge stores, persisting cached enrichments, and persisting debug sessions.
++ A custom skill runs external code that's hosted off-service. An indexer sends the request for external processing during skillset execution.
+
+Indexer connections can be made under a managed identity if you're using Azure Active Directory, or a connection string that includes shared access keys or a database login.
+
+### Internal traffic
+
+Internal requests are secured and managed by Microsoft. Internal traffic consists of service-to-service calls for tasks like authentication and authorization through Azure Active Directory, diagnostic logging in Azure Monitor, encryption, private endpoint connections, and requests made to Cognitive Services for built-in skills.
<a name="service-access-and-authentication"></a>
+## Network security
+
Inbound security features protect the search service endpoint through increasing levels of security and complexity. Cognitive Search uses [key-based authentication](search-security-api-keys.md), where all requests require an API key for authenticated access. Optionally, you can implement additional layers of control by setting firewall rules that limit access to specific IP addresses. For advanced protection, you can enable Azure Private Link to shield your service endpoint from all internet traffic.
At the storage layer, data encryption is built in for all service-managed conten
In Azure Cognitive Search, encryption starts with connections and transmissions, and extends to content stored on disk. For search services on the public internet, Azure Cognitive Search listens on HTTPS port 443. All client-to-service connections use TLS 1.2 encryption. Earlier versions (1.0 or 1.1) are not supported.
-### Encrypted data at rest
+### Data at rest
For data handled internally by the search service, the following table describes the [data encryption models](../security/fundamentals/encryption-models.md). Some features, such as knowledge store, incremental enrichment, and indexer-based indexing, read from or write to data structures in other Azure Services. Those services have their own levels of encryption support separate from Azure Cognitive Search.
For data handled internally by the search service, the following table describes
| server-side encryption | customer-managed keys | Azure Key Vault | Available on billable tiers, in all regions, for content created after January 2019. | Content (indexes and synonym maps) on data disks |
| server-side double encryption | customer-managed keys | Azure Key Vault | Available on billable tiers, in selected regions, on search services after August 1 2020. | Content (indexes and synonym maps) on data disks and temporary disks |
-### Service-managed keys
+#### Service-managed keys
Service-managed encryption is a Microsoft-internal operation, based on [Azure Storage Service Encryption](../storage/common/storage-service-encryption.md), using 256-bit [AES encryption](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard). It occurs automatically on all indexing, including on incremental updates to indexes that are not fully encrypted (created before January 2018).
-### Customer-managed keys (CMK)
+#### Customer-managed keys (CMK)
Customer-managed keys require an additional billable service, Azure Key Vault, which can be in a different region, but under the same subscription, as Azure Cognitive Search. Enabling CMK encryption will increase index size and degrade query performance. Based on observations to date, you can expect to see an increase of 30%-60% in query times, although actual performance will vary depending on the index definition and types of queries. Because of this performance impact, we recommend that you only enable this feature on indexes that really require it. For more information, see [Configure customer-managed encryption keys in Azure Cognitive Search](search-security-manage-encryption-keys.md).
<a name="double-encryption"></a>
-### Double encryption
+#### Double encryption
In Azure Cognitive Search, double encryption is an extension of CMK. It is understood to be two-fold encryption (once by CMK, and again by service-managed keys), and comprehensive in scope, encompassing long-term storage that is written to a data disk, and short-term storage written to temporary disks. Double encryption is implemented in services created after specific dates. For more information, see [Double encryption](search-security-manage-encryption-keys.md#double-encryption).
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/ci-cd.md
>
> The Microsoft Sentinel **Repositories** page is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Microsoft Sentinel *content* is Security Information and Event Management (SIEM) that assists customers with ingesting, monitoring, alerting, hunting, and more in Microsoft Sentinel. For example, Microsoft Sentinel content includes data connectors, parsers, workbooks, and analytics rules. For more information, see [About Microsoft Sentinel content and solutions](sentinel-solutions.md).
+Microsoft Sentinel *content* is Security Information and Event Management (SIEM) or Security Orchestration, Automation, and Response (SOAR) resources that assist customers with ingesting, monitoring, alerting, hunting, automating response, and more in Microsoft Sentinel. For example, Microsoft Sentinel content includes data connectors, parsers, workbooks, and analytics rules. For more information, see [About Microsoft Sentinel content and solutions](sentinel-solutions.md).
You can use the out-of-the-box (built-in) content provided in the Microsoft Sentinel Content hub and customize it for your own needs, or create your own custom content from scratch.
sentinel Connect Azure Windows Microsoft Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-windows-microsoft-services.md
Connectors of this type use Azure Policy to apply a single diagnostic settings c
>
> With this type of data connector, the connectivity status indicators (a color stripe in the data connectors gallery and connection icons next to the data type names) will show as *connected* (green) only if data has been ingested at some point in the past 14 days. Once 14 days have passed with no data ingestion, the connector will show as being disconnected. The moment more data comes through, the *connected* status will return.
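The 14-day rule in the note above amounts to a simple date comparison. A minimal sketch with hypothetical dates (illustrative only; this is not how the service computes status internally):

```shell
# Illustrative sketch: a connector shows as "connected" only if data was
# ingested within the past 14 days. Dates below are hypothetical.
last_ingestion="2022-01-10"
today="2022-01-26"

# Whole days elapsed between the two dates (GNU date).
elapsed=$(( ( $(date -d "$today" +%s) - $(date -d "$last_ingestion" +%s) ) / 86400 ))

if [ "$elapsed" -le 14 ]; then
  status="connected"
else
  status="disconnected"
fi

echo "days since last ingestion: $elapsed -> $status"
```

With these sample dates, 16 days have elapsed, so the sketch reports the connector as disconnected.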
-You can find and query the data for each resource type using the table name that appears in the section for the resource's connector in the [Data connectors reference](data-connectors-reference.md) page.
+You can find and query the data for each resource type using the table name that appears in the section for the resource's connector in the [Data connectors reference](data-connectors-reference.md) page. For more information, see [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](/azure/azure-monitor/essentials/diagnostic-settings?tabs=CMD) in the Azure Monitor documentation.
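As a rough sketch of what such a diagnostic setting looks like from the CLI (all names here are hypothetical, and the set of available log categories varies by resource type; "AuditEvent" is just an example category):

```shell
# Hypothetical log-category configuration for a diagnostic setting; the
# category names accepted depend on the target resource type.
LOGS='[{"category": "AuditEvent", "enabled": true}]'
echo "$LOGS"

# To apply it (placeholders for the resource and workspace IDs):
# az monitor diagnostic-settings create \
#   --name "toLogAnalytics" \
#   --resource "<resource-id>" \
#   --workspace "<workspace-id>" \
#   --logs "$LOGS"
```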
## Windows agent-based connections
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/immutable-storage-overview.md
This table shows how this feature is supported in your account and the impact on
- [Data protection overview](data-protection-overview.md)
- [Time-based retention policies for immutable blob data](immutable-time-based-retention-policy-overview.md)
- [Legal holds for immutable blob data](immutable-legal-hold-overview.md)
-- [Configure immutability policies for blob versions (preview)](immutable-policy-configure-version-scope.md)
+- [Configure immutability policies for blob versions](immutable-policy-configure-version-scope.md)
- [Configure immutability policies for containers](immutable-policy-configure-container-scope.md)
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/monitor-blob-storage.md
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) <sup>2</sup>|![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
+| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) <sup>2</sup>|![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
### Metrics in Azure Monitor
storage Quickstart Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/quickstart-storage-explorer.md
Title: Quickstart - Create a blob with Azure Storage Explorer
description: Learn how to use Azure Storage Explorer to create a container and a blob, download the blob to your local computer, and view all of the blobs in the container. -+ Last updated 10/28/2021-+
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The items that appear in these tables will change over time as support continues
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
The items that appear in these tables will change over time as support continues
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
| [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
storage Storage Quickstart Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-quickstart-blobs-cli.md
Title: 'Quickstart: Upload, download, and list blobs - Azure CLI'
description: In this quickstart, you learn how to use the Azure CLI upload a blob to Azure Storage, download a blob, and list the blobs in a container. -+ Last updated 08/17/2020-+
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
Title: "Quickstart: Azure Blob Storage library v12 - .NET" description: In this quickstart, you will learn how to use the Azure Blob Storage client library version 12 for .NET to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.--++ Last updated 10/06/2021
storage Storage Quickstart Blobs Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-quickstart-blobs-portal.md
Title: 'Quickstart: Upload, download, and list blobs - Azure portal'
description: In this quickstart, you use the Azure portal in object (Blob) storage. Then you use the Azure portal to upload a blob to Azure Storage, download a blob, and list the blobs in a container. -+ Last updated 10/25/2021-+
storage Storage Quickstart Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-quickstart-blobs-powershell.md
Title: 'Quickstart: Upload, download, and list blobs - Azure PowerShell'
description: In this quickstart, you use Azure PowerShell in object (Blob) storage. Then you use PowerShell to upload a blob to Azure Storage, download a blob, and list the blobs in a container. -+ Last updated 03/31/2020-+
storage Upgrade To Data Lake Storage Gen2 How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
description: Shows you how to use Resource Manager templates to upgrade from Azu
Previously updated : 10/12/2021 Last updated : 01/25/2022
# Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities
-This article helps you unlock capabilities such as file and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage Gen2.
+This article helps you to enable a hierarchical namespace and unlock capabilities such as file and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage Gen2.
To learn more about these capabilities and evaluate the impact of this upgrade on workloads, applications, costs, service integrations, tools, features, and documentation, see [Upgrading Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2.md).
storage Upgrade To Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/upgrade-to-data-lake-storage-gen2.md
description: Description goes here.
Previously updated : 10/12/2021 Last updated : 01/25/2022
# Upgrading Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities
-This article helps you unlock capabilities such as file and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage Gen2. The most popular capabilities include:
+This article helps you to enable a hierarchical namespace and unlock capabilities such as file and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage Gen2. The most popular capabilities include:
- Higher throughput, input/output operations per second (IOPS), and storage capacity limits.
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/sas-expiration-policy.md
The SAS expiration period appears in the console output.
To log the creation of a SAS that is valid over a longer interval than the SAS expiration policy recommends, first create a diagnostic setting that sends logs to an Azure Log Analytics workspace. For more information, see [Send logs to Azure Log Analytics](../blobs/monitor-blob-storage.md#send-logs-to-azure-log-analytics).
-Next, use an Azure Monitor log query to determine whether a SAS has expired. Create a new query in your Log Analytics workspace, add the following query text, and press **Run**.
+Next, use an Azure Monitor log query to monitor whether the policy has been violated. Create a new query in your Log Analytics workspace, add the following query text, and press **Run**.
```kusto
StorageBlobLogs | where SasExpiryStatus startswith "Policy Violated"
```
StorageBlobLogs | where SasExpiryStatus startswith "Policy Violated"
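If you'd rather run the query from a script than from the portal, one option is the `az monitor log-analytics query` command. A sketch, assuming a placeholder workspace GUID (the command requires signing in and, on older CLI versions, the `log-analytics` extension):

```shell
# Hypothetical workspace ID; replace with your Log Analytics workspace GUID.
WORKSPACE_ID="00000000-0000-0000-0000-000000000000"

# The same KQL used in the portal query.
QUERY='StorageBlobLogs | where SasExpiryStatus startswith "Policy Violated"'
echo "$QUERY"

# To execute it from a script:
# az monitor log-analytics query --workspace "$WORKSPACE_ID" --analytics-query "$QUERY"
```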
- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md)
- [Create a service SAS](/rest/api/storageservices/create-service-sas)
- [Create a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas)
-- [Create an account SAS](/rest/api/storageservices/create-account-sas)
+- [Create an account SAS](/rest/api/storageservices/create-account-sas)
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-keys-manage.md
Previously updated : 01/04/2022 Last updated : 01/25/2022
$storageAccountKey = `
To list your account access keys with Azure CLI, call the [az storage account keys list](/cli/azure/storage/account/keys#az_storage_account_keys_list) command, as shown in the following example. Remember to replace the placeholder values in brackets with your own values.
-```azurecli-interactive
+```azurecli
az storage account keys list \
    --resource-group <resource-group> \
    --account-name <storage-account>
To rotate your storage account access keys with Azure CLI:
1. Update the connection strings in your application code to reference the secondary access key for the storage account.
1. Call the [az storage account keys renew](/cli/azure/storage/account/keys#az_storage_account_keys_renew) command to regenerate the primary access key, as shown in the following example:
- ```azurecli-interactive
+ ```azurecli
    az storage account keys renew \
        --resource-group <resource-group> \
        --account-name <storage-account> \
To create a key expiration policy in the Azure portal:
1. In the [Azure portal](https://portal.azure.com), go to your storage account.
1. Under **Security + networking**, select **Access keys**. Your account access keys appear, as well as the complete connection string for each key.
-1. Select the **Set rotation reminder** link.
+1. Select the **Set rotation reminder** button. If the **Set rotation reminder** button is grayed out, you will need to rotate each of your keys. Follow the steps described in [Manually rotate access keys](#manually-rotate-access-keys) to rotate the keys.
1. In **Set a reminder to rotate access keys**, select the **Enable key rotation reminders** checkbox and set a frequency for the reminder.
1. Select **Save**.
To create a key expiration policy in the Azure portal:
To create a key expiration policy with PowerShell, use the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) command and set the `-KeyExpirationPeriodInDay` parameter to the interval in days until the access key should be rotated.
-```powershell
-$account = Set-AzStorageAccount -ResourceGroupName <resource-group> -Name `
- <storage-account-name> -KeyExpirationPeriodInDay <period-in-days>
-```
+The `KeyCreationTime` property indicates when the account access keys were created or last rotated. Older accounts may have a null value for the `KeyCreationTime` property because it has not yet been set. If the `KeyCreationTime` property is null, you cannot create a key expiration policy until you rotate the keys. For this reason, it's a good idea to check the `KeyCreationTime` property for the storage account before you attempt to set the key expiration policy.
-You can also set the key expiration policy as you create a storage account by setting the `-KeyExpirationPeriodInDay` parameter of the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) command.
+The following example checks whether the `KeyCreationTime` property has been set for each key. If the `KeyCreationTime` property has a value, then a key expiration policy is created for the storage account. Remember to replace the placeholder values in brackets with your own values.
-To verify that the policy has been applied, check the storage account's `KeyPolicy` property.
+```azurepowershell
+$rgName = "<resource-group>"
+$accountName = "<account-name>"
-```powershell
-$account.KeyPolicy
-```
+$account = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName
-The key expiration period appears in the console output.
+# Check whether the KeyCreationTime property has a value for each key
+# before creating the key expiration policy.
+if ($account.KeyCreationTime.Key1 -eq $null -or $account.KeyCreationTime.Key2 -eq $null)
+{
+ Write-Host("You must regenerate both keys at least once before setting expiration policy")
+}
+else
+{
+ $account = Set-AzStorageAccount -ResourceGroupName $rgName -Name `
+ $accountName -KeyExpirationPeriodInDay 60
+}
+```
-> [!div class="mx-imgBorder"]
-> ![access key expiration period](./media/storage-account-keys-manage/key-policy-powershell.png)
+You can also set the key expiration policy as you create a storage account by setting the `-KeyExpirationPeriodInDay` parameter of the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) command.
-To find out when a key was created or last rotated, use the `KeyCreationTime` property.
+To verify that the policy has been applied, check the storage account's `KeyPolicy` property.
```powershell
-$account.KeyCreationTime
+$account.KeyPolicy
```
-The access key creation time for both access keys appears in the console output.
-
-> [!div class="mx-imgBorder"]
-> ![access key creation times](./media/storage-account-keys-manage/key-creation-time-powershell.png)
- ### [Azure CLI](#tab/azure-cli)
-To create a key expiration policy with Azure CLI, use the [az storage account update](/cli/azure/storage/account#az_storage_account_update) command and set the `--key-exp-days` parameter to the number of days an access key can be active before it should be rotated.
-
-```azurecli-interactive
-az storage account update \
- -n <storage-account-name> \
- -g <resource-group> --key-exp-days <period-in-days>
+To create a key expiration policy with Azure CLI, use the [az storage account update](/cli/azure/storage/account#az_storage_account_update) command and set the `--key-exp-days` parameter to the interval in days until the access key should be rotated.
+
+The `keyCreationTime` property indicates when the account access keys were created or last rotated. Older accounts may have a null value for the `keyCreationTime` property because it has not yet been set. If the `keyCreationTime` property is null, you cannot create a key expiration policy until you rotate the keys. For this reason, it's a good idea to check the `keyCreationTime` property for the storage account before you attempt to set the key expiration policy.
+
+The following example checks whether the `keyCreationTime` property has been set for each key. If the `keyCreationTime` property has a value, then a key expiration policy is created for the storage account. Remember to replace the placeholder values in brackets with your own values.
+
+```azurecli
+key1_create_time=$(az storage account show \
+ --name <storage-account> \
+ --resource-group <resource-group> \
+ --query 'keyCreationTime.key1' \
+ --output tsv)
+key2_create_time=$(az storage account show \
+ --name <storage-account> \
+ --resource-group <resource-group> \
+ --query 'keyCreationTime.key2' \
+ --output tsv)
+
+if [ -z "$key1_create_time" ] || [ -z "$key2_create_time" ];
+then
+ echo "You must regenerate both keys at least once before setting expiration policy"
+else
+ az storage account update \
+ --name <storage-account> \
+ --resource-group <resource-group> \
+ --key-exp-days 60
+fi
```
-You can also set the key expiration policy as you create a storage account by setting the `-KeyExpirationPeriodInDay` parameter of the [az storage account create](/cli/azure/storage/account#az_storage_account_create) command.
+You can also set the key expiration policy as you create a storage account by setting the `--key-exp-days` parameter of the [az storage account create](/cli/azure/storage/account#az_storage_account_create) command.
To verify that the policy has been applied, call the [az storage account show](/cli/azure/storage/account#az_storage_account_show) command, and use the string `{KeyPolicy:keyPolicy}` for the `--query` parameter.
-```azurecli-interactive
+```azurecli
az storage account show \
    -n <storage-account-name> \
    -g <resource-group-name> \
The key expiration period appears in the console output.
```json { "KeyPolicy": {
- "keyExpirationPeriodInDays": 5
+ "enableAutoRotation": false,
+ "keyExpirationPeriodInDays": 60
  }
}
```
-To find out when a key was created or last rotated, use the [az storage account show](/cli/azure/storage/account#az_storage_account_show) command, and then use the string `keyCreationTime` for the `-query` parameter.
-
-```azurecli-interactive
-az storage account show \
- -n <storage-account-name> \
- -g <resource-group-name> \
- --query "keyCreationTime"
-```
- ## Check for key expiration policy violations
storage Storage Explorer Support Policy Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-explorer-support-policy-lifecycle.md
This table describes the release date and the end of support date for each relea
| Storage Explorer version | Release date | End of support date |
|:-:|:-:|:-:|
+| v1.22.1 | January 25, 2022 | January 25, 2023 |
+| v1.22.0 | December 14, 2021 | December 14, 2022 |
+| v1.21.3 | October 25, 2021 | October 25, 2022 |
| v1.21.2 | September 28, 2021 | September 28, 2022 |
| v1.21.1 | September 22, 2021 | September 22, 2022 |
| v1.21.0 | September 8, 2021 | September 8, 2022 |
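As the table shows, each Storage Explorer release is supported for one year from its release date, so the end-of-support date is a simple one-year offset. A quick sketch using the v1.22.1 release date:

```shell
# Each release is supported for one year from its release date, so the
# end-of-support date is the release date plus one year (GNU date).
release="2022-01-25"   # v1.22.1 release date from the table
eos=$(date -d "$release + 1 year" +%Y-%m-%d)
echo "End of support: $eos"
```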
storage Storage Use Azcopy Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-configure.md
To filter the transfers by status, use the following command:
azcopy jobs show <job-id> --with-status=Failed
```
+> [!TIP]
+> The value of the `--with-status` flag is case-sensitive.
+
Use the following command to resume a failed/canceled job. This command uses the job's identifier along with the SAS token, which isn't persisted for security reasons:

```
storage Files Manage Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/files-manage-namespaces.md
In the DFS Management console, select the namespace you just created and select
![A screenshot of the **New Folder** dialog.](./media/files-manage-namespaces/dfs-folder-targets.png)
-In the textbox labeled **Name** provide the name of the folder. Select **Add...** to add folder targets for this folder. The resulting **Add Folder Target** dialog provides a textbox labeled **Path to folder target** where you can provide the UNC path to the desired folder. Select **OK** on the **Add Folder Target** dialog. If you are adding a UNC path to an Azure file share, you may receive a message reporting that the server `storageaccount.file.core.windows.net` cannot be contacts. This is expected; select **Yes** to continue. Finally, select **OK** on the **New Folder** dialog to create the folder and folder targets.
+In the textbox labeled **Name** provide the name of the folder. Select **Add...** to add folder targets for this folder. The resulting **Add Folder Target** dialog provides a textbox labeled **Path to folder target** where you can provide the UNC path to the desired folder. Select **OK** on the **Add Folder Target** dialog. If you are adding a UNC path to an Azure file share, you may receive a message reporting that the server `storageaccount.file.core.windows.net` cannot be contacted. This is expected; select **Yes** to continue. Finally, select **OK** on the **New Folder** dialog to create the folder and folder targets.
# [PowerShell](#tab/azure-powershell)

```PowerShell
synapse-analytics Overview Database Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/database-designer/overview-database-templates.md
These information blueprints can be used by organizations to plan, architect, an
For example, if you're building a product recommendation solution for your retail customers, you'll need a basic blueprint to understand what the customer purchased and the transaction that led to the purchase. You may also need information about the store where the purchase was made, and whether the customer is part of a loyalty program. To accomplish just this use case, you need the following schemas:
+* Product
+* Transaction
+* TransactionLineItem
+* Customer
+* CustomerLoyalty
+* Store
You can set up this use case by selecting the six tables in the retail database template.
You can set up this use case by selecting the six tables in the retail database
A typical database template addresses the core requirements of a specific industry and consists of:
+* A supporting set of [business area templates](concepts-database-templates.md#business-area-templates).
+* One or more [enterprise templates](concepts-database-templates.md#enterprise-templates).
## Available database templates

Currently, you can choose from 11 database templates in Azure Synapse Studio to start creating your lake database:
+* **Agriculture** - For companies engaged in growing crops, raising livestock, and dairy production.
+* **Automotive** - For companies manufacturing automobiles, heavy vehicles, tires, and other automotive components.
+* **Banking** - For companies that analyze banking data.
+* **Consumer Goods** - For manufacturers or producers of goods bought and used by consumers.
+* **Energy & Commodity Trading** - For traders of energy, commodities, or carbon credits.
+* **Freight & Logistics** - For companies that provide freight and logistics services.
+* **Fund Management** - For companies that manage investment funds for investors.
+* **Genomics** - For companies acquiring and analyzing genomic data about human beings or other species.
+* **Life Insurance & Annuities** - For companies that provide life insurance, sell annuities, or both.
+* **Manufacturing** - For companies engaged in discrete manufacturing of a wide range of products.
+* **Oil & Gas** - For companies that are involved in various phases of the Oil & Gas value chain.
+* **Pharmaceuticals** - For companies engaged in creating, manufacturing, and marketing pharmaceutical and bio-pharmaceutical products and medical devices.
+* **Property & Casualty Insurance** - For companies that provide insurance against risks to property and various forms of liability coverage.
+* **Retail** - For sellers of consumer goods or services to customers through multiple channels.
+* **Utilities** - For gas, electric, and water utilities; power generators; and water desalinators.
As emission and carbon management is an important discussion in all industries, we've included those components in all the available database templates. These components make it easy for companies that need to track and report their direct and indirect greenhouse gas emissions.

## Next steps
+
Continue to explore the capabilities of the database designer using the links below.
-- [Database templates concept](concepts-database-templates.md)
-- [Quick start](quick-start-create-lake-database.md)
+* [Database templates concept](concepts-database-templates.md)
+* [Quick start](quick-start-create-lake-database.md)
synapse-analytics How To Connect To Workspace From Restricted Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network.md
On the **Resource** tab:
To access the linked storage with the storage explorer in Azure Synapse Analytics Studio workspace, you must create one private endpoint. The steps for this are similar to those of step 3. On the **Resource** tab:
-* For **Resource type**, select **Microsoft.Synapse/storageAccounts**.
+* For **Resource type**, select **Microsoft.Storage/storageAccounts**.
* For **Resource**, select the storage account name that you created previously.
* For **Target sub-resource**, select the endpoint type:
    * **blob** is for Azure Blob Storage.
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/create-profile-container-azure-ad.md
Previously updated : 12/01/2021
Last updated : 01/24/2022

# Create a profile container with Azure Files and Azure Active Directory (preview)
The Azure AD Kerberos functionality is only available on the following operating
- Windows 10 Enterprise single or multi-session, versions 2004 or later with the latest cumulative updates installed, especially the [KB5007253 - 2021-11 Cumulative Update Preview for Windows 10](https://support.microsoft.com/topic/november-22-2021-kb5007253-os-builds-19041-1387-19042-1387-19043-1387-and-19044-1387-preview-d1847be9-46c1-49fc-bf56-1d469fc1b3af).
- Windows Server, version 2022 with the latest cumulative updates installed, especially the [KB5007254 - 2021-11 Cumulative Update Preview for Microsoft server operating system version 21H2](https://support.microsoft.com/topic/november-22-2021-kb5007254-os-build-20348-380-preview-9a960291-d62e-486a-adcc-6babe5ae6fc1).
-The user accounts must be [hybrid user identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means you'll also need Active Directory Domain Services (AD DS) and Azure AD Connect. You must create these accounts in Active Directory and sync them to Azure AD.
+The user accounts must be [hybrid user identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means you'll also need Active Directory Domain Services (AD DS) and Azure AD Connect. You must create these accounts in Active Directory and sync them to Azure AD. The service doesn't currently support environments where users are managed with Azure AD and optionally synced to Azure AD Domain Services.
To assign Azure Role-Based Access Control (RBAC) permissions for the Azure file share to a user group, you must create the group in Active Directory and sync it to Azure AD.
Follow the instructions in the following sections to configure Azure AD authenti
For more information, see [Install the Azure AD PowerShell module](/powershell/azure/active-directory/install-adv2).

-- Set variables for both storage account name and resource group name by running the following PowerShell cmdlets, replacing the values with the ones relevant to your environment.
+- Set the required variables for your tenant, subscription, storage account name and resource group name by running the following PowerShell cmdlets, replacing the values with the ones relevant to your environment.
```powershell
+ $tenantId = "<MyTenantId>"
+ $subscriptionId = "<MySubscriptionId>"
    $resourceGroupName = "<MyResourceGroup>"
    $storageAccountName = "<MyStorageAccount>"
    ```
Follow the instructions in the following sections to configure Azure AD authenti
- Enable Azure AD authentication on your storage account by running the following PowerShell cmdlets:

    ```powershell
- Connect-AzAccount
- $Subscription = $(Get-AzContext).Subscription.Id;
- $ApiVersion = '2021-04-01'
+ Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId
- $Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version={3}' -f $Subscription, $ResourceGroupName, $StorageAccountName, $ApiVersion);
+ $Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01' -f $subscriptionId, $resourceGroupName, $storageAccountName);
- $json =
- @{properties=@{azureFilesIdentityBasedAuthentication=@{directoryServiceOptions="AADKERB"}}};
+ $json = @{properties=@{azureFilesIdentityBasedAuthentication=@{directoryServiceOptions="AADKERB"}}};
    $json = $json | ConvertTo-Json -Depth 99

    $token = $(Get-AzAccessToken).Token
To enable Azure AD authentication on a storage account, you need to create an Az
    ```powershell
    $Token = ([Microsoft.Open.Azure.AD.CommonLibrary.AzureSession]::AccessTokens['AccessToken']).AccessToken
- $apiVersion = '1.6'
- $Uri = ('https://graph.windows.net/{0}/{1}/{2}?api-version={3}' -f $azureAdPrimaryDomain, 'servicePrincipals', $servicePrincipal.ObjectId, $apiVersion)
+ $Uri = ('https://graph.windows.net/{0}/{1}/{2}?api-version=1.6' -f $azureAdPrimaryDomain, 'servicePrincipals', $servicePrincipal.ObjectId)
$json = @' { "passwordCredentials": [
To enable Azure AD authentication on a storage account, you need to create an Az
    '@
    $now = [DateTime]::UtcNow
    $json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddDays(-1).ToString("s")
- $json = $json -replace "<STORAGEACCOUNTENDDATE>", $now.AddMonths(12).ToString("s")
+ $json = $json -replace "<STORAGEACCOUNTENDDATE>", $now.AddMonths(6).ToString("s")
$json = $json -replace "<STORAGEACCOUNTPASSWORD>", $password $Headers = @{'authorization' = "Bearer $($Token)"} try {
All users that need to have FSLogix profiles stored on the storage account you'r
### Assign directory level access permissions
-To prevent users from accessing the user profile of other users, you must also assign directory-level permissions. This section provides the steps to configure the permissions. Learn more about the recommended list of permissions for FSLogix profiles at [Configure the storage permissions for profile containers](/fslogix/fslogix-storage-config-ht)
+To prevent users from accessing the user profile of other users, you must also assign directory-level permissions. This section provides a step-by-step guide to configuring those permissions.
> [!IMPORTANT]
> Without proper directory-level permissions in place, a user can delete the user profile or access the personal information of a different user. It's important to make sure users have proper permissions to prevent accidental deletion from happening.
To configure your storage account:
1. On a device that's domain-joined to the Active Directory, install the [ActiveDirectory PowerShell module](/powershell/module/activedirectory/?view=windowsserver2019-ps&preserve-view=true) if you haven't already.
-2. Set the storage account's ActiveDirectoryProperties to support the Shell experience. Because Azure AD doesn't currently support configuring ACLs in Shell, it must instead rely on Active Directory. To configure Shell, run the following command in PowerShell:
+2. Set the required variables for your tenant, subscription, storage account name and resource group name by running the following PowerShell cmdlets, replacing the values with the ones relevant to your environment. You can skip this step if you've already set these values.
+ ```powershell
- function Set-StorageAccountAadKerberosADProperties {
- [CmdletBinding()]
- param(
- [Parameter(Mandatory=$true, Position=0)]
- [string]$ResourceGroupName,
-
- [Parameter(Mandatory=$true, Position=1)]
- [string]$StorageAccountName,
-
- [Parameter(Mandatory=$false, Position=2)]
- [string]$Domain
- )
-
- $AzContext = Get-AzContext;
- if ($null -eq $AzContext) {
- Write-Error "No Azure context found. Please run Connect-AzAccount and then retry." -ErrorAction Stop;
- }
-
- $AdModule = Get-Module ActiveDirectory;
- if ($null -eq $AdModule) {
- Write-Error "Please install and/or import the ActiveDirectory PowerShell module." -ErrorAction Stop;
- }
-
- if ([System.String]::IsNullOrEmpty($Domain)) {
- $domainInformation = Get-ADDomain
- $Domain = $domainInformation.DnsRoot
- } else {
- $domainInformation = Get-ADDomain -Server $Domain
- }
-
- $domainGuid = $domainInformation.ObjectGUID.ToString()
- $domainName = $domainInformation.DnsRoot
- $domainSid = $domainInformation.DomainSID.Value
- $forestName = $domainInformation.Forest
- $netBiosDomainName = $domainInformation.DnsRoot
- $azureStorageSid = $domainSid + "-123454321";
-
- Write-Verbose "Setting AD properties on $StorageAccountName in $ResourceGroupName : `
+ $tenantId = "<MyTenantId>"
+ $subscriptionId = "<MySubscriptionId>"
+ $resourceGroupName = "<MyResourceGroup>"
+ $storageAccountName = "<MyStorageAccount>"
+ ```
+
+3. Set the storage account's ActiveDirectoryProperties to support the Shell experience. Because Azure AD doesn't currently support configuring ACLs in Shell, it must instead rely on Active Directory. To configure Shell, run the following cmdlets in PowerShell:
+
+ ```powershell
+ Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId
+
+ $AdModule = Get-Module ActiveDirectory;
+ if ($null -eq $AdModule) {
+ Write-Error "Please install and/or import the ActiveDirectory PowerShell module." -ErrorAction Stop;
+ }
+ $domainInformation = Get-ADDomain
+ $Domain = $domainInformation.DnsRoot
+ $domainGuid = $domainInformation.ObjectGUID.ToString()
+ $domainName = $domainInformation.DnsRoot
+ $domainSid = $domainInformation.DomainSID.Value
+ $forestName = $domainInformation.Forest
+ $netBiosDomainName = $domainInformation.DnsRoot
+ $azureStorageSid = $domainSid + "-123454321";
+
+ Write-Verbose "Setting AD properties on $storageAccountName in $resourceGroupName : `
        EnableActiveDirectoryDomainServicesForFile=$true, ActiveDirectoryDomainName=$domainName, `
        ActiveDirectoryNetBiosDomainName=$netBiosDomainName, ActiveDirectoryForestName=$($domainInformation.Forest) `
        ActiveDirectoryDomainGuid=$domainGuid, ActiveDirectoryDomainSid=$domainSid, `
        ActiveDirectoryAzureStorageSid=$azureStorageSid"
- $Subscription = $AzContext.Subscription.Id;
- $ApiVersion = '2021-04-01'
-
- $Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version={3}' `
- -f $Subscription, $ResourceGroupName, $StorageAccountName, $ApiVersion);
+ $Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01' -f $subscriptionId, $resourceGroupName, $storageAccountName);
- $json=
- @{
- properties=
- @{azureFilesIdentityBasedAuthentication=
- @{directoryServiceOptions="AADKERB";
- activeDirectoryProperties=@{domainName="$($domainName)";
- netBiosDomainName="$($netBiosDomainName)";
- forestName="$($forestName)";
- domainGuid="$($domainGuid)";
- domainSid="$($domainSid)";
- azureStorageSid="$($azureStorageSid)"}
- }
- }
- };
-
- $json = $json | ConvertTo-Json -Depth 99
-
- $token = $(Get-AzAccessToken).Token
- $headers = @{ Authorization="Bearer $token" }
-
- try {
- Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method PATCH -Headers $Headers -Body $json
- } catch {
- Write-Host $_.Exception.ToString()
- Write-Host "Error setting Storage Account AD properties. StatusCode:" $_.Exception.Response.StatusCode.value__
- Write-Host "Error setting Storage Account AD properties. StatusDescription:" $_.Exception.Response.StatusDescription
- Write-Error -Message "Caught exception setting Storage Account AD properties: $_" -ErrorAction Stop
- }
- }
- ```
+ $json=
+ @{
+ properties=
+ @{azureFilesIdentityBasedAuthentication=
+ @{directoryServiceOptions="AADKERB";
+ activeDirectoryProperties=@{domainName="$($domainName)";
+ netBiosDomainName="$($netBiosDomainName)";
+ forestName="$($forestName)";
+ domainGuid="$($domainGuid)";
+ domainSid="$($domainSid)";
+ azureStorageSid="$($azureStorageSid)"}
+ }
+ }
+ };
-3. Call the function by running the following PowerShell cmdlets:
+ $json = $json | ConvertTo-Json -Depth 99
- ```powershell
- Connect-AzAccount
- Set-StorageAccountAadKerberosADProperties -ResourceGroupName $resourceGroupName -StorageAccountName $storageAccountName
+ $token = $(Get-AzAccessToken).Token
+ $headers = @{ Authorization="Bearer $token" }
+
+ try {
+ Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method PATCH -Headers $Headers -Body $json
+ } catch {
+ Write-Host $_.Exception.ToString()
+ Write-Host "Error setting Storage Account AD properties. StatusCode:" $_.Exception.Response.StatusCode.value__
+ Write-Host "Error setting Storage Account AD properties. StatusDescription:" $_.Exception.Response.StatusDescription
+ Write-Error -Message "Caught exception setting Storage Account AD properties: $_" -ErrorAction Stop
+ }
    ```

Enable Azure AD Kerberos functionality by configuring the group policy or registry value in the following list:
Next, make sure you can retrieve a Kerberos Ticket Granting Ticket (TGT) by foll
    ```
    klist purge
    klist get krbtgt
+ klist
    ```

5. Confirm you have a Kerberos TGT by looking for an item with a server property of `krbtgt/KERBEROS.MICROSOFTONLINE.COM @ KERBEROS.MICROSOFTONLINE.COM`.
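If you want to script this confirmation, the check reduces to looking for that server property in the `klist` output. A minimal sketch (the sample output below is hypothetical, trimmed to the relevant fields):

```python
def has_cloud_tgt(klist_output):
    # The Azure AD Kerberos TGT is identified by its server property.
    return "krbtgt/KERBEROS.MICROSOFTONLINE.COM" in klist_output

# Hypothetical excerpt of `klist` output after `klist get krbtgt`:
sample = (
    "#0>  Client: user @ CONTOSO.COM\n"
    "     Server: krbtgt/KERBEROS.MICROSOFTONLINE.COM @ KERBEROS.MICROSOFTONLINE.COM\n"
)
print(has_cloud_tgt(sample))  # True
```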
Next, make sure you can retrieve a Kerberos Ticket Granting Ticket (TGT) by foll
    net use <DriveLetter>: \\<storage-account-name>.file.core.windows.net\<file-share-name>
    ```
-Finally, follow the instructions in [Configure directory and file level permissions](../storage/files/storage-files-identity-ad-ds-configure-permissions.md) to finish configuring your permissions with icacls or Windows Explorer.
+Finally, follow the instructions in [Configure directory and file level permissions](../storage/files/storage-files-identity-ad-ds-configure-permissions.md) to finish configuring your permissions with icacls or Windows Explorer. Learn more about the recommended list of permissions for FSLogix profiles at [Configure the storage permissions for profile containers](/fslogix/fslogix-storage-config-ht).
## Configure the session hosts
Finally, test the profile to make sure that it works:
6. If everything's set up correctly, you should see a directory with a name that's formatted like this: `<user SID>_<username>`.
+## Disable Azure AD authentication on your Azure Storage account
+
+If you need to disable Azure AD authentication on your storage account:
+
+- Set the required variables for your tenant, subscription, storage account name and resource group name by running the following PowerShell cmdlets, replacing the values with the ones relevant to your environment.
+
+ ```powershell
+ $tenantId = "<MyTenantId>"
+ $subscriptionId = "<MySubscriptionId>"
+ $resourceGroupName = "<MyResourceGroup>"
+ $storageAccountName = "<MyStorageAccount>"
+ ```
+
+- Run the following cmdlets in PowerShell to disable Azure AD authentication on your storage account:
+
+ ```powershell
+ Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId
+ $Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01' -f $subscriptionId, $resourceGroupName, $storageAccountName);
+
+ $json = @{properties=@{azureFilesIdentityBasedAuthentication=@{directoryServiceOptions="None"}}};
+ $json = $json | ConvertTo-Json -Depth 99
+
+ $token = $(Get-AzAccessToken).Token
+ $headers = @{ Authorization="Bearer $token" }
+
+ try {
+ Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method PATCH -Headers $Headers -Body $json;
+ } catch {
+ Write-Host $_.Exception.ToString()
+ Write-Host "Error setting Storage Account directoryServiceOptions=None. StatusCode:" $_.Exception.Response.StatusCode.value__
+ Write-Host "Error setting Storage Account directoryServiceOptions=None. StatusDescription:" $_.Exception.Response.StatusDescription
+ Write-Error -Message "Caught exception setting Storage Account directoryServiceOptions=None: $_" -ErrorAction Stop
+ }
+ ```
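Both the enable and disable snippets above follow the same pattern: PATCH the storage account's `azureFilesIdentityBasedAuthentication` property against the Azure Resource Manager endpoint. As an illustrative, language-neutral sketch of the request they construct (authentication via `Get-AzAccessToken` is omitted):

```python
import json

def storage_auth_patch(subscription_id, resource_group, account_name, options):
    # Mirrors the URI and body the PowerShell snippets PATCH against ARM.
    uri = (
        "https://management.azure.com/subscriptions/{0}/resourceGroups/{1}"
        "/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01"
    ).format(subscription_id, resource_group, account_name)
    body = {
        "properties": {
            "azureFilesIdentityBasedAuthentication": {
                # "AADKERB" enables Azure AD authentication; "None" disables it.
                "directoryServiceOptions": options
            }
        }
    }
    return uri, json.dumps(body)

uri, body = storage_auth_patch(
    "<MySubscriptionId>", "<MyResourceGroup>", "<MyStorageAccount>", "None")
```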
+
## Next steps

- To troubleshoot FSLogix, see [this troubleshooting guide](/fslogix/fslogix-trouble-shooting-ht).
virtual-desktop Deploy Azure Ad Joined Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/deploy-azure-ad-joined-vm.md
Previously updated : 12/01/2021
Last updated : 01/24/2022
You can enable [multifactor authentication](set-up-mfa.md) for Azure AD-joined V
You can use FSLogix profile containers with Azure AD-joined VMs when you store them on Azure Files. For more information, see [Create a profile container with Azure Files and Azure AD](create-profile-container-azure-ad.md).
+## Accessing on-premises resources
+
+While you don't need an Active Directory to deploy or access your Azure AD-joined VMs, an Active Directory and line-of-sight to it are needed to access on-premises resources from those VMs. To learn more about accessing on-premises resources, see [How SSO to on-premises resources works on Azure AD joined devices](../active-directory/devices/azuread-join-sso.md).
+
## Next steps

Now that you've deployed some Azure AD joined VMs, you can sign in to a supported Azure Virtual Desktop client to test it as part of a user session. If you want to learn how to connect to a session, check out these articles:
virtual-machines Av2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/av2-series.md
The Av2-series VMs can be deployed on a variety of hardware types and processors
[VM Generation Support](generation-2.md): Generation 1 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Not Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br>

| Size | vCore | Memory: GiB | Temp storage (SSD) GiB | Max temp storage throughput: IOPS/Read MBps/Write MBps | Max data disks/throughput: IOPS | Max NICs | Expected network bandwidth (Mbps)
virtual-machines Dasv5 Dadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dasv5-dadsv5-series.md
Dasv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
Dadsv5-series virtual machines support Standard SSD, Standard HDD, and Premium S
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Dav4 Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dav4-dasv4-series.md
Dav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
[VM Generation Support](generation-2.md): Generation 1<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / Read MBps / Write MBps | Max NICs | Expected network bandwidth (Mbps) |
Dasv4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max burst cached and temp storage throughput: IOPS / MBps<sup>1</sup> | Max uncached disk throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) |
virtual-machines Dcasv5 Dcadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dcasv5-dcadsv5-series.md
This series supports Standard SSD, Standard HDD, and Premium SSD disk types. Bil
- [Memory Preserving Updates](maintenance-and-updates.md)
- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)
- [Ephemeral OS Disks](ephemeral-os-disks.md)
+- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization)
### DCasv5-series products
virtual-machines Dcv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dcv2-series.md
Example confidential use cases include: databases, blockchain, multiparty data a
[VM Generation Support](generation-2.md): Generation 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Not Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br>

## Technical specifications
virtual-machines Dcv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dcv3-series.md
Base All-Core Frequency: 2.8 GHz<br>
[VM Generation Support](generation-2.md): Generation 2<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> [Dedicated Host](dedicated-hosts.md): Coming Soon<br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
## DCsv3-series Technical specifications
virtual-machines Ddv4 Ddsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ddv4-ddsv4-series.md
The new Ddv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB) a
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Expected network bandwidth (Mbps) |
The new Ddsv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB)
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs|Expected network bandwidth (Mbps) |
virtual-machines Ddv5 Ddsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ddv5-ddsv5-series.md
Ddv5-series virtual machines support Standard SSD and Standard HDD disk types. T
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network bandwidth (Mbps) |
Ddsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>
virtual-machines Dv2 Dsv2 Series Memory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv2-dsv2-series-memory.md
Dv2-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice La
[VM Generation Support](generation-2.md): Generation 1<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br>

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max temp storage throughput: IOPS/Read MBps/Write MBps | Max data disks/throughput: IOPS | Max NICs| Expected network bandwidth (Mbps) |
DSv2-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice L
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br>

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs| Expected network bandwidth (Mbps) |
virtual-machines Dv2 Dsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv2-dsv2-series.md
Dv2-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice La
[VM Generation Support](generation-2.md): Generation 1<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max temp storage throughput: IOPS/Read MBps/Write MBps | Max data disks | Throughput: IOPS | Max NICs | Expected network bandwidth (Mbps) |
DSv2-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice L
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs|Expected network bandwidth (Mbps) |
virtual-machines Dv3 Dsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv3-dsv3-series.md
Dv3-series VMs feature Intel® Hyper-Threading Technology.
[VM Generation Support](generation-2.md): Generation 1<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/Read MBps/Write MBps | Max NICs/ Expected network bandwidth |
Dsv3-series VMs feature Intel® Hyper-Threading Technology.
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max burst cached and temp storage throughput: IOPS/MBps<sup>2</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs/ Expected network bandwidth (Mbps) |
virtual-machines Dv4 Dsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv4-dsv4-series.md
Remote Data disk storage is billed separately from virtual machines. To use prem
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max NICs|Expected network bandwidth (Mbps) |
Dsv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice L
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs|Expected network bandwidth (Mbps) |
virtual-machines Dv5 Dsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv5-dsv5-series.md
Dv5-series virtual machines do not have any temporary storage thus lowering the
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max NICs|Max network bandwidth (Mbps) |
Dsv5-series virtual machines do not have any temporary storage thus lowering the
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>3</sup> | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Easv5 Eadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/easv5-eadsv5-series.md
Easv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
Eadsv5-series virtual machines support Standard SSD, Standard HDD, and Premium S
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Eav4 Easv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/eav4-easv4-series.md
The Eav4-series and Easv4-series utilize AMD's 2.35Ghz EPYC<sup>TM</sup> 7452 pr
[VM Generation Support](generation-2.md): Generations 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> Eav4-series sizes are based on the 2.35 GHz AMD EPYC<sup>TM</sup> 7452 processor that can achieve a boosted maximum frequency of 3.35 GHz. The Eav4-series sizes are ideal for memory-intensive enterprise applications. Data disk storage is billed separately from virtual machines. To use premium SSD, use the Easv4-series sizes. The pricing and billing meters for Easv4 sizes are the same as the Eav3-series.
Eav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
[VM Generation Support](generation-2.md): Generations 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> Easv4-series sizes are based on the 2.35 GHz AMD EPYC<sup>TM</sup> 7452 processor that can achieve a boosted maximum frequency of 3.35 GHz and use premium SSD. The Easv4-series sizes are ideal for memory-intensive enterprise applications.
virtual-machines Ecasv5 Ecadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ecasv5-ecadsv5-series.md
This series supports Standard SSD, Standard HDD, and Premium SSD disk types. Bil
### ECasv5-series feature support
-*Supported* features in DCasv5-series VMs:
+*Supported* features in ECasv5-series VMs:
- [Premium Storage](premium-storage-performance.md) - [Premium Storage caching](premium-storage-performance.md) - [VM Generation 2](generation-2.md)
-*Unsupported* features in DCasv5-series VMs:
+*Unsupported* features in ECasv5-series VMs:
- [Live Migration](maintenance-and-updates.md) - [Memory Preserving Updates](maintenance-and-updates.md) - [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md) - [Ephemeral OS Disks](ephemeral-os-disks.md)
+- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization)
### ECasv5-series products
virtual-machines Edv4 Edsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/edv4-edsv4-series.md
Edv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice L
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<sup>1</sup> <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network bandwidth (Mbps) |
Edsv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs|Max network bandwidth (Mbps) |
virtual-machines Ev3 Esv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ev3-esv3-series.md
Ev3-series VMs feature Intel® Hyper-Threading Technology.
[VM Generation Support](generation-2.md): Generation 1<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / Read MBps / Write MBps | Max NICs / Network bandwidth |
Esv3-series VMs feature Intel® Hyper-Threading Technology.
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Burst cached and temp storage throughput: IOPS/MBps<sup>3</sup> | Max uncached disk throughput: IOPS/MBps | Burst uncached disk throughput: IOPS/MBps<sup>3</sup>| Max NICs/ Expected network bandwidth (Mbps) |
virtual-machines Ev4 Esv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ev4-esv4-series.md
Remote Data disk storage is billed separately from virtual machines. To use prem
[VM Generation Support](generation-2.md): Generation 1<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max NICs|Expected network bandwidth (Mbps) |
Esv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice L
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>
virtual-machines Ev5 Esv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ev5-esv5-series.md
Ev5-series supports Standard SSD and Standard HDD disk types. To use Premium SSD
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max NICs|Max network bandwidth (Mbps) |
Esv5-series supports Standard SSD, Standard HDD, and Premium SSD disk types.
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>5</sup> | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Fsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/fsv2-series.md
Fsv2-series VMs feature Intel® Hyper-Threading Technology.
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPUs | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> |Max NICs|Expected network bandwidth (Mbps) |
virtual-machines Fx Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/fx-series.md
FX-series VMs feature [Intel® Turbo Boost Technology 2.0](https://www.intel.com
[VM Generation Support](generation-2.md): Generation 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPUs | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max NICs|Expected network bandwidth (Mbps) |
virtual-machines Create Upload Centos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/create-upload-centos.md
Preparing a CentOS 7 virtual machine for Azure is very similar to CentOS 6, howe
* Use a cloud-init directive baked into the image that will do this every time the VM is created:
```console
- echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
+ echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
#cloud-config
# Generated by Azure cloud image build
Preparing a CentOS 7 virtual machine for Azure is very similar to CentOS 6, howe
filesystem: swap
mounts:
- ["ephemeral0.1", "/mnt"]
- - ["ephemeral0.2", "none", "swap", "sw", "0", "0"]
+ - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
EOF
```
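Assembled from the fragments above, here is a sketch of what the generated `/etc/cloud/cloud.cfg.d/00-azure-swap.cfg` could contain. The `disk_setup` partition layout values are illustrative placeholders, not taken from this article:

```yaml
#cloud-config
# Sketch only: the disk_setup layout values below are illustrative.
disk_setup:
  ephemeral0:
    table_type: mbr
    layout: [66, [33, 82]]   # ~66% data partition, ~33% swap (partition type 82)
    overwrite: True
fs_setup:
  - device: ephemeral0.1
    filesystem: ext4
  - device: ephemeral0.2
    filesystem: swap
mounts:
  - ["ephemeral0.1", "/mnt"]
  - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
```

The `nofail` and systemd mount options match the diff above, so a missing or slow ephemeral disk doesn't block boot.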
Preparing a CentOS 7 virtual machine for Azure is very similar to CentOS 6, howe
## Next steps
-You're now ready to use your CentOS Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
+You're now ready to use your CentOS Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/image-builder-json.md
This is the basic template format:
"osDiskSizeGB": <sizeInGB>,
"vnetConfig": {
    "subnetId": "/subscriptions/<subscriptionID>/resourceGroups/<vnetRgName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/<subnetName>"
- }
+ },
+ "userAssignedIdentities": [
+ "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName1>",
+ "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName2>",
+ "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName3>",
+ ...
+ ]
},
"source": {},
"customize": {},
The location is the region where the custom image will be created. The following
```
### Data Residency
-The Azure VM Image Builder service doesn't store/process customer data outside regions that have strict single region data residency requirements when a customer requests a build in that region. In the event of a service outage for regions that have data residency requirements, you will need to create templates in a different region and geography.
+The Azure VM Image Builder service doesn't store or process customer data outside regions that have strict single region data residency requirements when a customer requests a build in that region. In the event of a service outage for regions that have data residency requirements, you will need to create templates in a different region and geography.
### Zone Redundancy
-Distribution supports zone redundancy, VHDs are distributed to a Zone Redundant Storage account by default and the Azure Compute Gallery (formerly known as Shared Image Gallery) version will support a [ZRS storage type](../disks-redundancy.md#zone-redundant-storage-for-managed-disks) if specified.
+Distribution supports zone redundancy: VHDs are distributed to a Zone Redundant Storage (ZRS) account by default, and the Azure Compute Gallery (formerly known as Shared Image Gallery) version will support a [ZRS storage type](../disks-redundancy.md#zone-redundant-storage-for-managed-disks) if specified.
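As a sketch, a gallery distribute target could request the ZRS storage type like this. The `storageAccountType` field and the placeholder resource IDs are assumptions to verify against your template's API version:

```json
"distribute": [
    {
        "type": "SharedImage",
        "galleryImageId": "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<galleryName>/images/<imageDefName>",
        "runOutputName": "<runOutputName>",
        "replicationRegions": ["<region1>"],
        "storageAccountType": "Standard_ZRS"
    }
]
```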
## vmProfile
## buildVM
-By default Image Builder will use a "Standard_D1_v2" build VM for Gen1 images and a "Standard_D2ds_v4" build VM for Gen2 images, this is built from the image you specify in the `source`. You can override this and may wish to do this for these reasons:
+Image Builder will use a default SKU size of "Standard_D1_v2" for Gen1 images and "Standard_D2ds_v4" for Gen2 images. The generation is defined by the image you specify in the `source`. You can override this, and may want to do so for these reasons:
1. Performing customizations that require increased memory, CPU and handling large files (GBs).
2. Running Windows builds, you should use "Standard_D2_v2" or equivalent VM size.
3. Require [VM isolation](../isolation.md).
-4. Customize an Image that require specific hardware, e.g. for a GPU VM, you need a GPU VM size.
+4. Customize an image that requires specific hardware. For example, for a GPU VM, you need a GPU VM size.
5. Require end-to-end encryption at rest of the build VM. You need to specify a supported build [VM size](../azure-vms-no-temp-disk.yml) that doesn't use local temporary disks. This is optional.
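Overriding the build VM can be sketched in the template's `vmProfile`; the size shown is just one plausible choice:

```json
"properties": {
    "vmProfile": {
        "vmSize": "Standard_D2_v2",
        "osDiskSizeGB": 30
    }
}
```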
By default, Image Builder will not change the size of the image, it will use the
```
## vnetConfig
-If you do not specify any VNET properties, then Image Builder will create its own VNET, Public IP, and NSG. The Public IP is used for the service to communicate with the build VM, however if you do not want a Public IP or want Image Builder to have access to your existing VNET resources, such as configuration servers (DSC, Chef, Puppet, Ansible), file shares etc., then you can specify a VNET. For more information, review the [networking documentation](image-builder-networking.md), this is optional.
+If you don't specify any VNET properties, Image Builder will create its own VNET, Public IP, and network security group (NSG). The Public IP is used for the service to communicate with the build VM. However, if you don't want a Public IP, or want Image Builder to have access to your existing VNET resources such as configuration servers (DSC, Chef, Puppet, Ansible) or file shares, you can specify a VNET. This is optional; for more information, review the [networking documentation](image-builder-networking.md).
```json
"vnetConfig": {
These are key/value pairs you can specify for the image that's generated.
## Identity
+There are two ways to add user-assigned identities, as explained below.
+
+### User Assigned Identity for Azure Image Builder image template resource
Required - For Image Builder to have permissions to read/write images and read in scripts from Azure Storage, you must create an Azure user-assigned identity that has permissions to the individual resources. For details on how Image Builder permissions work, and relevant steps, please review the [documentation](image-builder-user-assigned-identity.md).
Required - For Image Builder to have permissions to read/write images, read in s
```
-Image Builder support for a User-Assigned Identity:
+The Image Builder service User Assigned Identity:
* Supports a single identity only
-* Does not support custom domain names
+* Doesn't support custom domain names
To learn more, see [What is managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md). For more information on deploying this feature, see [Configure managed identities for Azure resources on an Azure VM using Azure CLI](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity).
+### User Assigned Identity for the Image Builder Build VM
+
+This field is only available in API versions 2021-10-01 and newer.
+
+Optional - The Image Builder Build VM, which is created by the Image Builder service in your subscription, is used to build and customize the image. For the Image Builder Build VM to have permissions to authenticate with other services, like Azure Key Vault in your subscription, you must create one or more Azure user-assigned identities that have permissions to the individual resources. Azure Image Builder can then associate these user-assigned identities with the Build VM. Customizer scripts running inside the Build VM can then fetch tokens for these identities and interact with other Azure resources as needed. Be aware that the user-assigned identity for Azure Image Builder must have the "Managed Identity Operator" role assignment on all the user-assigned identities, so that Azure Image Builder can associate them with the build VM.
+
+> [!NOTE]
+> Please be aware that multiple identities can be specified for the Image Builder Build VM, including the identity you created for the [image template resource](#user-assigned-identity-for-azure-image-builder-image-template-resource). By default, the identity you created for the image template resource will not automatically be added to the build VM.
+
+```json
+ "properties": {
+ "vmProfile": {
+ "userAssignedIdentities": [
+ "/subscriptions/<subscriptionID>/resourceGroups/<identityRgName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName>"
+ ]
+ },
+ },
+```
+
+The Image Builder Build VM User Assigned Identity:
+* Supports a list of one or more user assigned managed identities to be configured on the VM
+* Supports cross subscription scenarios (identity created in one subscription while the image template is created in another subscription under the same tenant)
+* Doesn't support cross tenant scenarios (identity created in one tenant while the image template is created in another tenant)
+
+To learn more, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](/active-directory/managed-identities-azure-resources/how-to-use-vm-token) and [How to use managed identities for Azure resources on an Azure VM](/active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in).
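As an illustration of what a customizer script inside the Build VM does with such an identity, the sketch below builds the Azure Instance Metadata Service (IMDS) token request URL. The helper name and the placeholder client ID are this example's own, not from the article:

```python
# Sketch: build the IMDS request URL a Build VM customizer script could use
# to obtain a token for a user-assigned identity. The endpoint and query
# parameters are the standard Azure IMDS ones; the client ID below is a
# hypothetical placeholder.
from urllib.parse import urlencode

IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def imds_token_url(resource, client_id=None, api_version="2018-02-01"):
    """Return the IMDS URL for requesting a managed-identity token.

    The actual HTTP GET must also send the header 'Metadata: true'.
    """
    params = {"api-version": api_version, "resource": resource}
    if client_id:
        # Selects a specific user-assigned identity when several are attached.
        params["client_id"] = client_id
    return IMDS_TOKEN_ENDPOINT + "?" + urlencode(params)

print(imds_token_url("https://vault.azure.net",
                     client_id="00000000-0000-0000-0000-000000000000"))
```

When no `client_id` is passed, IMDS falls back to the system-assigned identity, which is why the parameter is optional here.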
+ ## Properties: source
-The `source` section contains information about the source image that will be used by Image Builder. Image Builder currently only natively supports creating Hyper-V generation (Gen1) 1 images to the Azure Compute Gallery (SIG) or Managed Image. If you want to create Gen2 images, then you need to use a source Gen2 image, and distribute to VHD. After, you will then need to create a Managed Image from the VHD, and inject it into the SIG as a Gen2 image.
+The `source` section contains information about the source image that will be used by Image Builder. Image Builder currently only natively supports creating Hyper-V generation 1 (Gen1) images to the Azure Compute Gallery (SIG) or managed image. If you want to create Gen2 images, then you need to use a source Gen2 image, and distribute to VHD. Afterward, you will need to create a managed image from the VHD, and inject it into the SIG as a Gen2 image.
-The API requires a 'SourceType' that defines the source for the image build, currently there are three types:
+The API requires a `SourceType` that defines the source for the image build, currently there are three types:
- PlatformImage - indicates the source image is a Marketplace image.
- ManagedImage - use this when starting from a regular managed image.
-- SharedImageVersion - this is used when you are using an image version in an Azure Compute Gallery as the source.
+- SharedImageVersion - this is used when you're using an image version in an Azure Compute Gallery as the source.
> [!NOTE]
The properties here are the same that are used to create VMs, using AZ CLI, run
az vm image list -l westus -f UbuntuServer -p Canonical --output table --all
```
-You can use 'latest' in the version, the version is evaluated when the image build takes place, not when the template is submitted. If you use this functionality with the Azure Compute Gallery destination, you can avoid resubmitting the template, and rerun the image build at intervals, so your images are recreated from the most recent images.
+You can use `latest` in the version; the version is evaluated when the image build takes place, not when the template is submitted. If you use this functionality with the Azure Compute Gallery destination, you can avoid resubmitting the template, and rerun the image build at intervals, so your images are recreated from the most recent images.
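For example, a `PlatformImage` source pinned to `latest` might look like this; the publisher/offer/sku values are illustrative:

```json
"source": {
    "type": "PlatformImage",
    "publisher": "Canonical",
    "offer": "UbuntuServer",
    "sku": "18.04-LTS",
    "version": "latest"
}
```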
#### Support for Market Place Plan Information
You can also specify plan information, for example:
By default, the Image Builder will run for 240 minutes. After that, it will time
[ERROR] complete: 'context deadline exceeded' ```
-If you do not specify a buildTimeoutInMinutes value, or set it to 0, this will use the default value. You can increase or decrease the value, up to the maximum of 960mins (16hrs). For Windows, we do not recommend setting this below 60 minutes. If you find you are hitting the timeout, review the [logs](image-builder-troubleshoot.md#customization-log), to see if the customization step is waiting on something like user input.
+If you don't specify a buildTimeoutInMinutes value, or set it to 0, the default value is used. You can increase or decrease the value, up to a maximum of 960 minutes (16 hours). For Windows, we don't recommend setting this below 60 minutes. If you find you're hitting the timeout, review the [logs](image-builder-troubleshoot.md#customization-log) to see if the customization step is waiting on something like user input.
-If you find you need more time for customizations to complete, set this to what you think you need, with a little overhead. But, do not set it too high because you might have to wait for it to timeout before seeing an error.
+If you find you need more time for customizations to complete, set this to what you think you need, with a little overhead. But don't set it too high, because you might have to wait for it to time out before seeing an error.
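The timeout itself is a single template property, so a sketch of, say, a 100-minute build window is:

```json
"properties": {
    "buildTimeoutInMinutes": 100
}
```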
> [!NOTE]
> If you don't set the value to 0, the minimum supported value is 6 minutes. Using values 1 through 5 will fail.
## Properties: customize
-Image Builder supports multiple ‘customizers’. Customizers are functions that are used to customize your image, such as running scripts, or rebooting servers.
+Image Builder supports multiple `customizers`. Customizers are functions that are used to customize your image, such as running scripts, or rebooting servers.
When using `customize`: - You can use multiple customizers, but they must have a unique `name`. - Customizers execute in the order specified in the template. - If one customizer fails, then the whole customization component will fail and report back an error. - It is strongly advised you test the script thoroughly before using it in a template. Debugging the script on your own VM will be easier.
-- Do not put sensitive data in the scripts.
-- The script locations need to be publicly accessible, unless you are using [MSI](./image-builder-user-assigned-identity.md).
+- Don't put sensitive data in the scripts.
+- The script locations need to be publicly accessible, unless you're using [MSI](./image-builder-user-assigned-identity.md).
```json "customize": [
Customize properties:
- **restartTimeout** - Restart timeout specified as a string of magnitude and unit. For example, `5m` (5 minutes) or `2h` (2 hours). The default is `5m`.
### Linux restart
-There is no Linux Restart customizer, however, if you are installing drivers, or components that require a restart, you can install them and invoke a restart using the Shell customizer, there is a 20min SSH timeout to the build VM.
+There is no Linux restart customizer. If you're installing drivers or components that require a restart, you can install them and invoke a restart using the Shell customizer. There is a 20-minute SSH timeout to the build VM.
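That install-then-reboot pattern can be sketched with a Shell customizer; the package name is a placeholder:

```json
{
    "type": "Shell",
    "name": "installDriverAndReboot",
    "inline": [
        "sudo yum install -y <driver-package>",
        "sudo reboot"
    ]
}
```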
### PowerShell customizer
The shell customizer supports running PowerShell scripts and inline commands; the scripts must be publicly accessible for the IB to access them.
OS support: Windows
Customizer properties:
- **type** – WindowsUpdate.
-- **searchCriteria** - Optional, defines which type of updates are installed (Recommended, Important etc.), BrowseOnly=0 and IsInstalled=0 (Recommended) is the default.
+- **searchCriteria** - Optional, defines which type of updates are installed (like Recommended or Important). BrowseOnly=0 and IsInstalled=0 (Recommended) is the default.
- **filters** – Optional, allows you to specify a filter to include or exclude updates.
- **updateLimit** – Optional, defines how many updates can be installed; default 1000.
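Combining these properties, a Windows Update customizer entry might look like the following sketch. The filter strings and update limit are illustrative values based on the documented property names, not taken from the source article:

```json
{
    "type": "WindowsUpdate",
    "searchCriteria": "IsInstalled=0",
    "filters": [
        "exclude:$_.Title -like '*Preview*'",
        "include:$true"
    ],
    "updateLimit": 40
}
```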
Customizer properties:
> The Windows Update customizer can fail if there are any outstanding Windows restarts or application installations still running. Typically, you may see this error in the customization.log: `System.Runtime.InteropServices.COMException (0x80240016): Exception from HRESULT: 0x80240016`. We strongly advise you consider adding a Windows Restart, and/or allowing applications enough time to complete their installations using [sleep](/powershell/module/microsoft.powershell.utility/start-sleep) or wait commands in the inline commands or scripts, before running Windows Update.

### Generalize
-By default, Azure Image Builder will also run 'deprovision' code at the end of each image customization phase, to 'generalize' the image. Generalizing is a process where the image is set up so it can be reused to create multiple VMs. For Windows VMs, Azure Image Builder uses Sysprep. For Linux, Azure Image Builder runs 'waagent -deprovision'.
+By default, Azure Image Builder will also run `deprovision` code at the end of each image customization phase, to generalize the image. Generalizing is a process where the image is set up so it can be reused to create multiple VMs. For Windows VMs, Azure Image Builder uses Sysprep. For Linux, Azure Image Builder runs `waagent -deprovision`.
The commands Image Builder uses to generalize may not be suitable for every situation, so Azure Image Builder allows you to customize this command, if needed.
-If you are migrating existing customization, and you are using different Sysprep/waagent commands, you can use the Image Builder generic commands, and if the VM creation fails, use your own Sysprep or waagent commands.
+If you're migrating existing customization, and you're using different Sysprep/waagent commands, you can use the Image Builder generic commands, and if the VM creation fails, use your own Sysprep or waagent commands.
-If Azure Image Builder creates a Windows custom image successfully, and you create a VM from it, then find that the VM creation fails or does not complete successfully, you will need to review the Windows Server Sysprep documentation or raise a support request with the Windows Server Sysprep Customer Services Support team, who can troubleshoot and advise on the correct Sysprep usage.
+If Azure Image Builder creates a Windows custom image successfully, but a VM you create from it fails or doesn't complete successfully, you will need to review the Windows Server Sysprep documentation or raise a support request with the Windows Server Sysprep Customer Services Support team, who can troubleshoot and advise on the correct Sysprep usage.
#### Default Sysprep command
To override the commands, use the PowerShell or Shell script provisioners to cre
* Windows: c:\DeprovisioningScript.ps1 * Linux: /tmp/DeprovisioningScript.sh
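A minimal sketch of overriding the Linux deprovision command by writing your own script to the expected path. The `waagent` path and flags here are assumptions based on common Azure Linux agent usage, not commands confirmed by the source article:

```shell
# Sketch: create a custom Linux deprovisioning script at the path
# Image Builder looks for; it will run this instead of its default
# 'waagent -deprovision' command.
cat > /tmp/DeprovisioningScript.sh <<'EOF'
#!/bin/sh
# Assumed custom deprovision step: also remove the user account data.
/usr/sbin/waagent -force -deprovision+user
EOF
chmod +x /tmp/DeprovisioningScript.sh
```

In a template, you would typically write this file using a Shell customizer so it is in place before the generalize phase runs.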
-Image Builder will read these commands, these are written out to the AIB logs, 'customization.log'. See [troubleshooting](image-builder-troubleshoot.md#customization-log) on how to collect logs.
+Image Builder reads these commands and writes them out to the AIB logs in `customization.log`. See [troubleshooting](image-builder-troubleshoot.md#customization-log) on how to collect logs.
## Properties: distribute
Azure Image Builder supports three distribution targets:
You can distribute an image to both of the target types in the same configuration.

> [!NOTE]
-> The default AIB sysprep command does not include "/mode:vm", however this maybe required when create images that will have the HyperV role installed. If you need to add this command argument, you must override the sysprep command.
+> The default AIB Sysprep command doesn't include "/mode:vm"; however, this may be required when creating images that will have the Hyper-V role installed. If you need to add this command argument, you must override the Sysprep command.
Because you can have more than one target to distribute to, Image Builder maintains a state for every distribution target that can be accessed by querying the `runOutputName`. The `runOutputName` is an object you can query post distribution for information about that distribution. For example, you can query the location of the VHD, the regions where the image version was replicated to, or the SIG image version created. This is a property of every distribution target, and the `runOutputName` must be unique to each distribution target. Here is an example that queries an Azure Compute Gallery distribution:
Distribute properties:
- **imageId** – Resource ID of the destination image, expected format: /subscriptions/\<subscriptionId>/resourceGroups/\<destinationResourceGroupName>/providers/Microsoft.Compute/images/\<imageName>
- **location** - Location of the managed image.
- **runOutputName** – Unique name for identifying the distribution.
-- **artifactTags** - Optional user specified key value pair tags.
+- **artifactTags** - Optional user specified key\value tags.
> [!NOTE]
Distribute properties for galleries:
- **galleryImageId** – ID of the Azure Compute Gallery; this can be specified in two formats:
  * Automatic versioning - Image Builder will generate a monotonic version number for you; this is useful when you want to keep rebuilding images from the same template. The format is: `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/galleries/<sharedImageGalleryName>/images/<imageGalleryName>`.
  * Explicit versioning - You can pass in the version number you want Image Builder to use. The format is:
- `/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<sharedImageGalName>/images/<imageDefName>/versions/<version e.g. 1.1.1>`
+ `/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<sharedImageGalName>/images/<imageDefName>/versions/<version - for example: 1.1.1>`
- **runOutputName** – Unique name for identifying the distribution.
-- **artifactTags** - Optional user specified key value pair tags.
-- **replicationRegions** - Array of regions for replication. One of the regions must be the region where the Gallery is deployed. Adding regions will mean an increase of build time, as the build does not complete until the replication has completed.
+- **artifactTags** - Optional user specified key\value tags.
+- **replicationRegions** - Array of regions for replication. One of the regions must be the region where the Gallery is deployed. Adding regions will mean an increase of build time, as the build doesn't complete until the replication has completed.
- **excludeFromLatest** (optional) - Allows you to mark the image version you create so it's not used as the latest version in the gallery definition; the default is 'false'.
- **storageAccountType** (optional) - AIB supports specifying these types of storage for the image version that is to be created:
  * "Standard_LRS"
Distribute properties for galleries:
> [!NOTE]
-> If the image template and referenced `image definition` are not in the same location, you will see additional time to create images. Image Builder currently does not have a `location` parameter for the image version resource, we take it from its parent `image definition`. For example, if an image definition is in westus and you want the image version replicated to eastus, a blob is copied to westus, from this, an image version resource in westus is created, and then replicate to eastus. To avoid the additional replication time, ensure the `image definition` and image template are in the same location.
+> If the image template and referenced `image definition` are not in the same location, you will see additional time to create images. Image Builder currently doesn't have a `location` parameter for the image version resource; it's taken from the parent `image definition`. For example, if an image definition is in westus and you want the image version replicated to eastus, a blob is copied to westus, an image version resource in westus is created from it, and then replicated to eastus. To avoid the additional replication time, ensure the `image definition` and image template are in the same location.
### Distribute: VHD
Distribute VHD parameters:
- **runOutputName** – Unique name for identifying the distribution.
- **tags** - Optional user specified key value pair tags.
-Azure Image Builder does not allow the user to specify a storage account location, but you can query the status of the `runOutputs` to get the location.
+Azure Image Builder doesn't allow the user to specify a storage account location, but you can query the status of the `runOutputs` to get the location.
```azurecli-interactive
az resource show \
az resource invoke-action \
```

### Cancelling an Image Build
-If you are running an image build that you believe is incorrect, waiting for user input, or you feel will never complete successfully, then you can cancel the build.
+If you're running an image build that you believe is incorrect, waiting for user input, or you feel will never complete successfully, then you can cancel the build.
-The build can be canceled any time. If the distribution phase has started you can still cancel, but you will need to clean up any images that may not be completed. The cancel command does not wait for cancel to complete, please monitor `lastrunstatus.runstate` for canceling progress, using these status [commands](image-builder-troubleshoot.md#customization-log).
+The build can be canceled at any time. If the distribution phase has started you can still cancel, but you will need to clean up any images that may not be completed. The cancel command doesn't wait for the cancellation to complete; monitor `lastrunstatus.runstate` for canceling progress, using these status [commands](image-builder-troubleshoot.md#customization-log).
Examples of `cancel` commands:
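As one sketch of such a command, a cancel can be issued through the same generic resource-action pattern shown earlier for querying run outputs. The resource group and template names below are placeholders, and this assumes the image template resource exposes a `Cancel` action:

```azurecli-interactive
az resource invoke-action \
  --resource-group <resourceGroupName> \
  --resource-type Microsoft.VirtualMachineImages/imageTemplates \
  -n <imageTemplateName> \
  --action Cancel
```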
virtual-machines Suse Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/suse-create-upload-vhd.md
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
* Use a cloud-init directive baked into the image that will do this every time the VM is created: ```console
+ echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF #cloud-config # Generated by Azure cloud image build
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
filesystem: swap mounts: - ["ephemeral0.1", "/mnt"]
- - ["ephemeral0.2", "none", "swap", "sw", "0", "0"]
+ - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
EOF ```
virtual-machines Lsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/lsv2-series.md
The Lsv2-series features high throughput, low latency, directly mapped local NVM
Bursting: Supported<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> | Size | vCPU | Memory (GiB) | Temp disk<sup>1</sup> (GiB) | NVMe Disks<sup>2</sup> | NVMe Disk throughput<sup>3</sup> (Read IOPS/MBps) | Uncached data disk throughput (IOPs/MBps)<sup>4</sup> | Max burst uncached data disk throughput (IOPs/MBps)<sup>5</sup>| Max Data Disks | Max NICs | Expected network bandwidth (Mbps) |
virtual-machines M Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/m-series.md
M-series VMs feature Intel&reg; Hyper-Threading Technology.
[Write Accelerator](./