Updates from: 08/26/2022 01:10:52
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-react-spa-app.md
Previously updated : 07/07/2022 Last updated : 08/25/2022
# Configure authentication in a sample React single-page application by using Azure Active Directory B2C
-This article uses a sample React single-page application (SPA) to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your React apps.
+This article uses a sample React single-page application (SPA) to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your React apps. The React SPA also calls an API that's protected by Azure AD B2C itself.
## Overview
Now that you've obtained the SPA sample, update the code with your Azure AD B2C
| b2cPolicies | authorities | Replace `your-tenant-name` with your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, use `contoso.onmicrosoft.com`. Then, replace the policy name with the user flow or custom policy that you created in [step 1](#step-1-configure-your-user-flow). For example: `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`. |
| b2cPolicies | authorityDomain | Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example: `contoso.onmicrosoft.com`. |
| Configuration | clientId | The React application ID from [step 2.3](#23-register-the-react-app). |
-| protectedResources| endpoint| The URL of the web API: `http://localhost:5000/api/todolist`. |
+| protectedResources| endpoint| The URL of the web API: `http://localhost:5000/hello`. |
| protectedResources | scopes | The web API scopes that you created in [step 2.2](#22-configure-scopes). For example: `b2cScopes: ["https://<your-tenant-name>.onmicrosoft.com/tasks-api/tasks.read"]`. |

Your resulting *src/authConfig.js* code should look similar to the following sample:
export const msalConfig: Configuration = {
export const protectedResources = {
  todoListApi: {
-    endpoint: "http://localhost:5000/api/todolist",
+    endpoint: "http://localhost:5000/hello",
    scopes: ["https://your-tenant-name.onmicrosoft.com/tasks-api/tasks.read"],
  },
}
Your final configuration file should look like the following JSON:
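For orientation, here's a minimal sketch of such a configuration. The tenant name (`contoso`), the policy name (`B2C_1_susi`), and the client ID below are placeholder assumptions, not the sample's actual values:

```javascript
// src/authConfig.js — illustrative sketch only; substitute your own tenant values.
export const b2cPolicies = {
  names: {
    signUpSignIn: "B2C_1_susi", // placeholder user-flow name
  },
  authorities: {
    signUpSignIn: {
      authority:
        "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_susi",
    },
  },
  authorityDomain: "contoso.b2clogin.com",
};

export const msalConfig = {
  auth: {
    clientId: "11111111-2222-3333-4444-555555555555", // React app registration ID (placeholder)
    authority: b2cPolicies.authorities.signUpSignIn.authority,
    knownAuthorities: [b2cPolicies.authorityDomain],
    redirectUri: "/",
  },
};

export const protectedResources = {
  todoListApi: {
    endpoint: "http://localhost:5000/hello",
    scopes: ["https://contoso.onmicrosoft.com/tasks-api/tasks.read"],
  },
};
```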
## Step 5: Run the React SPA and web API
-You're now ready to test the React scoped access to the API. In this step, run both the web API and the sample React application on your local machine. Then, sign in to the React application, and select the **TodoList** button to start a request to the protected API.
+You're now ready to test the React scoped access to the API. In this step, run both the web API and the sample React application on your local machine. Then, sign in to the React application, and select the **HelloAPI** button to start a request to the protected API.
### Run the web API
You're now ready to test the React scoped access to the API. In this step, run b
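In outline, running both apps looks something like the following (the folder names are assumptions for illustration, not the sample's actual paths):

```console
cd <api-folder>
npm install && npm start   # web API listens on http://localhost:5000

cd <spa-folder>
npm install && npm start   # React SPA, typically served on http://localhost:3000
```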
![Screenshot that shows the React sample app with the login link.](./media/configure-authentication-sample-react-spa-app/sample-app-sign-in.png)

1. Choose **Sign in using Popup** or **Sign in using Redirect**.
-1. Complete the sign-up or sign in process. Upon successful sign in, you should see your profile.
-1. From the menu, select **Hello API**.
-1. Check out the result of the REST API call. The following screenshot shows the React sample REST API return value.
-
+1. Complete the sign-up or sign-in process. Upon successful sign-in, you should see a page with three buttons: **HelloAPI**, **Edit Profile**, and **Sign Out**.
![Screenshot that shows the React sample app with the user profile, and the call to the A P I.](./media/configure-authentication-sample-react-spa-app/sample-app-call-api.png)
+1. From the menu, select the **HelloAPI** button.
+1. Check out the result of the REST API call. The following screenshot shows the React sample REST API return value:
+
+ :::image type="content" source="./media/configure-authentication-sample-react-spa-app/sample-app-call-api-result.png" alt-text="Screenshot of the React sample app with the user profile, and the result of calling the web A P I.":::
+ ## Deploy your application
active-directory Active Directory Certificate Based Authentication Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/active-directory-certificate-based-authentication-android.md
Last updated 02/16/2022
active-directory Active Directory Certificate Based Authentication Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/active-directory-certificate-based-authentication-get-started.md
Last updated 05/04/2022
active-directory Active Directory Certificate Based Authentication Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/active-directory-certificate-based-authentication-ios.md
Last updated 05/04/2022
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Last updated 06/23/2022
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods.md
Last updated 08/17/2022
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Last updated 08/07/2022
active-directory Concept Authentication Operator Assistance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-operator-assistance.md
Last updated 04/27/2022
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
Last updated 08/17/2022
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
Last updated 06/23/2022
active-directory Concept Authentication Security Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-security-questions.md
Last updated 09/02/2020
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Last updated 06/15/2022
active-directory Concept Mfa Authprovider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-authprovider.md
Last updated 11/21/2019
active-directory Concept Mfa Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-data-residency.md
Last updated 08/01/2022
active-directory Concept Mfa Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-howitworks.md
Last updated 02/07/2022
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-licensing.md
Last updated 03/22/2022
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
Last updated 05/04/2022
active-directory Concept Password Ban Bad On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-on-premises.md
Last updated 08/22/2022
active-directory Concept Password Ban Bad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad.md
Last updated 07/13/2021
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Last updated 06/17/2022
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md
Title: Create a resilient access control management strategy - Azure AD
description: This document provides guidance on strategies an organization should adopt to provide resilience to reduce the risk of lockout during unforeseen disruptions tags: azuread
active-directory Concept Sspr Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-howitworks.md
Last updated 08/17/2022
active-directory Concept Sspr Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-licensing.md
Last updated 07/13/2021
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
Last updated 05/04/2022
active-directory Concept Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-writeback.md
Last updated 10/25/2021
active-directory Concepts Azure Multi Factor Authentication Prompts Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md
Last updated 11/12/2021
active-directory How To Authentication Find Coverage Gaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-find-coverage-gaps.md
Last updated 02/22/2022
active-directory How To Authentication Sms Supported Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-sms-supported-apps.md
Last updated 11/19/2021
active-directory How To Authentication Two Way Sms Unsupported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-two-way-sms-unsupported.md
Last updated 07/19/2021
active-directory How To Mfa Microsoft Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-microsoft-managed.md
Last updated 02/22/2022
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Last updated 06/23/2022
active-directory Howto Authentication Methods Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-methods-activity.md
Last updated 07/13/2021
active-directory Howto Authentication Passwordless Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-faqs.md
Last updated 02/22/2021
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Last updated 07/19/2022
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
Last updated 08/22/2022
active-directory Howto Authentication Passwordless Security Key Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-windows.md
Last updated 07/06/2022
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
Last updated 11/12/2021
active-directory Howto Authentication Passwordless Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-troubleshoot.md
Last updated 02/22/2021
active-directory Howto Authentication Sms Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-sms-signin.md
Last updated 08/08/2022
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Last updated 08/08/2022
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
Last updated 06/17/2022
active-directory Howto Mfa Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-adfs.md
Last updated 04/15/2022
active-directory Howto Mfa App Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-app-passwords.md
Last updated 06/20/2022
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
Last updated 08/16/2022
active-directory Howto Mfa Nps Extension Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-advanced.md
Last updated 06/17/2022
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
Last updated 05/12/2022
active-directory Howto Mfa Nps Extension Rdg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-rdg.md
Last updated 11/21/2019
active-directory Howto Mfa Nps Extension Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-vpn.md
Last updated 08/04/2021
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
Last updated 01/12/2022
active-directory Howto Mfa Reporting Datacollection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-reporting-datacollection.md
Last updated 01/07/2021
active-directory Howto Mfa Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-reporting.md
Last updated 06/20/2022
active-directory Howto Mfa Server Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-server-settings.md
Last updated 06/05/2020
active-directory Howto Mfa Userdevicesettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userdevicesettings.md
Last updated 08/17/2022
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md
Last updated 08/17/2022
active-directory Howto Mfaserver Adfs 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-adfs-2.md
Last updated 08/27/2021
active-directory Howto Mfaserver Adfs Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-adfs-windows-server.md
Last updated 08/25/2021
active-directory Howto Mfaserver Deploy Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-ha.md
Last updated 11/21/2019
active-directory Howto Mfaserver Deploy Mobileapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-mobileapp.md
Last updated 06/23/2022
active-directory Howto Mfaserver Deploy Upgrade Pf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-upgrade-pf.md
Last updated 07/11/2018
active-directory Howto Mfaserver Deploy Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-upgrade.md
Last updated 11/12/2018
active-directory Howto Mfaserver Deploy Userportal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-userportal.md
Last updated 07/11/2018
active-directory Howto Mfaserver Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy.md
Last updated 11/21/2019
active-directory Howto Mfaserver Dir Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-dir-ad.md
Last updated 11/21/2019
active-directory Howto Mfaserver Dir Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-dir-ldap.md
Last updated 07/11/2018
active-directory Howto Mfaserver Dir Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-dir-radius.md
Last updated 07/29/2021
active-directory Howto Mfaserver Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-iis.md
Last updated 07/11/2018
active-directory Howto Mfaserver Nps Rdg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-nps-rdg.md
Last updated 07/11/2018
active-directory Howto Mfaserver Nps Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-nps-vpn.md
Last updated 11/21/2019
active-directory Howto Mfaserver Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-windows.md
Last updated 07/11/2018
active-directory Howto Password Ban Bad On Premises Agent Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-agent-versions.md
Last updated 06/04/2021
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
Last updated 08/22/2022
active-directory Howto Password Ban Bad On Premises Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-monitor.md
Last updated 11/21/2019
active-directory Howto Password Ban Bad On Premises Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-operations.md
Last updated 03/05/2020
active-directory Howto Password Ban Bad On Premises Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-troubleshoot.md
Last updated 11/21/2019
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Last updated 07/20/2020
active-directory Howto Registration Mfa Sspr Combined Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined-troubleshoot.md
Last updated 01/19/2021
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
Last updated 03/01/2022
active-directory Howto Sspr Authenticationdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md
Last updated 07/12/2022
active-directory Howto Sspr Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-customization.md
Last updated 07/17/2020
active-directory Howto Sspr Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-reporting.md
Last updated 10/25/2021
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
Last updated 03/18/2022
active-directory Multi Factor Authentication Get Started Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/multi-factor-authentication-get-started-adfs.md
Last updated 08/27/2021
active-directory Overview Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/overview-authentication.md
Last updated 01/22/2021
active-directory Troubleshoot Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-certificate-based-authentication.md
Last updated 06/15/2022
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
Last updated 02/22/2022
active-directory Troubleshoot Sspr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr.md
Last updated 08/17/2022
active-directory Tutorial Risk Based Sspr Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-risk-based-sspr-mfa.md
Last updated 07/13/2020
# Customer intent: As an Azure AD Administrator, I want to learn how to use Azure Identity Protection to protect users by automatically detecting risky sign-in behavior and prompting for additional forms of authentication or requesting a password change.
active-directory Console Quickstart Portal Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/console-quickstart-portal-nodejs.md
+
+ Title: "Quickstart: Call Microsoft Graph from a Node.js console app"
+description: In this quickstart, you download and run a code sample that shows how a Node.js console application can get an access token and call an API protected by a Microsoft identity platform endpoint, using the app's own identity
+ Last updated : 08/22/2022
+#Customer intent: As an application developer, I want to learn how my Node.js app can get an access token and call an API that is protected by a Microsoft identity platform endpoint using client credentials flow.
++
+# Quickstart: Acquire a token and call Microsoft Graph API from a Node.js console app using app's identity
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Node.js console app that calls an API](console-app-quickstart.md?pivots=devlang-nodejs)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Acquire a token and call Microsoft Graph API from a Node.js console app using app's identity
+>
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a Node.js console application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
+>
+> This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) with the [client credentials grant](v2-oauth2-client-creds-grant-flow.md).
+>
+> ## Prerequisites
+>
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+>
+> ### Download and configure the sample app
+>
+> #### Step 1: Configure the application in Azure portal
+> For the code sample for this quickstart to work, you need to create a client secret, and add Graph API's **User.Read.All** application permission.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the Node.js sample project
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Admin consent
+>
+> If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires **admin consent**: a global administrator of your directory must give consent to your application. Select one of the options below, depending on your role:
+>
+> ##### Global tenant administrator
+>
+> If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> > > [!div id="apipermissionspage"]
+> > > [Go to the API Permissions page]()
+>
+> ##### Standard user
+>
+> If you're a standard user of your tenant, then you need to ask a global administrator to grant **admin consent** for your application. To do this, give the following URL to your administrator:
+>
+> ```url
+> https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+> ```
+>
+> #### Step 4: Run the application
+>
+> Locate the sample's root folder (where `package.json` resides) in a command prompt or console. You'll need to install the dependencies of this sample once:
+>
+> ```console
+> npm install
+> ```
+>
+> Then, run the application via command prompt or console:
+>
+> ```console
+> node . --op getUsers
+> ```
+>
+> The console output should show a JSON fragment representing a list of users in your Azure AD directory.
+>
+> ## About the code
+>
+> The following sections discuss some of the important aspects of the sample application.
+>
+> ### MSAL Node
+>
+> [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) is the library used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. As described, this quickstart requests tokens by using application permissions (the application's own identity) instead of delegated permissions. The authentication flow used in this case is known as the [OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL Node with daemon apps, see [Scenario: Daemon application](scenario-daemon-overview.md).
+>
+> You can install MSAL Node by running the following npm command.
+>
+> ```console
+> npm install @azure/msal-node --save
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```javascript
+> const msal = require('@azure/msal-node');
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```javascript
+> const msalConfig = {
+> auth: {
+> clientId: "Enter_the_Application_Id_Here",
+> authority: "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
+> clientSecret: "Enter_the_Client_Secret_Here",
+> }
+> };
+> const cca = new msal.ConfidentialClientApplication(msalConfig);
+> ```
+>
+> > | Where: |Description |
+> > |||
+> > | `clientId` | The **Application (client) ID** for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+> > | `authority` | The STS endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID. |
+> > | `clientSecret` | The client secret created for the application in the Azure portal. |
+>
+> For more information, see the [reference documentation for `ConfidentialClientApplication`](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md).
+>
+> ### Requesting tokens
+>
+> To request a token by using the app's identity, use the `acquireTokenByClientCredential` method:
+>
+> ```javascript
+> const tokenRequest = {
+> scopes: [ 'https://graph.microsoft.com/.default' ],
+> };
+>
+> const tokenResponse = await cca.acquireTokenByClientCredential(tokenRequest);
+> ```
+>
+> > |Where:| Description |
+> > |||
+> > | `tokenRequest` | Contains the scopes requested. For confidential clients, this should use a format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section of the app registration in the Azure portal. |
+> > | `tokenResponse` | The response contains an access token for the scopes requested. |
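+>
+> As an illustration (not part of the quickstart's sample code), the acquired token can then be sent to Microsoft Graph as a bearer token. This sketch assumes Node.js 18 or later, where `fetch` is built in:
+>
+> ```javascript
+> // Call Microsoft Graph with the app-only access token (illustrative sketch).
+> const graphResponse = await fetch('https://graph.microsoft.com/v1.0/users', {
+>     headers: { Authorization: `Bearer ${tokenResponse.accessToken}` },
+> });
+> const users = await graphResponse.json();
+> console.log(users.value.map((u) => u.displayName)); // print user display names
+> ```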
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> To learn more about daemon/console app development with MSAL Node, see the tutorial:
+>
+> > [!div class="nextstepaction"]
+> > [Daemon application that calls web APIs](tutorial-v2-nodejs-console.md)
active-directory Daemon Quickstart Portal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-java.md
+
+ Title: "Quickstart: Call Microsoft Graph from a Java daemon"
+description: In this quickstart, you learn how a Java app can get an access token and call an API protected by Microsoft identity platform endpoint, using the app's own identity
+ Last updated : 08/22/2022
+#Customer intent: As an application developer, I want to learn how my Java app can get an access token and call an API that's protected by Microsoft identity platform endpoint using client credentials flow.
++
+# Quickstart: Acquire a token and call Microsoft Graph API from a Java console app using app's identity
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Java daemon that calls a protected API](console-app-quickstart.md?pivots=devlang-java)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Acquire a token and call Microsoft Graph API from a Java console app using app's identity
+>
+> In this quickstart, you download and run a code sample that demonstrates how a Java application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
+>
+> ## Prerequisites
+>
+> To run this sample, you need:
+>
+> - [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or greater
+> - [Maven](https://maven.apache.org/)
+>
+> > [!div class="sxs-lookup"]
+> ### Download and configure the quickstart app
+>
+> #### Step 1: Configure the application in Azure portal
+> For the code sample for this quickstart to work, you need to create a client secret, and add Graph API's **User.Read.All** application permission.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the Java project
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Admin consent
+>
+> If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires admin consent: a global administrator of your directory must give consent to your application. Select one of the options below, depending on your role:
+>
+> ##### Global tenant administrator
+>
+> If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> > [!div id="apipermissionspage"]
+> > [Go to the API Permissions page]()
+>
+> ##### Standard user
+>
+> If you're a standard user of your tenant, then you need to ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
+>
+> ```url
+> https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+> ```
+> #### Step 4: Run the application
+>
+> You can test the sample directly by running the main method of `ClientCredentialGrant.java` from your IDE.
+>
+> From your shell or command line:
+>
+> ```console
+> $ mvn clean compile assembly:single
+> ```
+>
+> This command generates an *msal-client-credential-secret-1.0.0.jar* file in your */targets* directory. Run it by using your Java executable, as shown below:
+>
+> ```console
+> $ java -jar msal-client-credential-secret-1.0.0.jar
+> ```
+>
+> After running, the application should display the list of users in the configured tenant.
+>
+> > [!IMPORTANT]
+> > This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons, we recommend that you use a certificate instead of a client secret before considering the application a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-java-daemon/tree/master/msal-client-credential-certificate) in the same GitHub repository for this sample, but in the second folder, **msal-client-credential-certificate**.
+>
+> ## More information
+>
+> ### MSAL Java
+>
+> [MSAL Java](https://github.com/AzureAD/microsoft-authentication-library-for-java) is the library used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. As described, this quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow used in this case is known as the *[client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md)*. For more information on how to use MSAL Java with daemon apps, see [this article](scenario-daemon-overview.md).
+>
+> Add MSAL4J to your application by using Maven or Gradle to manage your dependencies. To do so, make the following changes to the application's *pom.xml* (Maven) or *build.gradle* (Gradle) file.
+>
+> In pom.xml:
+>
+> ```xml
+> <dependency>
+> <groupId>com.microsoft.azure</groupId>
+> <artifactId>msal4j</artifactId>
+> <version>1.0.0</version>
+> </dependency>
+> ```
+>
+> In build.gradle:
+>
+> ```gradle
+> compile group: 'com.microsoft.azure', name: 'msal4j', version: '1.0.0'
+> ```
+>
+> ### MSAL initialization
+>
+> Add a reference to MSAL for Java by adding the following code to the top of the file where you will be using MSAL4J:
+>
+> ```Java
+> import com.microsoft.aad.msal4j.*;
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```Java
+> IClientCredential credential = ClientCredentialFactory.createFromSecret(CLIENT_SECRET);
+>
+> ConfidentialClientApplication cca =
+> ConfidentialClientApplication
+> .builder(CLIENT_ID, credential)
+> .authority(AUTHORITY)
+> .build();
+> ```
+>
+> > | Where: |Description |
+> > |||
+> > | `CLIENT_SECRET` | The client secret created for the application in the Azure portal. |
+> > | `CLIENT_ID` | The **Application (client) ID** for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+> > | `AUTHORITY` | The STS endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID. |
+>
+> ### Requesting tokens
+>
+> To request a token by using the app's identity, use the `acquireToken` method:
+>
+> ```Java
+> IAuthenticationResult result;
+> try {
+> SilentParameters silentParameters =
+> SilentParameters
+> .builder(SCOPE)
+> .build();
+>
+> // try to acquire token silently. This call will fail since the token cache does not
+> // have a token for the application you are requesting an access token for
+> result = cca.acquireTokenSilently(silentParameters).join();
+> } catch (Exception ex) {
+> if (ex.getCause() instanceof MsalException) {
+>
+> ClientCredentialParameters parameters =
+> ClientCredentialParameters
+> .builder(SCOPE)
+> .build();
+>
+> // Try to acquire a token. If successful, you should see
+> // the token information printed out to console
+> result = cca.acquireToken(parameters).join();
+> } else {
+> // Handle other exceptions accordingly
+> throw ex;
+> }
+> }
+> return result;
+> ```
+>
+> > |Where:| Description |
+> > |||
+> > | `SCOPE` | Contains the scopes requested. For confidential clients, this should use a format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> To learn more about daemon applications, see the scenario landing page.
+>
+> > [!div class="nextstepaction"]
+> > [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory Daemon Quickstart Portal Netcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-netcore.md
+
+ Title: "Quickstart: Get token & call Microsoft Graph in a console app"
+description: In this quickstart, you learn how a .NET Core sample app can use the client credentials flow to get a token and call Microsoft Graph.
+ Last updated : 08/22/2022
+#Customer intent: As an application developer, I want to learn how my .NET Core app can get an access token and call an API that's protected by the Microsoft identity platform by using the client credentials flow.
++
+# Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: .NET Core console that calls an API](console-app-quickstart.md?pivots=devlang-dotnet-core)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity
+>
+> In this quickstart, you download and run a code sample that demonstrates how a .NET Core console application can get an access token to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample also demonstrates how a job or a Windows service can run with an application identity, instead of a user's identity. The sample console application in this quickstart is also a daemon application, so it's a confidential client application.
+>
+> ## Prerequisites
+>
+> This quickstart requires [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) but will also work with .NET 6.0 SDK.
+>
+> > [!div class="sxs-lookup"]
+> ### Download and configure your quickstart app
+>
+> #### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, create a client secret and add the Graph API's **User.Read.All** application permission.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download your Visual Studio project
+>
+> > [!div class="sxs-lookup"]
+> > Run the project by using Visual Studio 2019.
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Admin consent
+>
+> If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: "Insufficient privileges to complete the operation." This error happens because any app-only permission requires a global administrator of your directory to give consent to your application. Select one of the following options, depending on your role.
+>
+> ##### Global tenant administrator
+>
+> If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> > [!div id="apipermissionspage"]
+> > [Go to the API Permissions page]()
+>
+> ##### Standard user
+>
+> If you're a standard user of your tenant, ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
+>
+> ```url
+> https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+> ```
+>
+> You might see the error "AADSTS50011: No reply address is registered for the application" after you grant consent to the app by using the preceding URL. This error happens because this application and the URL don't have a redirect URI. You can ignore it.
+>
+> #### Step 4: Run the application
+>
+> If you're using Visual Studio or Visual Studio for Mac, press **F5** to run the application. Otherwise, run the application via command prompt, console, or terminal:
+>
+> ```dotnetcli
+> cd {ProjectFolder}\1-Call-MSGraph\daemon-console
+> dotnet run
+> ```
+> In that code:
+> * `{ProjectFolder}` is the folder where you extracted the .zip file. An example is `C:\Azure-Samples\active-directory-dotnetcore-daemon-v2`.
+>
+> You should see a list of users in Azure Active Directory as a result.
+>
+> This quickstart application uses a client secret to identify itself as a confidential client. The client secret is added as a plain-text file to your project files. For security reasons, we recommend that you use a certificate instead of a client secret before considering the application as a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/#variation-daemon-application-using-client-credentials-with-certificates) in the GitHub repository for this sample.
+>
+> ## More information
+> This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing .NET Core console application.
+>
+> > [!div class="sxs-lookup"]
+> ### How the sample works
+>
+> ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
+>
+> ### MSAL.NET
+>
+> Microsoft Authentication Library (MSAL, in the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) package) is the library that's used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials).
+>
+> You can install MSAL.NET by running the following .NET CLI command:
+>
+> ```dotnetcli
+> dotnet add package Microsoft.Identity.Client
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```csharp
+> using Microsoft.Identity.Client;
+> ```
+>
+> Then, initialize MSAL by using the following code:
+>
+> ```csharp
+> IConfidentialClientApplication app;
+> app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
+> .WithClientSecret(config.ClientSecret)
+> .WithAuthority(new Uri(config.Authority))
+> .Build();
+> ```
+>
+> | Element | Description |
+> |||
+> | `config.ClientSecret` | The client secret created for the application in the Azure portal. |
+> | `config.ClientId` | The application (client) ID for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+> | `config.Authority` | (Optional) The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID.|
+>
+> For more information, see the [reference documentation for `ConfidentialClientApplication`](/dotnet/api/microsoft.identity.client.iconfidentialclientapplication).
+>
+> ### Requesting tokens
+>
+> To request a token by using the app's identity, use the `AcquireTokenForClient` method:
+>
+> ```csharp
+> result = await app.AcquireTokenForClient(scopes)
+> .ExecuteAsync();
+> ```
+>
+> |Element| Description |
+> |||
+> | `scopes` | Contains the requested scopes. For confidential clients, this value should use a format similar to `{Application ID URI}/.default`. This format indicates that the requested scopes are the ones that are statically defined in the app object set in the Azure portal. For Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`. For custom web APIs, `{Application ID URI}` is defined in the Azure portal, under **App registrations** > **Expose an API**. |
+>
+> For more information, see the [reference documentation for `AcquireTokenForClient`](/dotnet/api/microsoft.identity.client.confidentialclientapplication.acquiretokenforclient).
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> To learn more about daemon applications, see the scenario overview:
+>
+> > [!div class="nextstepaction"]
+> > [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory Daemon Quickstart Portal Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-python.md
+
+ Title: "Quickstart: Call Microsoft Graph from a Python daemon"
+description: In this quickstart, you learn how a Python process can get an access token and call an API protected by Microsoft identity platform, using the app's own identity
+ Last updated : 08/22/2022
+#Customer intent: As an application developer, I want to learn how my Python app can get an access token and call an API that's protected by the Microsoft identity platform using client credentials flow.
++
+# Quickstart: Acquire a token and call Microsoft Graph API from a Python console app using app's identity
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Python console app that calls an API](console-app-quickstart.md?pivots=devlang-python)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+>
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a Python application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
+>
+> ## Prerequisites
+>
+> To run this sample, you need:
+>
+> - [Python 2.7+](https://www.python.org/downloads/release/python-2713) or [Python 3+](https://www.python.org/downloads/release/python-364/)
+> - [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
+>
+> > [!div class="sxs-lookup"]
+> ### Download and configure the quickstart app
+>
+> #### Step 1: Configure your application in Azure portal
+> For the code sample in this quickstart to work, create a client secret and add Graph API's **User.Read.All** application permission.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the Python project
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Admin consent
+>
+> If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires admin consent: a global administrator of your directory must give consent to your application. Select one of the options below, depending on your role:
+>
+> ##### Global tenant administrator
+>
+> If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> > [!div id="apipermissionspage"]
+> > [Go to the API Permissions page]()
+>
+> ##### Standard user
+>
+> If you're a standard user of your tenant, ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
+>
+> ```url
+> https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+> ```
+>
+>
+> #### Step 4: Run the application
+>
+> You'll need to install the dependencies of this sample once.
+>
+> ```console
+> pip install -r requirements.txt
+> ```
+>
+> Then, run the application via command prompt or console:
+>
+> ```console
+> python confidential_client_secret_sample.py parameters.json
+> ```
+>
+> The console output should show a JSON fragment representing a list of users in your Azure AD directory.
+>
+> > [!IMPORTANT]
+> > This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons, we recommend that you use a certificate instead of a client secret before considering the application a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-python-daemon/blob/master/2-Call-MsGraph-WithCertificate/README.md) in the same GitHub repository for this sample, but in the second folder, **2-Call-MsGraph-WithCertificate**.
+>
+> ## More information
+>
+> ### MSAL Python
+>
+> [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python) is the library used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. As described, this quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow used in this case is known as the *[client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md)*. For more information on how to use MSAL Python with daemon apps, see [this article](scenario-daemon-overview.md).
+>
+> You can install MSAL Python by running the following pip command.
+>
+> ```powershell
+> pip install msal
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```Python
+> import msal
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```Python
+> app = msal.ConfidentialClientApplication(
+> config["client_id"], authority=config["authority"],
+> client_credential=config["secret"])
+> ```
+>
+> > | Where: |Description |
+> > |||
+> > | `config["secret"]` | Is the client secret created for the application in Azure portal. |
+> > | `config["client_id"]` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
+> > | `config["authority"]` | The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant Id.|
+>
+> For more information, see the [reference documentation for `ConfidentialClientApplication`](https://msal-python.readthedocs.io/en/latest/#confidentialclientapplication).
+>
+> ### Requesting tokens
+>
+> To request a token by using the app's identity, use the `acquire_token_for_client` method:
+>
+> ```Python
+> result = None
+> result = app.acquire_token_silent(config["scope"], account=None)
+>
+> if not result:
+> logging.info("No suitable token exists in cache. Let's get a new one from AAD.")
+> result = app.acquire_token_for_client(scopes=config["scope"])
+> ```
+>
+> > |Where:| Description |
+> > |||
+> > | `config["scope"]` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
+>
+> For more information, please see the [reference documentation for `acquire_token_for_client`](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client).
+>
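+> The access token returned by MSAL can then be presented to the protected API as a bearer credential. The following is a minimal sketch rather than the sample's exact code; it assumes the `requests` package is installed and that `config["endpoint"]` points to the Microsoft Graph users endpoint used by this quickstart:
+>
+> ```Python
+> import requests
+>
+> if "access_token" in result:
+>     # Call the protected API, passing the token as a bearer credential.
+>     graph_data = requests.get(
+>         config["endpoint"],  # for example: https://graph.microsoft.com/v1.0/users
+>         headers={"Authorization": "Bearer " + result["access_token"]},
+>     ).json()
+>     print(graph_data)
+> else:
+>     # On failure, MSAL returns the error details in the result dictionary.
+>     print(result.get("error"), result.get("error_description"))
+> ```
+>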
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> To learn more about daemon applications, see the scenario landing page.
+>
+> > [!div class="nextstepaction"]
+> > [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory Desktop Quickstart Portal Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-nodejs-desktop.md
+
+ Title: "Quickstart: Call Microsoft Graph from a Node.js desktop app"
+description: In this quickstart, you learn how a Node.js Electron desktop application can sign in users and get an access token to call an API protected by a Microsoft identity platform endpoint.
+ Last updated : 08/18/2022
+#Customer intent: As an application developer, I want to learn how my Node.js Electron desktop application can get an access token and call an API that's protected by a Microsoft identity platform endpoint.
++
+# Quickstart: Acquire an access token and call the Microsoft Graph API from an Electron desktop app
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Node.js Electron desktop app with user sign-in](desktop-app-quickstart.md?pivots=devlang-nodejs-electron)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Acquire an access token and call the Microsoft Graph API from an Electron desktop app
+>
+> In this quickstart, you download and run a code sample that demonstrates how an Electron desktop application can sign in users and acquire access tokens to call the Microsoft Graph API.
+>
+> This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) with the [authorization code flow with PKCE](v2-oauth2-auth-code-flow.md).
+>
+> ## Prerequisites
+>
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+> #### Step 1: Configure the application in the Azure portal
+> For the code sample in this quickstart to work, add **msal://redirect** as a reply URL.
+> > [!div class="nextstepaction"]
+> > [Make this change for me]()
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the Electron sample project
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Your app is configured and ready to run
+>
+> We have configured your project with values of your app's properties and it's ready to run.
+>
+> #### Step 4: Run the application
+>
+> You'll need to install the dependencies of this sample once:
+>
+> ```console
+> npm install
+> ```
+>
+> Then, run the application via command prompt or console:
+>
+> ```console
+> npm start
+> ```
+>
+> You should see the application's UI with a **Sign in** button.
+>
+> ## About the code
+>
+> The following sections discuss some of the important aspects of the sample application.
+>
+> ### MSAL Node
+>
+> [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. For more information on how to use MSAL Node with desktop apps, see [this article](scenario-desktop-overview.md).
+>
+> You can install MSAL Node by running the following npm command.
+>
+> ```console
+> npm install @azure/msal-node --save
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL Node by adding the following code:
+>
+> ```javascript
+> const { PublicClientApplication } = require('@azure/msal-node');
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```javascript
+> const MSAL_CONFIG = {
+> auth: {
+> clientId: "Enter_the_Application_Id_Here",
+> authority: "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
+> },
+> };
+>
+> const pca = new PublicClientApplication(MSAL_CONFIG);
+> ```
+>
+> > | Where: | Description |
+> > |---------|---------|
+> > | `clientId` | The **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
+> > | `authority` | The STS endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID. |
+>
+> ### Requesting tokens
+>
+> In the first leg of the authorization code flow with PKCE, prepare and send an authorization code request with the appropriate parameters. Then, in the second leg of the flow, listen for the authorization code response. Once the code is obtained, exchange it for a token.
+>
+> ```javascript
+> // Modules assumed by this snippet: Electron's protocol API, Node's url and path modules,
+> // and MSAL Node's CryptoProvider for generating PKCE codes.
+> const { protocol } = require('electron');
+> const url = require('url');
+> const path = require('path');
+> const { CryptoProvider } = require('@azure/msal-node');
+>
+> // The redirect URI you set up during app registration with a custom file protocol "msal"
+> const redirectUri = "msal://redirect";
+>
+> const cryptoProvider = new CryptoProvider();
+>
+> const pkceCodes = {
+> challengeMethod: "S256", // Use SHA256 Algorithm
+> verifier: "", // Generate a code verifier for the Auth Code Request first
+> challenge: "" // Generate a code challenge from the previously generated code verifier
+> };
+>
+> /**
+> * Starts an interactive token request
+> * @param {object} authWindow: Electron window object
+> * @param {object} tokenRequest: token request object with scopes
+> */
+> async function getTokenInteractive(authWindow, tokenRequest) {
+>
+> /**
+> * Proof Key for Code Exchange (PKCE) Setup
+> *
+> * MSAL enables PKCE in the Authorization Code Grant Flow by including the codeChallenge and codeChallengeMethod
+> * parameters in the request passed into getAuthCodeUrl() API, as well as the codeVerifier parameter in the
+> * second leg (acquireTokenByCode() API).
+> */
+>
+> const {verifier, challenge} = await cryptoProvider.generatePkceCodes();
+>
+> pkceCodes.verifier = verifier;
+> pkceCodes.challenge = challenge;
+>
+> const authCodeUrlParams = {
+> redirectUri: redirectUri,
+> scopes: tokenRequest.scopes,
+> codeChallenge: pkceCodes.challenge, // PKCE Code Challenge
+> codeChallengeMethod: pkceCodes.challengeMethod // PKCE Code Challenge Method
+> };
+>
+> const authCodeUrl = await pca.getAuthCodeUrl(authCodeUrlParams);
+>
+> // register the custom file protocol in redirect URI
+> protocol.registerFileProtocol(redirectUri.split(":")[0], (req, callback) => {
+> const requestUrl = url.parse(req.url, true);
+> callback(path.normalize(`${__dirname}/${requestUrl.path}`));
+> });
+>
+> const authCode = await listenForAuthCode(authCodeUrl, authWindow); // see below
+>
+> const authResponse = await pca.acquireTokenByCode({
+> redirectUri: redirectUri,
+> scopes: tokenRequest.scopes,
+> code: authCode,
+> codeVerifier: pkceCodes.verifier // PKCE Code Verifier
+> });
+>
+> return authResponse;
+> }
+>
+> /**
+> * Listens for auth code response from Azure AD
+> * @param {string} navigateUrl: URL where auth code response is parsed
+> * @param {object} authWindow: Electron window object
+> */
+> async function listenForAuthCode(navigateUrl, authWindow) {
+>
+> authWindow.loadURL(navigateUrl);
+>
+> return new Promise((resolve, reject) => {
+> authWindow.webContents.on('will-redirect', (event, responseUrl) => {
+> try {
+> const parsedUrl = new URL(responseUrl);
+> const authCode = parsedUrl.searchParams.get('code');
+> resolve(authCode);
+> } catch (err) {
+> reject(err);
+> }
+> });
+> });
+> }
+> ```
+>
+> > |Where:| Description |
+> > |---------|---------|
+> > | `authWindow` | Current Electron window in process. |
+> > | `tokenRequest` | Contains the scopes being requested, such as `"User.Read"` for Microsoft Graph or `"api://<Application ID>/access_as_user"` for custom web APIs. |
+>
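+> For illustration, a hypothetical invocation of the `getTokenInteractive` helper from the Electron main process might look like the following sketch; the window dimensions and the `User.Read` scope are assumptions, not values from the sample:
+>
+> ```javascript
+> const { app, BrowserWindow } = require('electron');
+>
+> app.whenReady().then(async () => {
+>     // Open a window dedicated to the interactive sign-in experience.
+>     const authWindow = new BrowserWindow({ width: 400, height: 600 });
+>     const tokenRequest = { scopes: ["User.Read"] };
+>
+>     try {
+>         const authResponse = await getTokenInteractive(authWindow, tokenRequest);
+>         console.log(`Access token acquired for: ${authResponse.account.username}`);
+>     } catch (error) {
+>         console.error(error);
+>     }
+> });
+> ```
+>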
+> ## Next steps
+>
+> To learn more about Electron desktop app development with MSAL Node, see the tutorial:
+>
+> > [!div class="nextstepaction"]
+> > [Tutorial: Sign in users and call the Microsoft Graph API in an Electron desktop app](tutorial-v2-nodejs-desktop.md)
active-directory Desktop Quickstart Portal Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-uwp.md
+
+ Title: "Quickstart: Sign in users and call Microsoft Graph in a Universal Windows Platform app"
+description: In this quickstart, learn how a Universal Windows Platform (UWP) application can get an access token and call an API protected by Microsoft identity platform.
+ Last updated : 08/18/2022
+#Customer intent: As an application developer, I want to learn how my Universal Windows Platform (XAML) application can get an access token and call an API that's protected by the Microsoft identity platform.
++
+# Quickstart: Call the Microsoft Graph API from a Universal Windows Platform (UWP) application
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Universal Windows Platform (UWP) desktop app with user sign-in](desktop-app-quickstart.md?pivots=devlang-uwp)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Call the Microsoft Graph API from a Universal Windows Platform (UWP) application
+>
+> In this quickstart, you download and run a code sample that demonstrates how a Universal Windows Platform (UWP) application can sign in users and get an access token to call the Microsoft Graph API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> ## Prerequisites
+>
+> * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
+>
+> #### Step 1: Configure the application
+> For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient`.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-uwp/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the Visual Studio project
+>
+> Run the project using Visual Studio 2019.
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+>
+> #### Step 3: Your app is configured and ready to run
+> We have configured your project with values of your app's properties and it's ready to run.
+> #### Step 4: Run the application
+>
+> To run the sample application on your local machine:
+>
+> 1. In the Visual Studio toolbar, choose the right platform (probably **x64** or **x86**, not ARM). The target device should change from *Device* to *Local Machine*.
+> 1. Select **Debug** > **Start Without Debugging**.
+>
+> If you're prompted to do so, you might first need to enable **Developer Mode**, and then **Start Without Debugging** again to launch the app.
+>
+> When the app's window appears, you can select the **Call Microsoft Graph API** button, enter your credentials, and consent to the permissions requested by the application. If successful, the application displays some token information and data obtained from the call to the Microsoft Graph API.
+>
+> ## How the sample works
+>
+> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-uwp/uwp-intro.svg)
+>
+> ### MSAL.NET
+>
+> MSAL ([Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)) is the library used to sign in users and request security tokens. The security tokens are used to access an API protected by the Microsoft identity platform. You can install MSAL by running the following command in Visual Studio's *Package Manager Console*:
+>
+> ```powershell
+> Install-Package Microsoft.Identity.Client
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```csharp
+> using Microsoft.Identity.Client;
+> ```
+>
+> Then, MSAL is initialized using the following code:
+>
+> ```csharp
+> public static IPublicClientApplication PublicClientApp;
+> PublicClientApp = PublicClientApplicationBuilder.Create(ClientId)
+> .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
+> .Build();
+> ```
+>
+> The value of `ClientId` is the **Application (client) ID** of the app you registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal.
+>
+> ### Requesting tokens
+>
+> MSAL has two methods for acquiring tokens in a UWP app: `AcquireTokenInteractive` and `AcquireTokenSilent`.
+>
+> #### Get a user token interactively
+>
+> Some situations require forcing users to interact with the Microsoft identity platform through a pop-up window to either validate their credentials or to give consent. Some examples include:
+>
+> - The first time users sign in to the application
+> - When users may need to reenter their credentials because the password has expired
+> - When your application is requesting access to a resource that the user needs to consent to
+> - When two-factor authentication is required
+>
+> ```csharp
+> authResult = await App.PublicClientApp.AcquireTokenInteractive(scopes)
+> .ExecuteAsync();
+> ```
+>
+> The `scopes` parameter contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs.
+>
+> #### Get a user token silently
+>
+> Use the `AcquireTokenSilent` method to obtain tokens to access protected resources after the initial `AcquireTokenInteractive` method. You don't want to require the user to validate their credentials every time they need to access a resource. Most of the time you want token acquisitions and renewal without any user interaction.
+>
+> ```csharp
+> var accounts = await App.PublicClientApp.GetAccountsAsync();
+> var firstAccount = accounts.FirstOrDefault();
+> authResult = await App.PublicClientApp.AcquireTokenSilent(scopes, firstAccount)
+> .ExecuteAsync();
+> ```
+>
+> * `scopes` contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs.
+> * `firstAccount` specifies the first user account in the cache (MSAL supports multiple users in a single app).
+>
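+> The two methods are commonly combined: try `AcquireTokenSilent` first, and fall back to `AcquireTokenInteractive` only when MSAL signals that user interaction is required. The following sketch shows that common MSAL idiom (it isn't code taken from this sample) and assumes the sample's `App.PublicClientApp` instance and a `scopes` array are in scope:
+>
+> ```csharp
+> AuthenticationResult authResult;
+> var accounts = await App.PublicClientApp.GetAccountsAsync();
+> var firstAccount = accounts.FirstOrDefault();
+>
+> try
+> {
+>     // Try the token cache (and refresh token) first.
+>     authResult = await App.PublicClientApp.AcquireTokenSilent(scopes, firstAccount)
+>         .ExecuteAsync();
+> }
+> catch (MsalUiRequiredException)
+> {
+>     // Interaction is required: first sign-in, consent, MFA, or an expired password.
+>     authResult = await App.PublicClientApp.AcquireTokenInteractive(scopes)
+>         .ExecuteAsync();
+> }
+> ```
+>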
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> Try out the Windows desktop tutorial for a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart.
+>
+> > [!div class="nextstepaction"]
+> > [UWP - Call Graph API tutorial](tutorial-v2-windows-uwp.md)
active-directory Desktop Quickstart Portal Wpf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-wpf.md
+
+ Title: "Quickstart: Sign in users and call Microsoft Graph in a Windows desktop app"
+description: In this quickstart, learn how a Windows Presentation Foundation (WPF) application can get an access token and call an API protected by the Microsoft identity platform.
+ Last updated : 08/18/2022
+#Customer intent: As an application developer, I want to learn how my Windows Presentation Foundation (WPF) application can get an access token and call an API that's protected by the Microsoft identity platform.
++
+# Quickstart: Acquire a token and call the Microsoft Graph API from a Windows desktop app
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Windows Presentation Foundation (WPF) desktop app that signs in users and calls a web API](desktop-app-quickstart.md?pivots=devlang-windows-desktop)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Acquire a token and call the Microsoft Graph API from a Windows desktop app
+>
+> In this quickstart, you download and run a code sample that demonstrates how a Windows Presentation Foundation (WPF) application can sign in users and get an access token to call the Microsoft Graph API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+>
+> #### Step 1: Configure your application in Azure portal
+> For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient` and `ms-appx-web://microsoft.aad.brokerplugin/{client_id}`.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download your Visual Studio project
+>
+> Run the project using Visual Studio.
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+> #### Step 3: Your app is configured and ready to run
+> We have configured your project with values of your app's properties and it's ready to run.
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> ## More information
+>
+> ### How the sample works
+> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-windows-desktop/windesktop-intro.svg)
+>
+> ### MSAL.NET
+> MSAL ([Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. You can install MSAL by running the following command in Visual Studio's **Package Manager Console**:
+>
+> ```powershell
+> Install-Package Microsoft.Identity.Client -IncludePrerelease
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```csharp
+> using Microsoft.Identity.Client;
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```csharp
+> IPublicClientApplication publicClientApp = PublicClientApplicationBuilder.Create(ClientId)
+> .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
+> .WithAuthority(AzureCloudInstance.AzurePublic, Tenant)
+> .Build();
+> ```
+>
+> |Where: | Description |
+> |---------|---------|
+> | `ClientId` | The **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
+> | `Tenant` | The tenant ID or tenant (domain) name that the application authenticates against, passed to `WithAuthority`. |
+>
+> ### Requesting tokens
+>
+> MSAL has two methods for acquiring tokens: `AcquireTokenInteractive` and `AcquireTokenSilent`.
+>
+> #### Get a user token interactively
+>
+> Some situations require forcing users to interact with the Microsoft identity platform through a pop-up window to either validate their credentials or to give consent. Some examples include:
+>
+> - The first time users sign in to the application
+> - When users may need to reenter their credentials because the password has expired
+> - When your application is requesting access to a resource that the user needs to consent to
+> - When two-factor authentication is required
+>
+> ```csharp
+> authResult = await App.PublicClientApp.AcquireTokenInteractive(_scopes)
+> .ExecuteAsync();
+> ```
+>
+> |Where:| Description |
+> |---------|---------|
+> | `_scopes` | Contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs. |
+>
+> #### Get a user token silently
+>
+> You don't want to require the user to validate their credentials every time they need to access a resource. Most of the time you want token acquisitions and renewal without any user interaction. You can use the `AcquireTokenSilent` method to obtain tokens to access protected resources after the initial `AcquireTokenInteractive` method:
+>
+> ```csharp
+> var accounts = await App.PublicClientApp.GetAccountsAsync();
+> var firstAccount = accounts.FirstOrDefault();
+> authResult = await App.PublicClientApp.AcquireTokenSilent(scopes, firstAccount)
+> .ExecuteAsync();
+> ```
+>
+> |Where: | Description |
+> |---------|---------|
+> | `scopes` | Contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs. |
+> | `firstAccount` | Specifies the first user in the cache (MSAL supports multiple users in a single app). |
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> Try out the Windows desktop tutorial for a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart.
+>
+> > [!div class="nextstepaction"]
+> > [Call Graph API tutorial](./tutorial-v2-windows-desktop.md)
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-windows-desktop.md
Title: "Quickstart: Sign in users and call Microsoft Graph in a Windows desktop app"
-description: In this quickstart, learn how a Windows Presentation Foundation (WPF) application can get an access token and call an API protected by the Microsoft identity platform.
+ Title: "Quickstart: Sign in users and call Microsoft Graph in a Windows desktop application"
+description: In this quickstart, learn how a Windows Presentation Foundation (WPF) app can get an access token and call an API protected by the Microsoft identity platform.
#Customer intent: As an application developer, I want to learn how my Windows Presentation Foundation (WPF) application can get an access token and call an API that's protected by the Microsoft identity platform.
-# Quickstart: Acquire a token and call Microsoft Graph API from a Windows desktop app
+# Quickstart: Acquire a token and call Microsoft Graph API from a Windows desktop application
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
active-directory Scenario Protected Web Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
HttpResponseMessage response = await _httpClient.GetAsync(apiUri);
> [!IMPORTANT] > A client application requests the bearer token to the Microsoft identity platform *for the web API*. The web API is the only application that should verify the token and view the claims it contains. Client apps should never try to inspect the claims in tokens.
->
+>
> In the future, the web API might require that the token be encrypted. This requirement would prevent access for client apps that can view access tokens. ## JwtBearer configuration
Microsoft recommends you use the [Microsoft.Identity.Web](https://www.nuget.org/
_Microsoft.Identity.Web_ provides the glue between ASP.NET Core, the authentication middleware, and the [Microsoft Authentication Library (MSAL)](msal-overview.md) for .NET. It allows for a clearer, more robust developer experience and leverages the power of the Microsoft identity platform and Azure AD B2C.
-#### Using Microsoft.Identity.Web templates
+#### ASP.NET for .NET 6.0
-You can create a web API from scratch by using Microsoft.Identity.Web project templates. For details see [Microsoft.Identity.Web - Web API project template](https://aka.ms/ms-id-web/webapi-project-templates).
+To create a new web API project that uses Microsoft.Identity.Web, use a project template in the .NET 6.0 CLI or Visual Studio.
-#### Starting from an existing ASP.NET Core 3.1 application
+**.NET CLI**
+
+```dotnetcli
+# Create new web API that uses Microsoft.Identity.Web
+dotnet new webapi --auth SingleOrg
+```
+
+**Visual Studio** - To create a web API project in Visual Studio, select **File** > **New** > **Project** > **ASP.NET Core Web API**.
+
+Both the .NET CLI and Visual Studio project templates create a _Program.cs_ file that looks similar to this code snippet. Notice the `Microsoft.Identity.Web` using directive and the lines containing authentication and authorization.
+
+```csharp
+using Microsoft.AspNetCore.Authentication;
+using Microsoft.AspNetCore.Authentication.JwtBearer;
+using Microsoft.Identity.Web;
+
+var builder = WebApplication.CreateBuilder(args);
+
+// Add services to the container.
+builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"));
+
+builder.Services.AddControllers();
+// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
+builder.Services.AddEndpointsApiExplorer();
+builder.Services.AddSwaggerGen();
+
+var app = builder.Build();
+
+// Configure the HTTP request pipeline.
+if (app.Environment.IsDevelopment())
+{
+ app.UseSwagger();
+ app.UseSwaggerUI();
+}
+
+app.UseHttpsRedirection();
+
+app.UseAuthentication();
+app.UseAuthorization();
+
+app.MapControllers();
+
+app.Run();
+```
+
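+For illustration, a controller protected by the authentication and authorization configured above might look like the following sketch. The controller name and the `access_as_user` scope are assumptions; the scope must match one exposed by your app registration:
+
+```csharp
+using Microsoft.AspNetCore.Authorization;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Identity.Web.Resource;
+
+[Authorize]
+[ApiController]
+[Route("[controller]")]
+public class SampleController : ControllerBase
+{
+    [HttpGet]
+    // "access_as_user" is a placeholder; use the scope your app registration exposes.
+    [RequiredScope("access_as_user")]
+    public IActionResult Get()
+    {
+        // Only requests carrying a valid bearer token with the required scope get here.
+        return Ok(new[] { "value1", "value2" });
+    }
+}
+```
+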
+#### ASP.NET Core 3.1
++
+To create a new web API project by using the Microsoft.Identity.Web-enabled project templates in ASP.NET Core 3.1, see [Microsoft.Identity.Web - Web API project template](https://aka.ms/ms-id-web/webapi-project-templates).
+
+To add Microsoft.Identity.Web to an existing ASP.NET Core 3.1 web API project, add this using directive to your _Startup.cs_ file:
ASP.NET Core 3.1 uses the Microsoft.AspNetCore.Authentication.JwtBearer library. The middleware is initialized in the Startup.cs file.
using Microsoft.Identity.Web;
public void ConfigureServices(IServiceCollection services) { // Adds Microsoft Identity platform (AAD v2.0) support to protect this API
- services.AddMicrosoftIdentityWebApiAuthentication(Configuration, "AzureAd");
+ services.AddAuthentication(AzureADDefaults.JwtBearerAuthenticationScheme)
+ .AddMicrosoftIdentityWebApi(Configuration, "AzureAd");
services.AddControllers(); } ```
-you can also write the following (which is equivalent)
+Make sure you have `app.UseAuthentication()` and `app.UseAuthorization()` in the `Configure` method.
```csharp
-public void ConfigureServices(IServiceCollection services)
+public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
- // Adds Microsoft Identity platform (AAD v2.0) support to protect this API
- services.AddAuthentication(AzureADDefaults.JwtBearerAuthenticationScheme)
- .AddMicrosoftIdentityWebApi(Configuration, "AzureAd");
+ // More code here
+ app.UseAuthentication();
+ app.UseAuthorization();
-services.AddControllers();
-}
+ // More code here
``` > [!NOTE]
-> If you use Microsoft.Identity.Web and don't set the `Audience` in *appsettings.json*, the following is used:
-> - `$"{ClientId}"` if you have set the [access token accepted version](scenario-protected-web-api-app-registration.md#accepted-token-version) to `2`, or for Azure AD B2C web APIs.
-> - `$"api://{ClientId}` in all other cases (for v1.0 [access tokens](access-tokens.md)).
-> For details, see Microsoft.Identity.Web [source code](https://github.com/AzureAD/microsoft-identity-web/blob/d2ad0f5f830391a34175d48621a2c56011a45082/src/Microsoft.Identity.Web/Resource/RegisterValidAudience.cs#L70-L83).
-
-The preceding code snippet is extracted from the [ASP.NET Core web API incremental tutorial](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/blob/63087e83326e6a332d05fee6e1586b66d840b08f/1.%20Desktop%20app%20calls%20Web%20API/TodoListService/Startup.cs#L23-L28). The detail of **AddMicrosoftIdentityWebApiAuthentication** is available in [Microsoft.Identity.Web](microsoft-identity-web.md). This method calls [AddMicrosoftIdentityWebAPI](/dotnet/api/microsoft.identity.web.microsoftidentitywebapiauthenticationbuilderextensions.addmicrosoftidentitywebapi), which itself instructs the middleware on how to validate the token.
+> If you use Microsoft.Identity.Web and don't set the `Audience` in *appsettings.json*, `$"{ClientId}"` is automatically used if you have set the [access token accepted version](scenario-protected-web-api-app-registration.md#accepted-token-version) to `2`, or for Azure AD B2C web APIs.
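+
+For reference, the `"AzureAd"` configuration section that the preceding code binds to typically lives in *appsettings.json*. A minimal sketch, with placeholder values, might look like the following:
+
+```json
+{
+  "AzureAd": {
+    "Instance": "https://login.microsoftonline.com/",
+    "TenantId": "<your-tenant-id>",
+    "ClientId": "<your-client-id>"
+  }
+}
+```
+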
## Token validation
The validators are associated with properties of the **TokenValidationParameters
In most cases, you don't need to change the parameters. Apps that aren't single tenants are exceptions. These web apps accept users from any organization or from personal Microsoft accounts. Issuers in this case must be validated. Microsoft.Identity.Web takes care of the issuer validation as well. - In ASP.NET Core, if you want to customize the token validation parameters, use the following snippet in your *Startup.cs*: ```c#
You can also validate incoming access tokens in Azure Functions. You can find ex
Move on to the next article in this scenario, [Verify scopes and app roles in your code](scenario-protected-web-api-verification-scope-app-roles.md).+
active-directory Scenario Web Api Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md
Instead of a client secret, you can provide a client certificate. The following
Microsoft.Identity.Web provides several ways to describe certificates, both by configuration or by code. For details, see [Microsoft.Identity.Web wiki - Using certificates](https://github.com/AzureAD/microsoft-identity-web/wiki/Using-certificates) on GitHub.
-## Startup.cs
+## Program.cs
-Your web API will need to acquire a token for the downstream API. You specify it by adding the `.EnableTokenAcquisitionToCallDownstreamApi()` line after `.AddMicrosoftIdentityWebApi(Configuration)`. This line exposes the `ITokenAcquisition` service, that you can use in your controller/pages actions. However, as you'll see in the next two bullet points, you can do even simpler. You'll also need to choose a token cache implementation, for example `.AddInMemoryTokenCaches()`, in *Startup.cs*:
+Your web API will need to acquire a token for the downstream API. Specify it by adding the `.EnableTokenAcquisitionToCallDownstreamApi()` line after `.AddMicrosoftIdentityWebApi(Configuration)`. This line exposes the `ITokenAcquisition` service, which you can use in your controller and page actions. However, as you'll see in the following two options, you can use even simpler mechanisms. You'll also need to choose a token cache implementation, for example `.AddInMemoryTokenCaches()`, in *Program.cs*. If you use ASP.NET Core 3.1 or 5.0, the code is similar, but it goes in the *Startup.cs* file.
```csharp using Microsoft.Identity.Web;
-public class Startup
-{
- // ...
- public void ConfigureServices(IServiceCollection services)
- {
- // ...
- services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApi(Configuration, Configuration.GetSection("AzureAd"))
- .EnableTokenAcquisitionToCallDownstreamApi()
- .AddInMemoryTokenCaches();
- // ...
- }
- // ...
-}
+// ...
+builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+    .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"))
+    .EnableTokenAcquisitionToCallDownstreamApi()
+    .AddInMemoryTokenCaches();
+// ...
``` If you don't want to acquire the token yourself, *Microsoft.Identity.Web* provides two mechanisms for calling a downstream web API from another API. The option you choose depends on whether you want to call Microsoft Graph or another API.
If you don't want to acquire the token yourself, *Microsoft.Identity.Web* provid
If you want to call Microsoft Graph, Microsoft.Identity.Web enables you to directly use the `GraphServiceClient` (exposed by the Microsoft Graph SDK) in your API actions. To expose Microsoft Graph: 1. Add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) NuGet package to your project.
-1. Add `.AddMicrosoftGraph()` after `.EnableTokenAcquisitionToCallDownstreamApi()` in the *Startup.cs* file. `.AddMicrosoftGraph()` has several overrides. Using the override that takes a configuration section as a parameter, the code becomes:
+1. Add `.AddMicrosoftGraph()` after `.EnableTokenAcquisitionToCallDownstreamApi()` in the *Program.cs* file. `.AddMicrosoftGraph()` has several overrides. Using the override that takes a configuration section as a parameter, the code becomes:
```csharp using Microsoft.Identity.Web;
-public class Startup
-{
- // ...
- public void ConfigureServices(IServiceCollection services)
- {
- // ...
- services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApi(Configuration, Configuration.GetSection("AzureAd"))
- .EnableTokenAcquisitionToCallDownstreamApi()
- .AddMicrosoftGraph(Configuration.GetSection("GraphBeta"))
- .AddInMemoryTokenCaches();
- // ...
- }
- // ...
-}
+// ...
+builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+    .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"))
+    .EnableTokenAcquisitionToCallDownstreamApi()
+    .AddMicrosoftGraph(builder.Configuration.GetSection("GraphBeta"))
+    .AddInMemoryTokenCaches();
+// ...
``` ### Option 2: Call a downstream web API other than Microsoft Graph
To call a downstream API other than Microsoft Graph, *Microsoft.Identity.Web* pr
```csharp using Microsoft.Identity.Web;
-public class Startup
-{
- // ...
- public void ConfigureServices(IServiceCollection services)
- {
- // ...
- services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApi(Configuration, "AzureAd")
- .EnableTokenAcquisitionToCallDownstreamApi()
- .AddDownstreamWebApi("MyApi", Configuration.GetSection("GraphBeta"))
- .AddInMemoryTokenCaches();
- // ...
- }
- // ...
-}
+// ...
+builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+    .AddMicrosoftIdentityWebApi(builder.Configuration, "AzureAd")
+    .EnableTokenAcquisitionToCallDownstreamApi()
+    .AddDownstreamWebApi("MyApi", builder.Configuration.GetSection("GraphBeta"))
+    .AddInMemoryTokenCaches();
+// ...
``` As with web apps, you can choose various token cache implementations. For details, see [Microsoft identity web - Token cache serialization](https://aka.ms/ms-id-web/token-cache-serialization) on GitHub.
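+
+Once registered, the downstream web API can be called from a controller through the injected `IDownstreamWebApi` service. The following is a sketch under assumptions (the controller name, route, and relative path are hypothetical; `"MyApi"` matches the service name registered above):
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Authorization;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Identity.Web;
+
+[Authorize]
+[ApiController]
+[Route("api/[controller]")]
+public class MyApiCallerController : ControllerBase
+{
+    private readonly IDownstreamWebApi _downstreamWebApi;
+
+    public MyApiCallerController(IDownstreamWebApi downstreamWebApi)
+    {
+        _downstreamWebApi = downstreamWebApi;
+    }
+
+    [HttpGet]
+    public async Task<string> Get()
+    {
+        // "MyApi" is the service name passed to AddDownstreamWebApi above.
+        var response = await _downstreamWebApi.CallWebApiForUserAsync(
+            "MyApi",
+            options => options.RelativePath = "me");
+        return await response.Content.ReadAsStringAsync();
+    }
+}
+```
+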
-The following image shows the various possibilities of *Microsoft.Identity.Web* and their impact on the *Startup.cs* file:
+The following image shows the various possibilities of *Microsoft.Identity.Web* and their impact on the *Program.cs* file:
:::image type="content" source="media/scenarios/microsoft-identity-web-startup-cs.svg" alt-text="Block diagram showing service configuration options in startup dot C S for calling a web API and specifying a token cache implementation":::
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
Previously updated : 06/15/2022 Last updated : 08/24/2022
For an overview of the feature, view this "Azure Active Directory: What is Stage
## Prerequisites -- You have an Azure Active Directory (Azure AD) tenant with federated domains.
+- You have an Azure Active Directory (Azure AD) tenant with [federated domains](./whatis-fed.md).
- You have decided to move one of the following options: - **Password hash synchronization (sync)**. For more information, see [What is password hash sync](whatis-phs.md) - **Pass-through authentication**. For more information, see [What is pass-through authentication](how-to-connect-pta.md)
- - **Azure AD Certificate-based authentication (CBA) settings**. For more information, see [What is pass-through authentication](../authentication/concept-certificate-based-authentication.md)
+ - **Azure AD Certificate-based authentication (CBA) settings**. For more information, see [Overview of Azure AD certificate-based authentication](../authentication/concept-certificate-based-authentication.md)
For both options, we recommend enabling single sign-on (SSO) to achieve a silent sign-in experience. For Windows 7 or 8.1 domain-joined devices, we recommend using seamless SSO. For more information, see [What is seamless SSO](how-to-connect-sso.md).
- For Windows 10, Windows Server 2016 and later versions, itΓÇÖs recommended to use SSO via [Primary Refresh Token (PRT)](../devices/concept-primary-refresh-token.md) with [Azure AD joined devices](../devices/concept-azure-ad-join.md), [hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md) or personal registered devices via Add Work or School Account.
+ For Windows 10, Windows Server 2016 and later versions, it's recommended to use SSO via [Primary Refresh Token (PRT)](../devices/concept-primary-refresh-token.md) with [Azure AD joined devices](../devices/concept-azure-ad-join.md), [hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md) or [personal registered devices](../devices/concept-azure-ad-register.md) via Add Work or School Account.
- You have configured all the appropriate tenant-branding and conditional access policies you need for users who are being migrated to cloud authentication.
The following scenarios are supported for Staged Rollout. The feature works only
- Users who are provisioned to Azure AD by using Azure AD Connect. It does not apply to cloud-only users. -- User sign-in traffic on browsers and *modern authentication* clients. Applications or cloud services that use legacy authentication will fall back to federated authentication flows. An example might be Exchange online with modern authentication turned off, or Outlook 2010, which does not support modern authentication.
+- User sign-in traffic on browsers and *modern authentication* clients. Applications or cloud services that use [legacy authentication](../conditional-access/block-legacy-authentication.md) will fall back to federated authentication flows. An example of legacy authentication might be Exchange online with modern authentication turned off, or Outlook 2010, which does not support modern authentication.
- Group size is currently limited to 50,000 users. If you have groups that are larger than 50,000 users, it is recommended to split this group over multiple groups for Staged Rollout.
You can roll out these options:
- **Not supported** - **Password hash sync** + **Pass-through authentication** + **Seamless SSO** - **Certificate-based authentication settings**
-Do the following:
+To configure Staged Rollout, follow these steps:
-1. To access the UX, sign in to the [Azure AD portal](https://aka.ms/stagedrolloutux).
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a User Administrator for the organization.
-2. Select the **Enable Staged Rollout for managed user sign-in** link.
+1. Search for and select **Azure Active Directory**.
- For example, if you want to enable **Password Hash Sync** and **Seamless single sign-on**, slide both controls to **On**.
+1. From the left menu, select **Azure AD Connect**.
-
+1. On the *Azure AD Connect* page, under the *Staged rollout of cloud authentication*, select the **Enable staged rollout for managed user sign-in** link.
-
+1. On the *Enable staged rollout feature* page, select the options you want to enable: [Password Hash Sync](./whatis-phs.md), [Pass-through authentication](./how-to-connect-pta.md), [Seamless single sign-on](./how-to-connect-sso.md), or [Certificate-based Authentication (Preview)](../authentication/active-directory-certificate-based-authentication-get-started.md). For example, if you want to enable **Password Hash Sync** and **Seamless single sign-on**, slide both controls to **On**.
-3. Add the groups to the feature to enable *pass-through authentication* and *seamless SSO*. To avoid a UX time-out, ensure that the security groups contain no more than 200 members initially.
+1. Add groups to the features you selected. For example, *pass-through authentication* and *seamless SSO*. To avoid a time-out, ensure that the security groups contain no more than 200 members initially.
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md
description: Learn how to activate your privileged access group roles in Azure A
documentationcenter: '' -+ na Previously updated : 02/24/2022 Last updated : 08/24/2022 -+ # Activate my privileged access group roles in Privileged Identity Management
-Use Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra,to allow eligible role members for privileged access groups to schedule role activation for a specified date and time. They can also select a activation duration up to the maximum duration configured by administrators.
+Use Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, to allow eligible role members for privileged access groups to schedule role activation for a specified date and time. They can also select an activation duration up to the maximum duration configured by administrators.
This article is for eligible members who want to activate their privileged access group role in Privileged Identity Management.
active-directory Pim Resource Roles Activate Your Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md
description: Learn how to activate your Azure resource roles in Azure AD Privile
documentationcenter: '' -+ na Previously updated : 06/24/2022 Last updated : 08/24/2022 -+
When you need to take on an Azure resource role, you can request activation by u
1. If your role requires multi-factor authentication, select **Verify your identity before proceeding**. You only have to authenticate once per session.
- ![Verify my identity with MFA before role activation](./media/pim-resource-roles-activate-your-roles/resources-my-roles-mfa.png)
- 1. Select **Verify my identity** and follow the instructions to provide additional security verification. ![Screen to provide security verification such as a PIN code](./media/pim-resource-roles-activate-your-roles/resources-mfa-enter-code.png)
When you need to take on an Azure resource role, you can request activation by u
1. In the **Reason** box, enter the reason for the activation request.
- ![Completed Activate pane with scope, start time, duration, and reason](./media/pim-resource-roles-activate-your-roles/resources-my-roles-activate-done.png)
- 1. Select **Activate**.
- If the [role requires approval](pim-resource-roles-approval-workflow.md) to activate, a notification will appear in the upper right corner of your browser informing you the request is pending approval.
-
- ![Activation request is pending approval notification](./media/pim-resource-roles-activate-your-roles/resources-my-roles-activate-notification.png)
+ >[!NOTE]
+ >If the [role requires approval](pim-resource-roles-approval-workflow.md) to activate, a notification will appear in the upper right corner of your browser informing you the request is pending approval.
## Activate a role with ARM API
active-directory Embark Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/embark-tutorial.md
Previously updated : 02/11/2022 Last updated : 08/23/2022
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Embark supports **SP** initiated SSO.
+* Embark supports **SP and IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<ENVIRONMENT>.ehr.com/microsoftbenefits`
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<ENVIRONMENT>.ehr.com`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<ENVIRONMENT>.ehr.com`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<ENVIRONMENT>.ehr.com`
> [!NOTE]
- > The Sign on URL value is not real. Update the value with the actual Sign on URL. Contact [Embark support team](mailto:wtw.software.support.notification@willistowerswatson.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Embark support team](mailto:wtw.software.support.notification@willistowerswatson.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. Your Embark application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Embark expects this to be mapped with the user's employee id. For that you can use **user.employeeid** attribute from the list or use the appropriate attribute value based on your organization configuration..
+1. Your Embark application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Embark expects this to be mapped with the user's employee id. For that you can use **user.employeeid** attribute from the list or use the appropriate attribute value based on your organization configuration.
![image](common/default-attributes.png)
+1. In addition to the above, the Embark platform application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+ | -- | -- |
+ | EmployeeID | user.employeeid |
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
In this section, you create a user called Britta Simon in Embark. Work with [Em
In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Embark Sign-on URL where you can initiate the login flow.
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Embark platform Sign-on URL where you can initiate the login flow.
+
+* Go to Embark platform Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Embark platform for which you set up the SSO.
-* Go to Embark Sign-on URL directly and initiate the login flow from there.
+You can also use Microsoft My Apps to test the application in any mode. When you click the Embark platform tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Embark platform for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-* You can use Microsoft My Apps. When you click the Embark tile in the My Apps, this will redirect to Embark Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
aks Azure Csi Blob Storage Dynamic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-dynamic.md
In this example, the following manifest configures mounting a Blob storage conta
protocol: nfs tags: environment=Development volumeBindingMode: Immediate
- mountOptions:
- - nconnect=8 # only supported on linux kernel version >= 5.3
``` 2. Create the storage class with the [kubectl apply][kubectl-apply] command:
In this example, the following manifest configures using blobfuse and mount a Bl
[azure-csi-blob-storage-static]: azure-csi-blob-storage-static.md [blob-storage-csi-driver]: azure-blob-csi.md [azure-blob-storage-nfs-support]: ../storage/blobs/network-file-system-protocol-support.md
-[enable-blob-csi-driver]: azure-blob-csi.md#before-you-begin
+[enable-blob-csi-driver]: azure-blob-csi.md#before-you-begin
aks Azure Csi Blob Storage Static https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-static.md
The following example demonstrates how to mount a Blob storage container as a pe
- ReadWriteMany persistentVolumeReclaimPolicy: Retain # If set as "Delete" container would be removed after pvc deletion storageClassName: azureblob-nfs-premium
- mountOptions:
- - nconnect=8 # only supported on linux kernel version >= 5.3
csi: driver: blob.csi.azure.com readOnly: false
The following YAML creates a pod that uses the persistent volume or persistent v
[az-aks-show]: /cli/azure/aks#az-aks-show [manage-blob-storage]: ../storage/blobs/blob-containers-cli.md [azure-blob-storage-nfs-support]: ../storage/blobs/network-file-system-protocol-support.md
-[enable-blob-csi-driver]: azure-blob-csi.md#before-you-begin
+[enable-blob-csi-driver]: azure-blob-csi.md#before-you-begin
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
In addition to in-tree driver features, Azure Disks CSI driver supports the foll
|Name | Meaning | Available Value | Mandatory | Default value | | | | | |skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`|
-|kind | Managed or unmanaged (blob based) disk | `managed` (`dedicated` and `shared` are deprecated) | No | `managed`|
|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows| |cachingMode | [Azure Data Disk Host Cache Setting](../virtual-machines/windows/premium-storage-performance.md#disk-caching) | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`| |location | Specify Azure region where Azure Disks will be created | `eastus`, `westus`, etc. | No | If empty, driver will use the same location name as current AKS cluster|
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
In the following example:
If the `microsoft.flux` extension isn't already installed in the cluster, it'll be installed. When the flux configuration is installed, the initial compliance state may be "Pending" or "Non-compliant" because reconciliation is still on-going. After a minute you can query the configuration again and see the final compliance state. ```console
-az k8s-configuration flux create -g flux-demo-rg -c flux-demo-arc -n cluster-config --namespace cluster-config -t connectedClusters --scope cluster -u https://github.com/Azure/gitops-flux2-kustomize-helm-mt --branch main --kustomization name=infra path=./infrastructure prune=true --kustomization name=apps path=./apps/staging prune=true dependsOn=["infra"]
+az k8s-configuration flux create -g flux-demo-rg \
+-c flux-demo-arc \
+-n cluster-config \
+--namespace cluster-config \
+-t connectedClusters \
+--scope cluster \
+-u https://github.com/Azure/gitops-flux2-kustomize-helm-mt \
+--branch main \
+--kustomization name=infra path=./infrastructure prune=true \
+--kustomization name=apps path=./apps/staging prune=true dependsOn=\["infra"\]
'Microsoft.Flux' extension not found on the cluster, installing it now. This may take a few minutes... 'Microsoft.Flux' extension was successfully installed on the cluster
azure-arc Use Azure Policy Flux 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy-flux-2.md
Verify you have `Microsoft.Authorization/policyAssignments/write` permissions on
1. Give the policy assignment an easily identifiable **Name** and **Description**. 1. Ensure **Policy enforcement** is set to **Enabled**. 1. Select **Next**.
-1. Set the parameter values to be used while creating the `fluxConfigurations` resource.
+1. Set the parameter values to be used.
* For more information about parameters, see the [tutorial on deploying Flux v2 configurations](./tutorial-use-gitops-flux2.md).
+ * When creating Flux configurations you must provide a value for one (and only one) of these parameters: `repositoryRefBranch`, `repositoryRefTag`, `repositoryRefSemver`, `repositoryRefCommit`.
1. Select **Next**. 1. Enable **Create a remediation task**. 1. Verify **Create a managed identity** is checked, and that the identity will have **Contributor** permissions.
For existing clusters, you may need to manually run a remediation task. This tas
* You should see the namespace and artifacts that were created by the Flux configuration. * You should see the objects described by the manifests in the Git repo deployed on the cluster.
+## Customizing a policy
+
+The built-in policies cover the main scenarios for using GitOps with Flux v2 in your Kubernetes clusters. However, due to limitations on the number of parameters allowed in Azure Policy assignments (max of 20), not all parameters are present in the built-in policies. Also, to fit within the 20-parameter limit, only a single Kustomization can be created with the built-in policies.
+
+If you have a scenario that differs from the built-in policies, you can overcome the limitations by creating [custom policies](../../governance/policy/tutorials/create-custom-policy-definition.md) using the built-in policies as templates. You can create custom policies that contain only the parameters you need, and hard-code the rest, therefore working around the 20-parameter limit.
+ ## Next steps [Set up Azure Monitor for Containers with Azure Arc-enabled Kubernetes clusters](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md).
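+For example, a trimmed-down custom definition can be registered with the Azure CLI. In this sketch, the rule and parameter files are hypothetical local copies, exported from a built-in policy and edited down to the parameters your scenario needs:
+
+```console
+az policy definition create \
+  --name 'flux-config-custom' \
+  --display-name 'Custom Flux v2 configuration policy' \
+  --description 'Built-in Flux policy trimmed to the required parameters' \
+  --rules ./policy-rule.json \
+  --params ./policy-parameters.json \
+  --mode Indexed
+```
+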
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
This is usually caused when trying to run commands from remote PowerShell, which
To install Azure Arc resource bridge on an Azure Stack HCI cluster, `az arcappliance` commands must be run locally on a node in the cluster. Sign in to the node through Remote Desktop Protocol (RDP) or use a console session to run these commands.
-## Azure Arc-enabled VMWare VCenter issues
+## Azure Arc-enabled VMware VCenter issues
### `az arcappliance prepare` failure
When deploying the resource bridge on VMware vCenter, you specify the folder in
### Insufficient permissions
-When deploying the resource bridge on VMWare Vcenter, you may get an error saying that you have insufficient permission. To resolve this issue, make sure that your user account has all of the following privileges in VMware vCenter and then try again.
+When deploying the resource bridge on VMware vCenter, you may get an error saying that you have insufficient permissions. To resolve this issue, make sure that your user account has all of the following privileges in VMware vCenter, and then try again.
``` "Datastore.AllocateSpace"
When deploying the resource bridge on VMWare Vcenter, you may get an error sayin
## Next steps
+[Understand recovery operations for resource bridge in Azure Arc-enabled VMware vSphere disaster scenarios](../vmware-vsphere/disaster-recovery.md)
+ If you don't see your problem here or you can't resolve your issue, try one of the following channels for support: * Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html). * Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+* [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-arc Day2 Operations Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/day2-operations-resource-bridge.md
Title: Perform ongoing administration for Arc-enabled VMware vSphere description: Learn how to perform day 2 administrator operations related to Azure Arc-enabled VMware vSphere Previously updated : 03/28/2022 Last updated : 08/25/2022
Last updated 03/28/2022
In this article, you'll learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere (preview): -- Upgrading the Arc resource bridge
+- Upgrading the Azure Arc resource bridge (preview)
- Updating the credentials - Collecting logs from the Arc resource bridge
Each of these operations requires either SSH key to the resource bridge VM or th
Azure Arc-enabled VMware vSphere requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge will be released to include security and feature updates. > [!NOTE]
-> To upgrade the arc resource bridge VM to the latest version, you will need to perform the onboarding again with the **same resource IDs**. This will cause some downtime as operations performed through Arc during this time might fail.
+> To upgrade the Arc resource bridge VM to the latest version, you will need to perform the onboarding again with the **same resource IDs**. This will cause some downtime as operations performed through Arc during this time might fail.
To upgrade to the latest version of the resource bridge, perform the following steps:
Azure Arc-enabled VMware vSphere uses the vSphere account credentials you provid
As part of your security practices, you might need to rotate credentials for your vCenter accounts. As credentials are rotated, you must also update the credentials provided to Azure Arc to ensure the functioning of Azure Arc-enabled VMware services.
-There are two different sets of credentials stored on the Arc resource bridge. But you can use the same account credentials for both.
+There are two different sets of credentials stored on the Arc resource bridge. You can use the same account credentials for both.
- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrade. - **Account for VMware cluster extension**. This account is used to discover inventory and perform all VM operations through Azure Arc-enabled VMware vSphere
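As a hedged sketch, updating each set of credentials might look like the following; this assumes the `az arcappliance update-infracredentials` command from the `arcappliance` CLI extension, and the kubeconfig path and placeholder values are illustrative.

```azurecli
# Sketch: update the credentials stored for the Arc resource bridge account.
az arcappliance update-infracredentials vmware --kubeconfig ./kubeconfig

# Sketch: update the account used by the VMware cluster extension by
# reconnecting the vCenter with the rotated credentials.
az connectedvmware vcenter connect --custom-location <custom-location-name> --location <Azure-region> --name <vcenter-name-in-azure> --resource-group <resource-group> --username <new-username> --password <new-password>
```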
az arcappliance logs <provider> --out-dir <path to specified output directory> -
During initial onboarding, SSH keys are saved to the workstation. If you're running this command from the workstation that was used for onboarding, no other steps are required.
-If you're running this command from a different workstation, you must make sure the following files are copied to the new workstation in the same location.
+If you're running this command from a different workstation, make sure the following files are copied to the new workstation in the same location.
- For a Windows workstation, `C:\ProgramData\kva\.ssh\logkey` and `C:\ProgramData\kva\.ssh\logkey.pub`
If you're running this command from a different workstation, you must make sure
## Next steps
-[Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)
+- [Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)
+- [Understand disaster recovery operations for resource bridge](disaster-recovery.md)
azure-arc Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/disaster-recovery.md
+
+ Title: Perform disaster recovery operations
+description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled VMware vSphere disaster scenarios.
+ Last updated : 08/16/2022+++
+# Perform disaster recovery operations
+
+In this article, you'll learn how to perform recovery operations for the Azure Arc resource bridge (preview) VM in Azure Arc-enabled VMware vSphere disaster scenarios.
+
+## Disaster scenarios & recovery goals
+
+In disaster scenarios for the Azure Arc resource bridge virtual machine (VM), including accidental deletion and hardware failure, the resource bridge Azure resource will have a status of `offline`. This means that the connection between on-premises infrastructure and Azure is lost, and previously managed Arc-enabled resources are disconnected from their on-premises counterparts.
+
+By performing the recovery operations described in this article, you can recreate a healthy Arc resource bridge and automatically reenable disconnected Arc-enabled resources.
+
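Before you begin recovery, you can confirm the `offline` status with the same CLI command this article uses later to check the bridge:

```azurecli
# Check the resource bridge status; "offline" indicates the connection to Azure is lost.
az arcappliance show --resource-group <resource-group-name> --name <Arc-resource-bridge-name> --query "status" --output tsv
```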
+## Recovering the Arc resource bridge
+
+> [!NOTE]
+> When prompted for names for the Arc resource bridge, custom locations, and vCenter Azure resources, you'll need to provide the **same resource IDs** as the original resources in Azure.
+
+To recover the Arc resource bridge VM, you'll need to:
+
+- Delete the existing Arc resource bridge.
+- Create a new Arc resource bridge.
+- Recreate necessary custom extensions and custom locations.
+- Reconnect the new Arc resource bridge to existing resources in Azure.
+
+Follow the steps in [Perform manual recovery for Arc resource bridge](#perform-manual-recovery-for-arc-resource-bridge) if any of the following apply:
+
+- The Arc resource bridge VM template is still present in vSphere.
+- The old Arc resource bridge contained multiple cluster extensions.
+- The old Arc resource bridge contained multiple custom locations.
+
+If none of the above apply, you can use the automated recovery process described in [Use a script to recover Arc resource bridge](#use-a-script-to-recover-arc-resource-bridge).
+
+## Perform manual recovery for Arc resource bridge
+
+1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and vCenter Azure resources.
+
+1. If the original configuration files for setting up Arc-enabled VMware vSphere are still present, move to the next step.
+
+ Otherwise, recreate the configuration files and validate them. vSphere-related configurations can be changed from the original settings, but any Azure-related configurations (resource groups, Azure IDs, location) must be the same as in the original setup.
+
+ ```azurecli
+ az arcappliance createconfig vmware --resource-group <resource group of original Arc resource bridge> --name <name of original Arc resource bridge> --location <Azure region of original Arc resource bridge>
+ ```
+
+ ```azurecli
+ az arcappliance validate vmware --config-file <path to configuration "name-appliance.yaml" file>
+ ```
+
+1. If the original Arc resource bridge VM template for setting up Arc-enabled VMware vSphere is still present in vSphere, move to the next step.
+
+ Otherwise, prepare a new VM template:
+
+ ```azurecli
+ az arcappliance prepare vmware --config-file <path to configuration "name-appliance.yaml" file>
+ ```
+
+1. Delete the existing Arc resource bridge. This command will delete both the on-premises VM in vSphere and the associated Azure resource.
+
+ ```azurecli
+ az arcappliance delete vmware --config-file <path to configuration "name-appliance.yaml" file>
+ ```
+
+1. Deploy a new Arc resource bridge VM.
+
+ ```azurecli
+ az arcappliance deploy vmware --config-file <path to configuration "name-appliance.yaml" file>
+ ```
+
+1. Create a new Arc resource bridge Azure resource and establish the connection between vCenter and Azure.
+
+ ```azurecli
+ az arcappliance create vmware --config-file <path to configuration "name-appliance.yaml" file> --kubeconfig <path to kubeconfig file>
+ ```
+
+1. Wait for the new Arc resource bridge to have a status of `Running`. This process can take up to 5 minutes. Check the status in the Azure portal or use the following command:
+
+ ```azurecli
+ az arcappliance show --resource-group <resource-group-name> --name <Arc-resource-bridge-name>
+ ```
+
+1. Recreate necessary custom extensions. For Arc-enabled VMware vSphere:
+
+ ```azurecli
+    az k8s-extension create --resource-group <resource-group-name> --name azure-vmwareoperator --cluster-name <cluster-name> --cluster-type appliances --scope cluster --extension-type Microsoft.VMWare --release-train stable --release-namespace azure-vmwareoperator --auto-upgrade true --config Microsoft.CustomLocation.ServiceAccount=azure-vmwareoperator
+ ```
+
+1. Recreate original custom locations. The name must be the same as the resource ID of the existing custom location in Azure. This method will allow the newly created custom location to automatically connect to the existing Azure resource.
+
+ ```azurecli
+ az customlocation create --name <name of existing custom location resource in Azure> --namespace azure-vmwareoperator --resource-group <resource group of the existing custom location> --host-resource-id <extension-name>
+ ```
+
+1. Reconnect to the existing vCenter Azure resource. The name must be the same as the resource ID of the existing vCenter resource in Azure.
+
+ ```azurecli
+ az connectedvmware vcenter connect --custom-location <custom-location-name> --location <Azure-region> --name <name of existing vCenter resource in Azure> --resource-group <resource group of the existing vCenter resource> --username <username to the vSphere account> --password <password to the vSphere account>
+ ```
+
+1. Once the above commands complete successfully, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again.
+
+## Use a script to recover Arc resource bridge
+
+> [!NOTE]
+> The script used in this automated recovery process will also upgrade the resource bridge to the latest version.
+
+To recover the Arc resource bridge, perform the following steps:
+
+1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and vCenter Azure resources.
+
+1. Find and delete the old Arc resource bridge **template** from your vCenter.
+
+1. Download the [onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) from the Azure portal and update the following section in the script, using the **same information** as the original resources in Azure.
+
+ ```powershell
+ $location = <Azure region of the resources>
+
+ $applianceSubscriptionId = <subscription-id>
+ $applianceResourceGroupName = <resource-group-name>
+ $applianceName = <resource-bridge-name>
+
+ $customLocationSubscriptionId = <subscription-id>
+ $customLocationResourceGroupName = <resource-group-name>
+ $customLocationName = <custom-location-name>
+
+ $vCenterSubscriptionId = <subscription-id>
+ $vCenterResourceGroupName = <resource-group-name>
+ $vCenterName = <vcenter-name-in-azure>
+ ```
+
+1. [Run the onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `--force` parameter.
+
+    ```powershell-interactive
+ ./resource-bridge-onboarding-script.ps1 --force
+ ```
+
+1. [Provide the inputs](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted.
+
+1. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again.
+
+## Next steps
+
+[Troubleshoot Azure Arc resource bridge (preview) issues](../resource-bridge/troubleshoot-resource-bridge.md)
+
+If the recovery steps above are unsuccessful in restoring Arc resource bridge to its original state, try one of the following channels for support:
+
+- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
+- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
+- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-government Documentation Government Get Started Connect With Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-get-started-connect-with-portal.md
Title: Connect to Azure Government using portal
-description: This quickstart shows how to connect to Azure Government and create a web app in Azure Government using portal
-
-cloud: gov
-
+description: This quickstart shows how to connect to Azure Government and create a web app using portal
- Previously updated : 03/09/2021+
-#Customer intent: As a developer working for a federal government agency "x", I want to connect to Azure Government using portal so I can start creating apps and developing against Azure Government's secure isolated datacenters.
+recommendations: false
Last updated : 08/25/2022 # Quickstart: Connect to Azure Government using portal
-Microsoft Azure Government delivers a dedicated cloud with world-class security and compliance, enabling US government agencies and their partners to transform their workloads to the cloud. To manage your Azure Government cloud workloads and applications you can connect to Azure Government using different tools, as described in the following video.
+Microsoft Azure Government delivers a dedicated cloud with world-class security and compliance, enabling US government agencies and their partners to transform their workloads to the cloud. To manage your Azure Government cloud workloads and applications, you can connect to Azure Government using different tools, as described in the following video.
+
+<br/>
> [!VIDEO https://www.youtube.com/embed/Q3kx4cmRkCA]
-This quickstart shows how to use the Azure Government portal to access and start managing resources in Azure Government. The Azure Government portal is the primary way most people will connect to their Azure Government environment. If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
+This quickstart shows how to use the Azure Government portal to access and start managing resources in Azure Government. The Azure Government portal is the primary way to connect to your Azure Government environment. If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
## Prerequisites -- Review [Guidance for developers](./documentation-government-developer-guide.md).<br/> This article discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
+- Review [Guidance for developers](./documentation-government-developer-guide.md) to learn about Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
- Review [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md) and click on a service of interest to see variations between Azure Government and global Azure. ## Sign in to Azure Government To connect, browse to the portal at [https://portal.azure.us](https://portal.azure.us).
-Sign in using your Azure Government credentials. Once you sign it, you should see "Microsoft Azure Government" in the upper left of the main navigation bar.
+Sign in using your Azure Government credentials. Once you sign in, you should see **Microsoft Azure Government** in the upper left section of the main navigation bar.
-![Azure Government Portal](./media/connect-with-portal/azure-gov-portal.png)
## Check out Service health You can take a look at Azure Government regions and their health status by clicking on **Service Health**. Choose one of the available US government-only datacenter regions.
-![Screenshot shows the Service Health page for Azure Government with the Region drop-down menu open.](./media/connect-with-portal/connect-with-portal.png)
## Next steps
-This quickstart showed you how to use the Azure Government portal to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations, described in [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md). To learn more about Azure services, continue to the Azure documentation.
+This quickstart showed you how to use the Azure Government portal to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations and usage limitations, described in [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md). To learn more about Azure services, continue to the Azure documentation.
> [!div class="nextstepaction"] > [Azure documentation](../index.yml)+
+For more information about Azure Government, see the following resources:
+
+- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)
+- [How to buy Azure Government](https://azure.microsoft.com/global-infrastructure/government/how-to-buy/)
+- [Azure Government Blog](https://devblogs.microsoft.com/azuregov/)
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
+- [Azure Government DoD overview](./documentation-government-overview-dod.md)
+- [FedRAMP – Azure compliance](/azure/compliance/offerings/offering-fedramp)
+- [DoD Impact Level 5 – Azure compliance](/azure/compliance/offerings/offering-dod-il5)
+- [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
azure-government Documentation Government Stig Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-linux-vm.md
Previously updated : 06/14/2021
+recommendations: false
Last updated : 08/25/2022 # Deploy STIG-compliant Linux Virtual Machines (Preview)
To learn more about Azure services, continue to the Azure documentation.
> [!div class="nextstepaction"] > [Azure documentation](../index.yml)+
+For more information about Azure Government, see the following resources:
+
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
+- [Azure Government DoD overview](./documentation-government-overview-dod.md)
+- [FedRAMP – Azure compliance](/azure/compliance/offerings/offering-fedramp)
+- [DoD Impact Level 5 – Azure compliance](/azure/compliance/offerings/offering-dod-il5)
+- [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
+- [Secure Azure Computing Architecture](./compliance/secure-azure-computing-architecture.md)
azure-government Documentation Government Stig Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-windows-vm.md
Previously updated : 06/14/2021
+recommendations: false
Last updated : 08/25/2022 # Deploy STIG-compliant Windows Virtual Machines (Preview)
To learn more about Azure services, continue to the Azure documentation.
> [!div class="nextstepaction"] > [Azure documentation](../index.yml)+
+For more information about Azure Government, see the following resources:
+
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
+- [Azure Government DoD overview](./documentation-government-overview-dod.md)
+- [FedRAMP – Azure compliance](/azure/compliance/offerings/offering-fedramp)
+- [DoD Impact Level 5 – Azure compliance](/azure/compliance/offerings/offering-dod-il5)
+- [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
+- [Secure Azure Computing Architecture](./compliance/secure-azure-computing-architecture.md)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Azure Monitor Agent is available in all public regions and Azure Government clou
There's no cost for the Azure Monitor Agent, but you might incur charges for the data ingested. For information on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-## Networking
-
-The Azure Monitor Agent supports Azure service tags. Both *AzureMonitor* and *AzureResourceManager* tags are required. It supports connecting via *direct proxies, Log Analytics gateway, and private links* as described in the following sections.
-
-### Firewall requirements
-
-| Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection|
-|||||--|--|
-| Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes |
-| Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
-| Azure Commercial | management.azure.com | Only needed if sending time series data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | Port 443 | Outbound | Yes |
-| Azure Government | Replace '.com' above with '.us' | Same as above | Same as above | Same as above| Same as above |
-| Azure China | Replace '.com' above with '.cn' | Same as above | Same as above | Same as above| Same as above |
-
-If you use private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
-
-### Proxy configuration
-
-If the machine connects through a proxy server to communicate over the internet, review the following requirements to understand the network configuration required.
-
-The Azure Monitor Agent extensions for Windows and Linux can communicate either through a proxy server or a [Log Analytics gateway](./gateway.md) to Azure Monitor by using the HTTPS protocol. Use it for Azure virtual machines, Azure virtual machine scale sets, and Azure Arc for servers. Use the extensions settings for configuration as described in the following steps. Both anonymous and basic authentication by using a username and password are supported.
-
-> [!IMPORTANT]
-> Proxy configuration is not supported for [Azure Monitor Metrics (Public preview)](../essentials/metrics-custom-overview.md) as a destination. If you're sending metrics to this destination, it will use the public internet without any proxy.
-
-1. Use this flowchart to determine the values of the *`Settings` and `ProtectedSettings` parameters first.
-
- ![Diagram that shows a flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
-
-1. After determining the `Settings` and `ProtectedSettings` parameter values, *provide these other parameters* when you deploy Azure Monitor Agent, using PowerShell commands, as shown in the following examples:
-
-# [Windows VM](#tab/PowerShellWindows)
-
-```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
-$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
-
-Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
-```
-
-# [Linux VM](#tab/PowerShellLinux)
-
-```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
-$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
-
-Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
-```
-
-# [Windows Arc-enabled server](#tab/PowerShellWindowsArc)
-
-```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
-$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
-
-New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
-```
-
-# [Linux Arc-enabled server](#tab/PowerShellLinuxArc)
-
-```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
-$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
-
-New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
-```
---
-### Log Analytics gateway configuration
-
-1. Follow the preceding instructions to configure proxy settings on the agent and provide the IP address and port number that corresponds to the gateway server. If you've deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
-1. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
- `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
- `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`.
- (If you're using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).)
-1. Add the **data ingestion endpoint URL** to the allowlist for the gateway
- `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`.
-1. Restart the **OMS Gateway** service to apply the changes
- `Stop-Service -Name <gateway-name>`
- `Start-Service -Name <gateway-name>`.
-
-### Private link configuration
-
-To configure the agent to use private links for network communications with Azure Monitor, follow instructions to [enable network isolation](./azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-the-azure-monitor-agent) by using [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md).
- ## Compare to legacy agents The tables below provide a comparison of Azure Monitor Agent with the legacy Azure Monitor telemetry agents for Windows and Linux.
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
Title: Enable network isolation for the Azure Monitor agent
-description: Use data collection endpoints to uniquely configure ingestion settings for your machines.
+ Title: Define Azure Monitor Agent network settings
+description: Define network settings and enable network isolation for Azure Monitor Agent.
+# Define Azure Monitor Agent network settings
-# Enable network isolation for the Azure Monitor agent
+Azure Monitor Agent supports connecting via direct proxies, a Log Analytics gateway, and private links. This article explains how to define network settings and enable network isolation for Azure Monitor Agent.
+
+## Virtual network service tags
+
+The Azure Monitor Agent supports [Azure virtual network service tags](../../virtual-network/service-tags-overview.md). Both *AzureMonitor* and *AzureResourceManager* tags are required.
+
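If you control outbound traffic with network security groups, a minimal sketch of allowing the two required service tags might look like the following; the rule names and priorities are illustrative.

```azurecli
# Sketch: allow outbound HTTPS to the AzureMonitor and AzureResourceManager service tags.
# Rule names and priorities are illustrative placeholders.
az network nsg rule create --resource-group <resource-group> --nsg-name <nsg-name> \
  --name AllowAzureMonitor --priority 100 --direction Outbound --access Allow \
  --protocol Tcp --destination-port-ranges 443 --destination-address-prefixes AzureMonitor

az network nsg rule create --resource-group <resource-group> --nsg-name <nsg-name> \
  --name AllowAzureResourceManager --priority 110 --direction Outbound --access Allow \
  --protocol Tcp --destination-port-ranges 443 --destination-address-prefixes AzureResourceManager
```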
+## Firewall requirements
+
+| Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection|
+|||||--|--|
+| Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes |
+| Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
+| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
+| Azure Commercial | management.azure.com | Only needed if sending time series data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | Port 443 | Outbound | Yes |
+| Azure Government | Replace '.com' above with '.us' | Same as above | Same as above | Same as above| Same as above |
+| Azure China | Replace '.com' above with '.cn' | Same as above | Same as above | Same as above| Same as above |
+
+If you use private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
+
+## Proxy configuration
+
+If the machine connects through a proxy server to communicate over the internet, review the following requirements to understand the network configuration required.
+
+The Azure Monitor Agent extensions for Windows and Linux can communicate either through a proxy server or a [Log Analytics gateway](./gateway.md) to Azure Monitor by using the HTTPS protocol. Use it for Azure virtual machines, Azure virtual machine scale sets, and Azure Arc for servers. Use the extensions settings for configuration as described in the following steps. Both anonymous and basic authentication by using a username and password are supported.
+
+> [!IMPORTANT]
+> Proxy configuration is not supported for [Azure Monitor Metrics (Public preview)](../essentials/metrics-custom-overview.md) as a destination. If you're sending metrics to this destination, it will use the public internet without any proxy.
+
+1. Use this flowchart to determine the values of the `Settings` and `ProtectedSettings` parameters first.
+
+ ![Diagram that shows a flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
+
+1. After determining the `Settings` and `ProtectedSettings` parameter values, *provide these other parameters* when you deploy Azure Monitor Agent, using PowerShell commands, as shown in the following examples:
+
+# [Windows VM](#tab/PowerShellWindows)
+
+```powershell
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
+
+Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
+```
+
+# [Linux VM](#tab/PowerShellLinux)
+
+```powershell
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
+
+Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
+```
+
+# [Windows Arc-enabled server](#tab/PowerShellWindowsArc)
+
+```powershell
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
+
+New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
+```
+
+# [Linux Arc-enabled server](#tab/PowerShellLinuxArc)
+
+```powershell
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
+
+New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
+```
+++
+## Log Analytics gateway configuration
+
+1. Follow the preceding instructions to configure proxy settings on the agent and provide the IP address and port number that corresponds to the gateway server. If you've deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
+1. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
+ `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
+ `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`.
+ (If you're using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).)
+1. Add the **data ingestion endpoint URL** to the allowlist for the gateway
+ `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`.
+1. Restart the **OMS Gateway** service to apply the changes
+ `Stop-Service -Name <gateway-name>`
+ `Start-Service -Name <gateway-name>`.
+
+## Enable network isolation for the Azure Monitor agent
By default, Azure Monitor agent will connect to a public endpoint to connect to your Azure Monitor environment. You can enable network isolation for your agents by creating [data collection endpoints](../essentials/data-collection-endpoint-overview.md) and adding them to your [Azure Monitor Private Link Scopes (AMPLS)](../logs/private-link-configure.md#connect-azure-monitor-resources).
-## Create data collection endpoint
+### Create data collection endpoint
To use network isolation, you must create a data collection endpoint for each of your regions for agents to connect instead of the public endpoint. See [Create a data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-data-collection-endpoint) for details on creating a DCE. An agent can only connect to a DCE in the same region. If you have agents in multiple regions, then you must create a DCE in each one.
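As a minimal sketch, assuming the `monitor-control-service` CLI extension is installed, creating a per-region DCE might look like this:

```azurecli
# Sketch: create a data collection endpoint in the agents' region with public access disabled.
az monitor data-collection endpoint create \
  --name <dce-name> \
  --resource-group <resource-group> \
  --location <agent-region> \
  --public-network-access Disabled
```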
-## Create private link
+### Create private link
With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. An Azure Monitor Private Link connects a private endpoint to a set of Azure Monitor resources, defining the boundaries of your monitoring network. That set is called an Azure Monitor Private Link Scope (AMPLS). See [Configure your Private Link](../logs/private-link-configure.md) for details on creating and configuring your AMPLS.
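A hedged sketch of creating an AMPLS, and then connecting a DCE to it as described in the next section; all names are placeholders.

```azurecli
# Sketch: create an AMPLS, then add the DCE as a scoped resource (names are placeholders).
az monitor private-link-scope create --name <ampls-name> --resource-group <resource-group>

az monitor private-link-scope scoped-resource create \
  --name <scoped-resource-name> \
  --resource-group <resource-group> \
  --scope-name <ampls-name> \
  --linked-resource <dce-resource-id>
```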
-## Add DCE to AMPLS
+### Add DCE to AMPLS
Add the data collection endpoints to a new or existing [Azure Monitor Private Link Scopes (AMPLS)](../logs/private-link-configure.md#connect-azure-monitor-resources) resource. This adds the DCE endpoints to your private DNS zone (see [how to validate](../logs/private-link-configure.md#review-and-validate-your-private-link-setup)) and allows communication via private links. You can do this from either the AMPLS resource or from within an existing DCE resource's 'Network Isolation' tab. > [!NOTE]
For your data collection endpoint(s), ensure **Accept access from public network
:::image type="content" source="media/azure-monitor-agent-dce/data-collection-endpoint-network-isolation.png" lightbox="media/azure-monitor-agent-dce/data-collection-endpoint-network-isolation.png" alt-text="Screenshot for configuring data collection endpoint network isolation."::: - Associate the data collection endpoints to the target resources by editing the data collection rule in Azure portal. From the **Resources** tab, select **Enable Data Collection Endpoints** and select a DCE for each virtual machine. See [Configure data collection for the Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md). :::image type="content" source="media/azure-monitor-agent-dce/data-collection-rule-virtual-machines-with-endpoint.png" lightbox="media/azure-monitor-agent-dce/data-collection-rule-virtual-machines-with-endpoint.png" alt-text="Screenshot for configuring data collection endpoint for an agent."::: -- ## Next steps - [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) - [Add endpoint to AMPLS resource](../logs/private-link-configure.md#connect-azure-monitor-resources)
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
# Migrate to Azure Monitor Agent from Log Analytics agent
-[Azure Monitor Agent (AMA)](./agents-overview.md) collects monitoring data from the guest operating system of Azure and hybrid virtual machines. The agent delivers the data to Azure Monitor for use by features, insights, and other services, such as [Microsoft Sentinel](../../sentintel/../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). Azure Monitor Agent replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines Azure Monitor and introduces a simplified, flexible method of configuring collection configuration called [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article provides high-level guidance on when and how to migrate to the new Azure Monitor Agent (AMA) based on the agent's benefits and limitations.
+[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines in Azure Monitor and introduces a simplified, flexible method of configuring data collection called [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent (AMA) and provides guidance on how to implement a successful migration.
+
+> [!IMPORTANT]
+> The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to Azure Monitor Agent using the information in this article.
+
+## Benefits
+
+Azure Monitor Agent provides the following benefits over legacy agents:
-## Benefits
- **Security and performance**
- - AMA uses Managed Identity or Azure Active Directory (Azure AD) tokens (for clients), which are much more secure than the legacy authentication methods.
- - AMA provides a higher events per second (EPS) upload rate compared to legacy agents.
-- **Cost savings** using data collection [using Data Collection Rules](data-collection-rule-azure-monitor-agent.md). This is one of the most useful advantages of using AMA.
- - DCRs lets you configure data collection for specific machines connected to a workspace as compared to the “all or nothing” mode that legacy agents have.
+ - Enhanced security through Managed Identity and Azure Active Directory (Azure AD) tokens (for clients).
+ - A higher events per second (EPS) upload rate.
+- **Cost savings** using data collection [using Data Collection Rules](data-collection-rule-azure-monitor-agent.md). Using Data Collection Rules is one of the most useful advantages of using Azure Monitor Agent.
+ - DCRs let you configure data collection for specific machines connected to a workspace, as compared to the “all or nothing” approach of legacy agents.
 - Using DCRs, you can define which data to ingest and which data to filter out to reduce workspace clutter and save on costs (see the sketch after this list). - **Simpler management** of data collection, including ease of troubleshooting - Easy **multihoming** on Windows and Linux.
- - Centralized, ΓÇÿin the cloudΓÇÖ agent configuration makes every action, across the data collection lifecycle, simpler and more easily scalable, from onboarding to deployment to updates and changes over time.
+  - Centralized, ‘in the cloud’ agent configuration makes every action simpler and more easily scalable throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time.
- Greater transparency and control of more capabilities and services, such as Sentinel, Defender for Cloud, and VM Insights. - **A single agent** that consolidates all features necessary to address all telemetry data collection needs across servers and client devices (running Windows 10, 11). This is the goal, though Azure Monitor Agent currently converges with the Log Analytics agents.
-## When should I migrate to the Azure Monitor Agent?
-Your migration plan to the Azure Monitor Agent should include the following considerations:
--- **Environment requirements:** Azure Monitor Agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will only be provided in this new agent. If Azure Monitor Agent supports your current environment, start transitioning to it.--- **Current and new feature requirements:** Azure Monitor Agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality. For more information, see [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features).-
- Most new capabilities in Azure Monitor will be made available only with Azure Monitor Agent. Review whether Azure Monitor Agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent.
+## Migration plan considerations
- If Azure Monitor Agent has all the core capabilities you need, start transitioning to it. If there are critical features that you require, continue with the current agent until Azure Monitor Agent reaches parity.
+Your migration plan to the Azure Monitor Agent should take into account:
-- **Tolerance for rework:** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If the setup will take a significant amount of work, consider setting up your new environment with the new agent as it's now generally available.
+- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to benefit from other important features in the new agent. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to discover what solutions and features you're using the legacy agent for.
-> [!IMPORTANT]
-> The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to Azure Monitor Agent using the information in this article.
-
-## Should I install Azure Monitor Agent together with a legacy agent?
-
-Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use their existing functionality during evaluation or migration. While this allows you to begin the transition given the limitations, keep in mind the considerations below:
-- Be careful in collecting duplicate data because it could skew query results and affect downstream features like alerts, dashboards or workbooks. For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install Azure Monitor Agent and create a data collection rule for these events and performance data, you'll collect duplicate data. Make sure you're not collecting the same data from both agents. If you're collecting the same data with both agents, ensure they're **collecting from different machines** or **going to separate destinations**. Collecting duplicate data also generates more charges for data ingestion and retention.-- Running two telemetry agents on the same machine consumes double the resources, including, but not limited to CPU, memory, storage space, and network bandwidth.
+ If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel.
-> [!NOTE]
-> When you use both agents during evaluation or migration, you can use the **Category** column of the [Heartbeat](/azure/azure-monitor/reference/tables/Heartbeat) table in your Log Analytics workspace, and filter for **Azure Monitor Agent**.
+- **Installing Azure Monitor Agent alongside a legacy agent:** If you're setting up a new environment with resources, such as deployment scripts and onboarding templates, and you still need a legacy agent, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort.
-## Current capabilities
+ Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use existing functionality during evaluation or migration. While this allows you to begin the transition, keep in mind the following considerations:
+  - Be careful not to collect duplicate data from the same machine, which could skew query results and affect downstream features like alerts, dashboards or workbooks. For example, VM Insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents.
+
+ If you install Azure Monitor Agent and create a data collection rule for these events and performance data, you'll collect duplicate data. If you're using both agents to collect the same type of data, make sure the agents are **collecting data from different machines** or **sending the data to different destinations**. Collecting duplicate data also generates more charges for data ingestion and retention.
+
+ - Running two telemetry agents on the same machine consumes double the resources, including, but not limited to CPU, memory, storage space, and network bandwidth.
-For full details about the capabilities of Azure Monitor Agent and a comparison with legacy agent capabilities, see [Azure Monitor Agent overview](../agents/agents-overview.md).
-
-If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel.
-
-## Test migration
-To ensure safe deployment during migration, begin testing with few resources running the existing Log Analytics agent in your nonproduction environment. After you validate the data collected on these test resources, roll out to production by following the same steps.
+## Migration testing
+To ensure safe deployment during migration, begin testing with a few resources running Azure Monitor Agent in your nonproduction environment. After you validate the data collected on these test resources, roll out to production by following the same steps.
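One hedged way to run that validation from the command line, assuming the `log-analytics` CLI extension and a placeholder workspace GUID:

```azurecli
# Sketch: confirm that heartbeats tagged "Azure Monitor Agent" are arriving in the workspace.
az monitor log-analytics query \
  --workspace <log-analytics-workspace-guid> \
  --analytics-query "Heartbeat | where TimeGenerated > ago(1h) | summarize count() by Category, Computer" \
  --output table
```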
See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to start collecting some of the existing data types. After you validate that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent. ## At-scale migration using Azure Policy
-[Azure Policy](../../governance/policy/overview.md) and [Resource Manager templates](../resource-manager-samples.md) provide scalability to migrate a large number of agents.
-Start by analyzing your current monitoring setup with the Log Analytics agent using the following criteria:
-
- - Sources, such as virtual machines, virtual machine scale sets, and on-premises servers
- - Data sources, such as performance counters, Windows event logs, and Syslog
- - Destinations, such as Log Analytics workspaces
+We recommend using [Azure Policy](../../governance/policy/overview.md) to migrate a large number of agents. Start by analyzing your current monitoring setup with the Log Analytics agent using the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to find sources, such as virtual machines, virtual machine scale sets, and on-premises servers.
+
+Use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator-preview) to migrate legacy agent configuration, including data sources and destinations, from the workspace to the new DCRs.
> [!IMPORTANT]
-> Before you deploy to a large number of agents, you should consider [configuring the workspace](agent-data-sources.md) to disable data collection for the Log Analytics agent. If you leave it enabled, you may collect duplicate data resulting in increased cost until you remove the Log Analytics agents from your virtual machines. Alternatively, you may choose to have duplicate collection during the migration period until you can confirm that the AMA has been deployed and configured correctly.
+> Before you deploy a large number of agents, consider [configuring the workspace](agent-data-sources.md) to disable data collection for the Log Analytics agent. If you leave data collection for the Log Analytics agent enabled, you may collect duplicate data and increase your costs. You might choose to collect duplicate data for a short period during migration until you verify that you've deployed and configured Azure Monitor Agent correctly.
-See [Using Azure Policy](azure-monitor-agent-manage.md#using-azure-policy) for details on deploying Azure Monitor Agent across a set of virtual machines. Associate the agents to the data collection rules developed during your [testing](#test-migration).
+Validate that Azure Monitor Agent is collecting data as expected and that all downstream dependencies, such as dashboards, alerts, and workbooks, function properly.
-Validate that data is flowing as expected with the Azure Monitor Agent and that all downstream dependencies like dashboards, alerts, and runbook workers. Workbooks should all continue to function using data from the new agent.
+After you confirm that Azure Monitor Agent is collecting data properly, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent.
-When you confirm that data is being collected properly, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from the resources. Don't uninstall it if you need to use it for System Center Operations Manager scenarios or others solutions not yet available on Azure Monitor Agent. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent.
+> [!IMPORTANT]
+> Don't uninstall the legacy agent if you need to use it for System Center Operations Manager scenarios or other solutions not yet available on Azure Monitor Agent.
## Next steps
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
These are the valid `level` values that you can specify in the `applicationinsig
> [!NOTE] > If an exception object is passed to the logger, then the log message (and exception object details) > will show up in the Azure portal under the `exceptions` table instead of the `traces` table.
+> If you want to see the log messages across both the `traces` and `exceptions` tables,
+> you can write a Logs (Kusto) query to union across them, e.g.
+>
+> ```
+> union traces, (exceptions | extend message = outerMessage)
+> | project timestamp, message, itemType
+> ```
### LoggingLevel
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
Title: Statsbeat in Azure Application Insights | Microsoft Docs description: Statistics about Application Insights SDKs and Auto-Instrumentation Previously updated : 09/20/2021 Last updated : 08/24/2022 ms.reviewer: heya
Statsbeat collects essential and non-essential metrics.
#### [Java](#tab/eu-java)
-Statseat supports EU Data Boundary for Application Insights resources in the following regions:
+Statsbeat supports EU Data Boundary for Application Insights resources in the following regions:
| Geo Name | Region Name | |||
Statseat supports EU Data Boundary for Application Insights resources in the fol
#### [Node](#tab/eu-node)
-N/A
+Statsbeat supports EU Data Boundary for Application Insights resources in the following regions:
+
+| Geo Name | Region Name |
+|||
+| Europe | North Europe |
+| Europe | West Europe |
+| France | France Central |
+| France | France South |
+| Germany | Germany West Central |
+| Norway | Norway East |
+| Norway | Norway West |
+| Sweden | Sweden Central |
+| Switzerland | Switzerland North |
+| Switzerland | Switzerland West |
#### [Python](#tab/eu-python)
-N/A
+Statsbeat supports EU Data Boundary for Application Insights resources in the following regions:
+
+| Geo Name | Region Name |
+|||
+| Europe | North Europe |
+| Europe | West Europe |
+| France | France Central |
+| France | France South |
+| Germany | Germany West Central |
+| Norway | Norway East |
+| Norway | Norway West |
+| Sweden | Sweden Central |
+| Switzerland | Switzerland North |
+| Switzerland | Switzerland West |
+
N/A
### Non-essential Statsbeat
-Track the Disk I/O failure when using disk persistence for retriable telemetry
+The following metrics track disk I/O failures when using disk persistence for reliable telemetry.
|Metric Name|Unit|Supported dimensions| |--|--|--|
Not supported yet.
#### [Python](#tab/python)
-Not supported yet.
+Statsbeat is enabled by default. It can be disabled by setting the environment variable `APPLICATIONINSIGHTS_STATSBEAT_DISABLED_ALL` to `true`.
+
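+For example, a minimal sketch of disabling Statsbeat before starting an app (the entry point `app.py` is illustrative):
+
+```bash
+# Disable all Statsbeat telemetry for this process.
+export APPLICATIONINSIGHTS_STATSBEAT_DISABLED_ALL=true
+python app.py  # illustrative entry point
+```
+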
+Metrics are sent to the following locations. Firewalls must allow outgoing connections to these endpoints.
+
+|Location |URL |
+|||
+|Europe |`westeurope-5.in.applicationinsights.azure.com` |
+|Outside Europe |`westus-0.in.applicationinsights.azure.com` |
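+
+As a rough connectivity check from a host behind a firewall (hostnames from the table above):
+
+```bash
+# Sketch: verify outbound HTTPS connectivity to the ingestion endpoints.
+for host in westeurope-5.in.applicationinsights.azure.com \
+            westus-0.in.applicationinsights.azure.com; do
+  if curl --silent --head --connect-timeout 5 "https://$host" > /dev/null; then
+    echo "reachable: $host"
+  else
+    echo "blocked: $host"
+  fi
+done
+```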
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
Title: Configure Container insights agent data collection | Microsoft Docs description: This article describes how you can configure the Container insights agent to control stdout/stderr and environment variables log collection. Previously updated : 10/09/2020 Last updated : 08/25/2022
Container insights collects stdout, stderr, and environmental variables from con
This article demonstrates how to create ConfigMap and configure data collection based on your requirements.
->[!NOTE]
->For Azure Red Hat OpenShift V3, a template ConfigMap file is created in the *openshift-azure-logging* namespace.
->
- ## ConfigMap file settings overview A template ConfigMap file is provided that allows you to easily edit it with your customizations without having to create it from scratch. Before starting, you should review the Kubernetes documentation about [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and familiarize yourself with how to create, configure, and deploy ConfigMaps. This will allow you to filter stderr and stdout per namespace or across the entire cluster, and environment variables for any container running across all pods/nodes in the cluster.
Perform the following steps to configure and deploy your ConfigMap configuration
1. Download the [template ConfigMap YAML file](https://aka.ms/container-azm-ms-agentconfig) and save it as container-azm-ms-agentconfig.yaml.
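+
+   For example, to download the template from the link above:
+
+   ```bash
+   # Download the template ConfigMap and save it under the expected file name.
+   curl -L -o container-azm-ms-agentconfig.yaml https://aka.ms/container-azm-ms-agentconfig
+   ```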
- > [!NOTE]
- > This step is not required when working with Azure Red Hat OpenShift V3 because the ConfigMap template already exists on the cluster.
-
-2. Edit the ConfigMap yaml file with your customizations to collect stdout, stderr, and/or environmental variables. If you are editing the ConfigMap yaml file for Azure Red Hat OpenShift V3, first run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
+2. Edit the ConfigMap yaml file with your customizations to collect stdout, stderr, and/or environmental variables.
- To exclude specific namespaces for stdout log collection, you configure the key/value using the following example: `[log_collection_settings.stdout] enabled = true exclude_namespaces = ["my-namespace-1", "my-namespace-2"]`.
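+
+As a sketch, an edited ConfigMap with this stdout exclusion, applied inline, might look like the following (this assumes the template's standard `schema-version`, `config-version`, and `log-data-collection-settings` keys; the namespace names are illustrative):
+
+```bash
+# Sketch: apply a ConfigMap that keeps stdout collection enabled but
+# excludes two illustrative namespaces (assumes the template's standard keys).
+kubectl apply -f - <<'EOF'
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: container-azm-ms-agentconfig
+  namespace: kube-system
+data:
+  schema-version: v1
+  config-version: ver1
+  log-data-collection-settings: |-
+    [log_collection_settings]
+      [log_collection_settings.stdout]
+        enabled = true
+        exclude_namespaces = ["my-namespace-1", "my-namespace-2"]
+EOF
+```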
Perform the following steps to configure and deploy your ConfigMap configuration
Save your changes in the editor.
-3. For clusters other than Azure Red Hat OpenShift V3, create ConfigMap by running the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+3. Create ConfigMap by running the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
The configuration change can take a few minutes to finish before taking effect,
## Verify configuration
-To verify the configuration was successfully applied to a cluster other than Azure Red Hat OpenShift V3, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n kube-system`. If there are configuration errors from the omsagent pods, the output will show errors similar to the following:
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n kube-system`. If there are configuration errors from the omsagent pods, the output will show errors similar to the following:
``` ***************Start Config Processing********************
Errors related to applying configuration changes are also available for review.
- From an agent pod logs using the same `kubectl logs` command.
- >[!NOTE]
- >This command is not applicable to Azure Red Hat OpenShift V3 cluster.
- >
- - From Live logs. Live logs show errors similar to the following: ```
Errors related to applying configuration changes are also available for review.
- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence and count in the last hour. -- With Azure Red Hat OpenShift V3, check the omsagent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.-
-After you correct the error(s) in ConfigMap on clusters other than Azure Red Hat OpenShift V3, save the yaml file and apply the updated ConfigMaps by running the command: `kubectl apply -f <configmap_yaml_file.yaml`. For Azure Red Hat OpenShift V3, edit and save the updated ConfigMaps by running the command:
-
-``` bash
-oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging
-```
+After you correct the error(s) in ConfigMap, save the yaml file and apply the updated ConfigMap by running the command: `kubectl apply -f <configmap_yaml_file.yaml>`.
## Applying updated ConfigMap
-If you have already deployed a ConfigMap on clusters other than Azure Red Hat OpenShift V3 and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used and then apply using the same command as before, `kubectl apply -f <configmap_yaml_file.yaml`. For Azure Red Hat OpenShift V3, edit and save the updated ConfigMaps by running the command:
-
-``` bash
-oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging
-```
+If you've already deployed a ConfigMap to a cluster and want to update it with a newer configuration, edit the ConfigMap file you used previously and then apply it by using the same command as before: `kubectl apply -f <configmap_yaml_file.yaml>`.
The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" updated`.
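+
+To confirm the rolling restart completed, a quick check such as the following can help (assumes the agent pods run in `kube-system`, as above):
+
+```bash
+# Check that the omsagent pods have restarted and are Running again.
+kubectl get pods -n kube-system | grep omsagent
+```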
azure-monitor Container Insights Transition Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-hybrid.md
Title: "Transition to using Container Insights on Azure Arc-enabled Kubernetes clusters" Previously updated : 04/05/2021 Last updated : 08/25/2022
# Transition to using Container Insights on Azure Arc-enabled Kubernetes
-On May 31, 2022 Container Insights support for Azure Red Hat OpenShift v4.x will be retired. If you use the script-based model of Container Insights for Azure Red Hat OpenShift v4.x, make sure to transition to Container Insights on [Azure Arc enabled Kubernetes](./container-insights-enable-arc-enabled-clusters.md) prior to that date.
+On May 31, 2022, Container Insights support for Azure Red Hat OpenShift v4.x was retired. If you still use the script-based model of Container Insights for Azure Red Hat OpenShift v4.x, transition to Container Insights on [Azure Arc enabled Kubernetes](./container-insights-enable-arc-enabled-clusters.md) as soon as possible.
## Steps to complete the transition
azure-monitor Activity Log Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log-insights.md
To view activity log insights at the resource group or subscription level:
1. At the top of the **Activity Logs Insights** page, select:
- 1. One or more subscriptions from the **Subscriptions** dropdown.
- 1. Resources and resource groups from the **CurrentResource** dropdown.
- 1. A time range for which to view data from the **TimeRange** dropdown.
+ - One or more subscriptions from the **Subscriptions** dropdown.
+ - Resources and resource groups from the **CurrentResource** dropdown.
+ - A time range for which to view data from the **TimeRange** dropdown.
## View resource-level activity log insights
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Capabilities that require dedicated clusters:
| East US 2 | UK South | | | Southeast Asia |
| South Central US | West Europe | | | East Asia |
| US Gov Virginia | Sweden Central | | | China North 3 |
- | West US 2 | | | | |
+ | West US 2 | Switzerland North | | | |
| West US 3 | | | | |
azure-percept Azure Percept For Deepstream Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/deepstream/azure-percept-for-deepstream-overview.md
- Title: Azure Percept for DeepStream overview
-description: A description of Azure Percept for DeepStream developer tools that provide a custom developer experience.
----- Previously updated : 08/10/2022--
-# Azure Percept for DeepStream overview
-
-Azure Percept for DeepStream includes developer tools that provide a custom developer experience. It enables you to create NVIDIA DeepStream containers using Microsoft-based images and guidance, supported models from NVIDIA out of the box, and/or bring your own models.
-
-DeepStream is NVIDIA's toolkit to develop and deploy Vision AI applications and services. It provides multi-platform, scalable, Transport Layer Security (TLS)-encrypted security that can be deployed on-premises, on the edge, and in the cloud.
-
-## Azure Percept for DeepStream offers:
--- **Simplifying your development process** -
Auto selection of AI model execution and inference provider: One of several execution providers, such as ORT, CUDA, and TensorRT, is automatically selected to simplify your development process.
--- **Customizing Region of Interest (ROI) to enable your business scenario**-
- Region of Interest (ROI) configuration widget: Percept Player, a web app widget, is included for customizing ROIs to enable event detection for your business scenario.
--- **Simplifying the configuration for pre/post processing** -
- You can add a Python-based model and parser using a configuration file, instead of hardcoding it into the pipeline.
--- **Offering a broad Pre-built AI model framework** -
- This solution supports many of the most common CV models in use today, for example NVIDIA TAO, ONNX, CAFFE, UFF (TensorFlow), and Triton.
--- **Supporting bring your own model** -
- Support for model and container customization, USB or RTSP camera and pre-recorded video streams, event-based video snippet storage in Azure Storage and Alerts, and AI model deployment via Azure IoT Module Twin update.
-
-## Azure Percept for DeepStream key components
-
-The following table provides a list of Azure Percept for DeepStream's key components and a description of each one.
-
-| Components | Details |
-|-||
-| **Edge devices** | Azure Percept for DeepStream is available on the following devices:<br> - [Azure Stack HCI](/azure-stack/hci/overview): Requires a NVIDIA GPU (T4 or A2)<br> - [NVIDIA Jetson Orin](https://www.nvidia.com/autonomous-machines/embedded-systems/jetson-orin/)<br> - [NVIDIA Jetson Xavier](https://www.nvidia.com/autonomous-machines/embedded-systems/jetson-agx-xavier/)<br><br>**Note**<br>You can use any of the listed devices with any of the development paths. Some implementation steps may differ depending on the architecture of your device. Azure Stack HCI uses AMD64. Jetson devices use ARM64.<br><br> |
-| **Computer vision models** | Azure Percept for DeepStream can work with many different computer vision (CV) models as outlined:<br><br> - **NVIDIA Models** <br>For example: Body Pose Estimation and License Plate Recognition. License Plate Recognition includes three models: traffic cam net, license plate detection, and license plate reading and other NVIDIA models.<br><br> - **ONNX Models** <br>For example: SSD-MobileNetV1, YOLOv4, Tiny YOLOv3, EfficientNet-Lite.<br><br> |
-| **Development Paths** | Azure Percept for DeepStream offers three development paths:<br><br> - **Getting started path** <br>This path uses pre-trained models and pre-recorded videos of simulated manufacturing environment to demonstrate the steps required to create an Edge AI solution using Azure Percept for DeepStream.<br>If you're just getting started on your computer vision (CV) app journey or simply want to learn more about Azure Percept for DeepStream, we recommend this path.<br><br> - **Pre-built model path** <br>This path provides pre-built parsers in Python for the CV models outlined earlier. You can easily deploy one of these models and integrate your own video stream.<br>If you're familiar with Azure IoT Edge solutions and want to leverage one of the supported models with an existing video stream, we recommend this path. <br><br> - **Bring your own model (BYOM) path**<br>This path provides you with steps of how to integrate your own custom model and parser into your Azure Percept for DeepStream Edge AI solution.<br>If you're an experienced developer who is familiar with cloud-based CV solutions and want a simplified deployment experience Azure Percept for DeepStream, we recommend this path.<br><br> |
-
-## Next steps
-
-Text to come.
-
-<!-- You're now ready to start using Azure Percept for DeepStream to create, manage, and deploy custom Edge AI solutions. We recommend the following resources to get started:
--- [Getting started checklist for Azure Percept for DeepStream](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EeWQwQ8T-LVDmTMqC62Gss0Bo_1Fbjj9I8mDSLYwlICd_Q?e=f9FajM)--- [Tutorial: Deploy a supported model to your Azure Percept for DeepStream solution ](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EQ9Wux4CkO5Iss8s82lcZj4B9XCwagaVoUEKyK0q2y-A1w?e=YfOaWn) -->
azure-percept Azure Percept On Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/hci/azure-percept-on-azure-stack-hci-overview.md
- Title: Azure Percept on Azure Stack HCI overview
-description: A description of Azure Percept on Azure Stack HCI.
----- Previously updated : 08/22/2022 --
-# Azure Percept on Azure Stack HCI overview
-Azure Percept on Azure Stack HCI is a virtualized workload that enables you to extend the capabilities of your existing [Azure Stack HCI](https://azure.microsoft.com/products/azure-stack/hci/) deployments quickly and easily by adding sophisticated AI solutions at the Edge. It is available as a preconfigured virtual hard disk (VHDX) that functions as an Azure IoT Edge device with AI capabilities.
-
-## Azure Percept on Azure Stack HCI enables you to:
-
-### Maximize your investments easily
-Maximize your existing investments in the Azure Stack HCI computer infrastructure when you run Azure Percept on Azure Stack HCI. You can leverage [Windows Admin Center (WAC)](https://www.microsoft.com/windows-server/windows-admin-center) management expertise with Azure Percept for Azure Stack HCI extension to ingest and analyze data streams from your existing IP camera infrastructure. Using WAC also enables you to easily deploy, manage, scale, and secure your Azure Percept virtual machine (VM).
-
-### Bring data to storage and compute
-Use Azure Stack HCI's robust storage and compute options to pre-process raw data at the Edge before sending it to Azure for further processing and training. Since artificial intelligence/machine learning (AI/ML) solutions at the edge generate and process a significant amount of data, using Azure Stack HCI reduces the amount of data transfer or bandwidth consumed into Azure.
-
-### Maintain device security
-Azure Percept on Azure Stack HCI provides multiple layers of security. Leverage security mechanisms and processes built into the solution, including virtual trusted platform module (TPM), secure boot, secure provisioning, trusted software, secure update, and [Microsoft Defender for IoT](https://www.microsoft.com/security/blog/2021/11/02/how-microsoft-defender-for-iot-can-secure-your-iot-devices/).
-
-## Key components of Azure Percept on Azure Stack HCI
-Azure Percept on Azure Stack HCI integrates with Azure Percept Studio, Azure IoT Edge, IoT Hub, and Spatial Analysis from Azure Cognitive Services to create an end-to-end intelligent solution that leverages your existing IP camera devices.
-
-The following diagram provides a high-level view of the Azure Percept on Azure Stack HCI architecture.
-
-![Architecture diagram for Azure Percept on Azure Stack HCI.](./media/azure-percept-component-diagram.png)
-
-**Azure Percept on Azure Stack HCI includes the following key components:**
-
-### Azure Stack HCI
-[Azure Stack HCI](https://azure.microsoft.com/products/azure-stack/hci/) is a hyperconverged infrastructure (HCI) cluster solution that hosts virtualized Windows and Linux workloads and their storage in a hybrid environment that combines on-premises infrastructure with Azure cloud services. It requires a minimum of two clustered compute nodes, scales to as many as 16 clustered nodes, and enables data pre-processing at the edge by providing robust storage and compute options. Azure Percept on Azure Stack HCI runs as a pre-configured VM on Azure Stack HCI and has failover capability to ensure continuous operation. For information about customizable solutions that you can configure to meet your needs, see [certified Azure Stack HCI systems](https://azurestackhcisolutions.azure.microsoft.com/#/catalog).
-
-### Azure Percept virtual machine (VM)
-The Azure Percept VM leverages a virtual hard disk (VHDX) that runs on the Azure Stack HCI device. It enables you to host your own AI models, communicate with the cloud via IoT Hub, and update the Azure Percept virtual machine (VM) so you can update containers, download models, and manage devices remotely.
-
-The Percept VM leverages Azure IoT Edge to communicate with [Azure IoT Hub](https://azure.microsoft.com/free/iot/). It runs locally and securely, performs AI inferencing at the Edge, and communicates with Azure services for security and updates. It includes [Defender for IoT](https://azure.microsoft.com/services/iot-defender/) to provide a lightweight security agent that proactively monitors for security threats like botnets, brute force attempts, crypto miners, malware, and chatbots, that you can also integrate into your Azure Monitor infrastructure.
-
-### Azure Percept Windows Admin Center Extension (WAC)
-[Windows Admin Center (WAC)](https://www.microsoft.com/windows-server/windows-admin-center) is a locally deployed application accessed via your browser for managing Azure Stack HCI clusters, Windows Server, and more. Azure Percept on Azure Stack HCI is installed through a WAC extension that guides the user through configuring and deploying the Percept VM and related services. It creates a secure and performant AI video inferencing solution usable from the edge to the cloud.
-
-### Azure Percept Solution Development Paths
-Whether you're a beginner, an expert, or anywhere in between, from zero to low code, to creating or bringing your own models, Azure Percept has a solution development path for you to build your Edge artificial intelligence (AI) solution. Azure Percept has three solution development paths that you can use to build Edge AI solutions: Azure Percept Studio, Azure Percept for DeepStream, and Azure Percept Open-Source Project. You aren't limited to one path; you can choose any or all of them depending on your business needs. For more information about the solution development paths, visit [Azure Percept solution development paths overview](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EU92ZnNynDBGuVn3P5Xr5gcBFKS5HQguZm7O5sEENPUvPA?e=33T6Vi).
-
-#### *Azure Percept Studio*
-[Azure Percept Studio](/azure/azure-percept/studio/azure-percept-studio-overview) is a user-friendly portal for creating, deploying, and operating Edge artificial intelligence (AI) solutions. Using a low-code to no-code approach, you can discover and complete guided workflows and create an end-to-end Edge AI solution. This solution integrates Azure IoT and Azure AI cloud services like Azure IoT Hub, IoT Edge, Azure Storage, Log Analytics, and Spatial Analysis from Azure Cognitive Services.
-
-#### *Azure Percept for DeepStream*
-[Azure Percept for DeepStream](/azure/azure-percept/deepstream/azure-percept-for-deepstream-overview) includes developer tools that provide a custom developer experience. It enables you to create NVIDIA DeepStream containers using Microsoft-based images and guidance, supported models from NVIDIA out of the box, and/or bring your own models (BYOM). DeepStream is NVIDIA's toolkit to develop and deploy Vision AI applications and services. It provides multi-platform, scalable, Transport Layer Security (TLS)-encrypted security that can be deployed on-premises, on the edge, and in the cloud.
-
-#### *Azure Percept Open-Source Project*
-[Azure Percept Open-Source Project](/azure/azure-percept/open-source/azure-percept-open-source-project-overview) is a framework for creating, deploying, and operating Edge artificial intelligence (AI) solutions at scale with the control and flexibility of open-source natively on your environment. Azure Percept Open-Source Project is fully open-sourced and leverages the open-source software (OSS) community to deliver enhanced experiences. It's a self-managed solution where you host the environment in your own cluster.
-
-## Next steps
-
-Text to come.
-
-<!-- Before you start setting up your Azure Percept virtual machine (VM), we recommend the following articles:
-- [Getting started checklist for Azure Percept on Azure Stack HCI](https://github.com/microsoft/santa-cruz-workload/blob/main/articles/getting-started-checklist-for-azure-percept.md)-- [Azure Percept solution development paths overview](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EU92ZnNynDBGuVn3P5Xr5gcBFKS5HQguZm7O5sEENPUvPA?e=DKZtr6) -
-If you're ready to start setting up your Azure Percept virtual machine (VM), we recommend the following tutorial:
-- [Tutorial: Setting up Azure Percept on Azure Stack HCI using WAC extension (Cluster server)](https://github.com/microsoft/santa-cruz-workload/blob/main/articles/tutorial-setting-up-azure-percept-using-wac-extension-cluster.md) -->
azure-percept Azure Percept Open Source Project Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/open-source/azure-percept-open-source-project-overview.md
- Title: Azure Percept Open-Source Project overview
-description: An overview of the Azure Percept Open-Source project
----- Previously updated : 08/17/2022 --
-# Azure Percept Open-Source Project overview
-
-Azure Percept Open-Source Project is a framework for creating, deploying, and operating Edge artificial intelligence (AI) solutions at scale with the control and flexibility of open-source natively on your environment. It's fully open-sourced and leverages the open-source software (OSS) community to deliver enhanced experiences. And, as a self-managed solution, you can host the experience on your own Kubernetes clusters.
-
-Azure Percept Open-Source Project has a no- to low-code portal experience as well as APIs that can be used to build custom Edge AI applications. It supports running Edge AI apps by utilizing cameras and Edge devices with different Edge runtimes and accelerators across multiple locations at scale. Since it's designed with machine learning operations (MLOps) in mind, it provides support for active learning, continuous training, and data gathering using your machine learning (ML) models running at the edge.
-
-## Azure Percept Open-Source Project offers
--- **An integrated developer experience** -
You can easily build camera-based Edge AI apps using first- and third-party ML models. In one seamless flow, you can leverage pre-built models from our partner's Model Zoo and create your own ML models with Azure Custom Vision.
--- **Solution deployment and management experience at scale**-
- Azure Percept Open-Source Project is Kubernetes native, so you can run the experience wherever Kubernetes runs; on-premises, hybrid, cloud, or multicloud environments. You can manage your experience using Kubernetes native tools such as Kubectl, our unique command line interface (CLI), and/or our no- to low-code native web portal. Edge AI apps and assets you create are projected and managed as Kubernetes objects, which allows you to rely on the Kubernetes control plane to manage the state of your Edge AI assets across many environments at scale.
--- **Standard-based**-
- Azure Percept Open-Source Project is built on and supports popular industrial standards, protocols, and frameworks like Open Platform Communications Unified Architecture (OPC-UA), Open Network Video Interface Forum (ONVIF), OpenTelemetry, CloudEvents, Distributed Application Runtime (Dapr), Message Queuing Telemetry Transport (MQTT), Open Neural Network Exchange (ONNX), Akri, Kubectl, Helm, and many others.
--- **Zero-friction adoption**-
Even without any Edge hardware, you can get started with a few commands, then seamlessly transition from prototype/pilot to production at scale. Azure Percept Open-Source Project has an easy-to-use no- to low-code portal experience that allows developers to create and manage Edge AI solutions in minutes instead of days or months.
--- **Azure powered and platform agnostic**-
- Azure Percept Open-Source Project natively uses and supports Azure Edge and AI Services like Azure IoT Hub, Azure IoT Edge, Azure Cognitive Services, Azure Storage Server, Azure ML, and so on. At the same time, it also allows you to modify the experience for use cases that require the use of other services (Azure or non-Azure) or other Open-Source Software (OSS) tools.
-
-## Next steps
-
-Text to come.
-
-<!-- You're now ready to start using Azure Percept Open-Source Project. We recommend the following resources to get started.
---- [Introduction to Azure Percept for Open-Source Project core concepts](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EQwRE6w96T1OiO_kstWw1lMBs1yZFUow_ik3kx3rV12EVg?e=bactOi) --- [Tutorial: Create an Edge AI solution with Azure Percept for Open-Source Project](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/ERF8mxgtOqhIt2YJWFafuZoBC6kZ6hC-iRAMuCJeyZjD-w?e=BS4cN5)>
azure-percept Azure Percept Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/studio/azure-percept-studio-overview.md
- Title: Azure Percept Studio overview
-description: Description of Azure Percept Studio.
----- Previously updated : 08/08/2022--
-# Azure Percept Studio overview
-
-Azure Percept Studio is a user-friendly portal for creating, deploying, and operating Edge artificial intelligence (AI) solutions. Using a low-code to no-code approach, you can discover and complete guided workflows and create an end-to-end Edge AI solution. This solution integrates Azure IoT and Azure AI cloud services like Azure IoT Hub, IoT Edge, Azure Storage, Log Analytics, and Spatial Analysis from Azure Cognitive Services.
-
-With Azure Percept Studio, you can connect your Edge AI compute devices and cameras and then configure and apply the pre-built AI skills included with Azure Percept Studio to automate and transform your operations at the edge. For example, you can use your cameras to count people in an area, detect when people cross a line, or when people enter/exit a restricted or secured area. You can then use AI skills to help you analyze this data in real-time so you can manage queues, space utilization, and occupancy, like a store entrance or exit, a curbside pickup area, or intruders on secure premises.
-
-## Azure Percept Studio offers:
--- **No code, low code integrated flows**-
- Whether you're a beginner or an advanced developer working on a pilot solution, Azure Percept Studio offers access to well-integrated workflows that you can use to reduce friction around building Edge AI solutions. You can create a pilot Edge AI solution in 10 minutes.
--- **People understanding AI skills**-
Azure Spatial Analysis is fully integrated in Azure Percept. Spatial Analysis detects the presence and movements of people in real-time video feeds from IP cameras. There are three skills available around people understanding: people counting in an area, detecting when people cross a line, and detecting when people enter/exit an area.
--- **Gain insights and act**-
- Once your solution is created, you can operate your devices and solutions remotely, monitor multiple video streams, and create live inference telemetry. To optimize your operations at the Edge, you can then aggregate inference data over time and derive insights and trends that you can use in real time to create alerts that help you be proactive instead of reactive.
-
-## Next steps
-
-Text to come.
-
-<!-- If you haven't set up your Azure Percept on Azure Stack HCI, we recommend the following tutorial to start setting up your VM using Azure Percept Windows Admin Center Extension (WAC):
--- [Set up Azure Percept on Azure Stack HCI using WAC extensions](set-up-azure-percept-using-wac-extension-cluster.md)-
-If you have already set up your Azure Percept on Azure Stack HCI and are ready to start building your edge AI solution, we recommend the following tutorial:
--- [Create a no-code Edge AI solution using Azure Percept Studio](AzP%20Studio%20Guide.md).-->
azure-resource-manager Concepts View Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/concepts-view-definition.md
description: Describes the concept of creating view definition for Azure Managed
Previously updated : 06/12/2019 Last updated : 08/25/2022 # View definition artifact in Azure Managed Applications
-View definition is an optional artifact in Azure Managed Applications. It allows to customize overview page and add more views such as Metrics and Custom resources.
+View definition is an optional artifact in Azure Managed Applications. It allows you to customize the overview page and add more views such as Metrics and Custom resources.
This article provides an overview of the view definition artifact and its capabilities. ## View definition artifact
-The view definition artifact must be named **viewDefinition.json** and placed at the same level as **createUiDefinition.json** and **mainTemplate.json** in the .zip package that creates a managed application definition. To learn how to create the .zip package and publish a managed application definition, see [Publish an Azure Managed Application definition](publish-service-catalog-app.md)
+The view definition artifact must be named _viewDefinition.json_ and placed at the same level as _createUiDefinition.json_ and _mainTemplate.json_ in the _.zip_ package that creates a managed application definition. To learn how to create the _.zip_ package and publish a managed application definition, see [Publish an Azure Managed Application definition](publish-service-catalog-app.md).
## View definition schema
-The **viewDefinition.json** file has only one top level `views` property, which is an array of views. Each view is shown in the managed application user interface as a separate menu item in the table of contents. Each view has a `kind` property that sets the type of the view. It must be set to one of the following values: [Overview](#overview), [Metrics](#metrics), [CustomResources](#custom-resources), [Associations](#associations). For more information, see current [JSON schema for viewDefinition.json](https://schema.management.azure.com/schemas/viewdefinition/0.0.1-preview/ViewDefinition.json#).
+The _viewDefinition.json_ file has only one top level `views` property, which is an array of views. Each view is shown in the managed application user interface as a separate menu item in the table of contents. Each view has a `kind` property that sets the type of the view. It must be set to one of the following values: [Overview](#overview), [Metrics](#metrics), [CustomResources](#custom-resources), [Associations](#associations). For more information, see current [JSON schema for viewDefinition.json](https://schema.management.azure.com/schemas/viewdefinition/0.0.1-preview/ViewDefinition.json#).
Sample JSON for view definition: ```json {
- "$schema": "https://schema.management.azure.com/schemas/viewdefinition/0.0.1-preview/ViewDefinition.json#",
- "contentVersion": "0.0.0.1",
- "views": [
- {
- "kind": "Overview",
- "properties": {
- "header": "Welcome to your Azure Managed Application",
- "description": "This managed application is for demo purposes only.",
- "commands": [
- {
- "displayName": "Test Action",
- "path": "testAction"
- }
- ]
- }
- },
- {
- "kind": "Metrics",
- "properties": {
- "displayName": "This is my metrics view",
- "version": "1.0.0",
- "charts": [
- {
- "displayName": "Sample chart",
- "chartType": "Bar",
- "metrics": [
- {
- "name": "Availability",
- "aggregationType": "avg",
- "resourceTagFilter": [ "tag1" ],
- "resourceType": "Microsoft.Storage/storageAccounts",
- "namespace": "Microsoft.Storage/storageAccounts"
- }
- ]
- }
- ]
- }
- },
- {
- "kind": "CustomResources",
- "properties": {
- "displayName": "Test custom resource type",
- "version": "1.0.0",
- "resourceType": "testCustomResource",
- "createUIDefinition": { },
- "commands": [
- {
- "displayName": "Custom Context Action",
- "path": "testCustomResource/testContextAction",
- "icon": "Stop",
- "createUIDefinition": { }
- }
+ "$schema": "https://schema.management.azure.com/schemas/viewdefinition/0.0.1-preview/ViewDefinition.json#",
+ "contentVersion": "0.0.0.1",
+ "views": [
+ {
+ "kind": "Overview",
+ "properties": {
+ "header": "Welcome to your Azure Managed Application",
+ "description": "This managed application is for demo purposes only.",
+ "commands": [
+ {
+ "displayName": "Test Action",
+ "path": "testAction"
+ }
+ ]
+ }
+ },
+ {
+ "kind": "Metrics",
+ "properties": {
+ "displayName": "This is my metrics view",
+ "version": "1.0.0",
+ "charts": [
+ {
+ "displayName": "Sample chart",
+ "chartType": "Bar",
+ "metrics": [
+ {
+ "name": "Availability",
+ "aggregationType": "avg",
+ "resourceTagFilter": [
+ "tag1"
],
- "columns": [
- {"key": "name", "displayName": "Name"},
- {"key": "properties.myProperty1", "displayName": "Property 1"},
- {"key": "properties.myProperty2", "displayName": "Property 2", "optional": true}
- ]
- }
- },
- {
- "kind": "Associations",
- "properties": {
- "displayName": "Test association resource type",
- "version": "1.0.0",
- "targetResourceType": "Microsoft.Compute/virtualMachines",
- "createUIDefinition": { }
- }
- }
- ]
+ "resourceType": "Microsoft.Storage/storageAccounts",
+ "namespace": "Microsoft.Storage/storageAccounts"
+ }
+ ]
+ }
+ ]
+ }
+ },
+ {
+ "kind": "CustomResources",
+ "properties": {
+ "displayName": "Test custom resource type",
+ "version": "1.0.0",
+ "resourceType": "testCustomResource",
+ "createUIDefinition": {},
+ "commands": [
+ {
+ "displayName": "Custom Context Action",
+ "path": "testCustomResource/testContextAction",
+ "icon": "Stop",
+ "createUIDefinition": {}
+ }
+ ],
+ "columns": [
+ {
+ "key": "name",
+ "displayName": "Name"
+ },
+ {
+ "key": "properties.myProperty1",
+ "displayName": "Property 1"
+ },
+ {
+ "key": "properties.myProperty2",
+ "displayName": "Property 2",
+ "optional": true
+ }
+ ]
+ }
+ },
+ {
+ "kind": "Associations",
+ "properties": {
+ "displayName": "Test association resource type",
+ "version": "1.0.0",
+ "targetResourceType": "Microsoft.Compute/virtualMachines",
+ "createUIDefinition": {}
+ }
+ }
+ ]
} ```
Sample JSON for view definition:
`"kind": "Overview"`
-When you provide this view in **viewDefinition.json**, it overrides the default Overview page in your managed application.
+When you provide this view in _viewDefinition.json_, it overrides the default Overview page in your managed application.
```json {
- "kind": "Overview",
- "properties": {
- "header": "Welcome to your Azure Managed Application",
- "description": "This managed application is for demo purposes only.",
- "commands": [
- {
- "displayName": "Test Action",
- "path": "testAction"
- }
- ]
- }
+ "kind": "Overview",
+ "properties": {
+ "header": "Welcome to your Azure Managed Application",
+ "description": "This managed application is for demo purposes only.",
+ "commands": [
+ {
+ "displayName": "Test Action",
+ "path": "testAction"
+ }
+ ]
+ }
} ```
When you provide this view in **viewDefinition.json**, it overrides the default
|||| |header|No|The header of the overview page.| |description|No|The description of your managed application.|
-|commands|No|The array of additional toolbar buttons of the overview page, see [commands](#commands).|
+|commands|No|The array of more toolbar buttons of the overview page, see [commands](#commands).|
-![Screenshot shows the Overview for a managed application with a Test Action control to run a demo application.](./media/view-definition/overview.png)
## Metrics
The metrics view enables you to collect and aggregate data from your managed app
```json {
- "kind": "Metrics",
- "properties": {
- "displayName": "This is my metrics view",
- "version": "1.0.0",
- "charts": [
- {
- "displayName": "Sample chart",
- "chartType": "Bar",
- "metrics": [
- {
- "name": "Availability",
- "aggregationType": "avg",
- "resourceTagFilter": [ "tag1" ],
- "resourceType": "Microsoft.Storage/storageAccounts",
- "namespace": "Microsoft.Storage/storageAccounts"
- }
- ]
- }
+ "kind": "Metrics",
+ "properties": {
+ "displayName": "This is my metrics view",
+ "version": "1.0.0",
+ "charts": [
+ {
+ "displayName": "Sample chart",
+ "chartType": "Bar",
+ "metrics": [
+ {
+ "name": "Availability",
+ "aggregationType": "avg",
+ "resourceTagFilter": [
+ "tag1"
+ ],
+ "resourceType": "Microsoft.Storage/storageAccounts",
+ "namespace": "Microsoft.Storage/storageAccounts"
+ }
]
- }
+ }
+ ]
+ }
} ```
The metrics view enables you to collect and aggregate data from your managed app
|||| |name|Yes|The name of the metric.| |aggregationType|Yes|The aggregation type to use for this metric. Supported aggregation types: `none, sum, min, max, avg, unique, percentile, count`|
-|namespace|No|Additional information to use when determining the correct metrics provider.|
+|namespace|No| More information to use when determining the correct metrics provider.|
|resourceTagFilter|No|The resource tags array (will be separated with `or` word) for which metrics would be displayed. Applies on top of resource type filter.| |resourceType|Yes|The resource type for which metrics would be displayed.|
-![Screenshot shows a Monitoring page called This is my metrics view for a managed application.](./media/view-definition/metrics.png)
## Custom resources `"kind": "CustomResources"`
-You can define multiple views of this type. Each view represents a **unique** custom resource type from the custom provider you defined in **mainTemplate.json**. For an introduction to custom providers, see [Azure Custom Providers Preview overview](../custom-providers/overview.md).
+You can define multiple views of this type. Each view represents a **unique** custom resource type from the custom provider you defined in _mainTemplate.json_. For an introduction to custom providers, see [Azure Custom Providers Preview overview](../custom-providers/overview.md).
In this view, you can perform GET, PUT, DELETE, and POST operations for your custom resource type. POST operations could be global custom actions or custom actions in the context of your custom resource type. ```json {
- "kind": "CustomResources",
- "properties": {
- "displayName": "Test custom resource type",
- "version": "1.0.0",
- "resourceType": "testCustomResource",
- "icon": "Polychromatic.ResourceList",
- "createUIDefinition": { },
- "commands": [
- {
- "displayName": "Custom Context Action",
- "path": "testCustomResource/testContextAction",
- "icon": "Stop",
- "createUIDefinition": { },
- }
- ],
- "columns": [
- {"key": "name", "displayName": "Name"},
- {"key": "properties.myProperty1", "displayName": "Property 1"},
- {"key": "properties.myProperty2", "displayName": "Property 2", "optional": true}
- ]
- }
+ "kind": "CustomResources",
+ "properties": {
+ "displayName": "Test custom resource type",
+ "version": "1.0.0",
+ "resourceType": "testCustomResource",
+ "icon": "Polychromatic.ResourceList",
+ "createUIDefinition": {},
+ "commands": [
+ {
+ "displayName": "Custom Context Action",
+ "path": "testCustomResource/testContextAction",
+ "icon": "Stop",
+ "createUIDefinition": {},
+ }
+ ],
+ "columns": [
+ {
+ "key": "name",
+ "displayName": "Name"
+ },
+ {
+ "key": "properties.myProperty1",
+ "displayName": "Property 1"
+ },
+ {
+ "key": "properties.myProperty2",
+ "displayName": "Property 2",
+ "optional": true
+ }
+ ]
+ }
} ``` |Property|Required|Description| ||||
-|displayName|Yes|The displayed title of the view. The title should be **unique** for each CustomResources view in your **viewDefinition.json**.|
+|displayName|Yes|The displayed title of the view. The title should be **unique** for each CustomResources view in your _viewDefinition.json_.|
|version|No|The version of the platform used to render the view.| |resourceType|Yes|The custom resource type. Must be a **unique** custom resource type of your custom provider.| |icon|No|The icon of the view. List of example icons is defined in [JSON Schema](https://schema.management.azure.com/schemas/viewdefinition/0.0.1-preview/ViewDefinition.json#).| |createUIDefinition|No|Create UI Definition schema for create custom resource command. For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md)|
-|commands|No|The array of additional toolbar buttons of the CustomResources view, see [commands](#commands).|
+|commands|No|The array of more toolbar buttons of the CustomResources view, see [commands](#commands).|
|columns|No|The array of columns of the custom resource. If not defined the `name` column will be shown by default. The column must have `"key"` and `"displayName"`. For key, provide the key of the property to display in a view. If nested, use dot as delimiter, for example, `"key": "name"` or `"key": "properties.property1"`. For display name, provide the display name of the property to display in a view. You can also provide an `"optional"` property. When set to true, the column is hidden in a view by default.|
-![Screenshot shows a Resources page called Test custom resource type and the control Custom Context Action.](./media/view-definition/customresources.png)
## Commands
-Commands is an array of additional toolbar buttons that are displayed on page. Each command represents a POST action from your Azure Custom Provider defined in **mainTemplate.json**. For an introduction to custom providers, see [Azure Custom Providers overview](../custom-providers/overview.md).
+The `commands` property is an array of more toolbar buttons that are displayed on page. Each command represents a POST action from your Azure Custom Provider defined in _mainTemplate.json_. For an introduction to custom providers, see [Azure Custom Providers overview](../custom-providers/overview.md).
```json {
- "commands": [
- {
- "displayName": "Start Test Action",
- "path": "testAction",
- "icon": "Start",
- "createUIDefinition": { }
- },
- ]
+ "commands": [
+ {
+ "displayName": "Start Test Action",
+ "path": "testAction",
+ "icon": "Start",
+ "createUIDefinition": {}
+    }
+ ]
} ``` |Property|Required|Description| |||| |displayName|Yes|The displayed name of the command button.|
-|path|Yes|The custom provider action name. The action must be defined in **mainTemplate.json**.|
+|path|Yes| Must be a custom provider action name. The action must be defined in _mainTemplate.json_. <br><br> Doesn't accept dynamic values like a URI that's output from _mainTemplate.json_. |
|icon|No|The icon of the command button. List of example icons is defined in [JSON Schema](https://schema.management.azure.com/schemas/viewdefinition/0.0.1-preview/ViewDefinition.json#).| |createUIDefinition|No|Create UI Definition schema for command. For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md).|
Commands is an array of additional toolbar buttons that are displayed on page. E
`"kind": "Associations"`
-You can define multiple views of this type. This view allows you to link existing resources to the managed application through the custom provider you defined in **mainTemplate.json**. For an introduction to custom providers, see [Azure Custom Providers Preview overview](../custom-providers/overview.md).
+You can define multiple views of this type. This view allows you to link existing resources to the managed application through the custom provider you defined in _mainTemplate.json_. For an introduction to custom providers, see [Azure Custom Providers Preview overview](../custom-providers/overview.md).
-In this view you can extend existing Azure resources based on the `targetResourceType`. When a resource is selected, it will create an onboarding request to the **public** custom provider, which can apply a side effect to the resource.
+In this view, you can extend existing Azure resources based on the `targetResourceType`. When a resource is selected, an onboarding request is created and sent to the **public** custom provider, which can apply a side effect to the resource.
```json {
- "kind": "Associations",
- "properties": {
- "displayName": "Test association resource type",
- "version": "1.0.0",
- "targetResourceType": "Microsoft.Compute/virtualMachines",
- "createUIDefinition": { }
- }
+ "kind": "Associations",
+ "properties": {
+ "displayName": "Test association resource type",
+ "version": "1.0.0",
+ "targetResourceType": "Microsoft.Compute/virtualMachines",
+ "createUIDefinition": {}
+ }
} ``` |Property|Required|Description| ||||
-|displayName|Yes|The displayed title of the view. The title should be **unique** for each Associations view in your **viewDefinition.json**.|
+|displayName|Yes|The displayed title of the view. The title should be **unique** for each Associations view in your _viewDefinition.json_.|
|version|No|The version of the platform used to render the view.|
-|targetResourceType|Yes|The target resource type. This is the resource type that will be displayed for resource onboarding.|
+|targetResourceType|Yes|The target resource type. This resource type will be displayed for resource onboarding.|
|createUIDefinition|No|Create UI Definition schema for create association resource command. For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md)| ## Looking for help
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
The first time you visit the [www.videoindexer.ai/](https://www.videoindexer.ai/
With a trial account, Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users who subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal).
-> [!NOTE]
-> The trial account is not available on the Azure Government cloud.
+The trial account is not available on the Azure Government cloud. For other Azure Government limitations, see [Limitations of Azure Video Indexer on Azure Government](connect-to-azure.md#limitations-of-azure-video-indexer-on-azure-government).
You can later create a paid account where you're not limited by the quota. Two types of paid accounts are available to you: Azure Resource Manager (ARM) (currently in preview) and classic (generally available). The main difference between the two is the account management platform. While classic accounts are built on API Management, ARM-based account management is built on Azure Resource Manager, which lets you natively apply access control to all services with Azure role-based access control (Azure RBAC).
azure-video-indexer Compare Video Indexer With Media Services Presets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compare-video-indexer-with-media-services-presets.md
# Compare Azure Media Services v3 presets and Azure Video Indexer
-This article compares the capabilities of **Azure Video Indexer APIs** and **Media Services v3 APIs**.
+This article compares the capabilities of **Azure Video Indexer (AVI) APIs** and **Media Services v3 APIs**.
Currently, there is an overlap between features offered by the [Azure Video Indexer APIs](https://api-portal.videoindexer.ai/) and the [Media Services v3 APIs](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01/Encoding.json). The following table offers the current guideline for understanding the differences and similarities.
Currently, there is an overlap between features offered by the [Azure Video Inde
|||| |Media Insights|[Enhanced](video-indexer-output-json-v2.md) |[Fundamentals](/azure/media-services/latest/analyze-video-audio-files-concept)| |Experiences|See the full list of supported features: <br/> [Overview](video-indexer-overview.md)|Returns video insights only|
-|Billing|[Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/#analytics) |[Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/#analytics) |
+|Pricing|[AVI pricing](https://azure.microsoft.com/pricing/details/video-indexer/) |[Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/#analytics) |
|Compliance|For the most current compliance updates, visit [Azure Compliance Offerings.pdf](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942/file/178110/23/Microsoft%20Azure%20Compliance%20Offerings.pdf) and search for "Azure Video Indexer" to see if it complies with a certificate of interest.|For the most current compliance updates, visit [Azure Compliance Offerings.pdf](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942/file/178110/23/Microsoft%20Azure%20Compliance%20Offerings.pdf) and search for "Media Services" to see if it complies with a certificate of interest.|
-|Free Trial|East US|Not available|
+|Trial|East US|Not available|
|Region availability|See [Cognitive Services availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)|See [Media Services availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=media-services).| ## Next steps
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
Title: Create a classic Azure Video Indexer account connected to Azure description: Learn how to create a classic Azure Video Indexer account connected to Azure. Previously updated : 05/03/2022 Last updated : 08/24/2022
If your storage account is behind a firewall, see [storage account that is behin
1. Use the [Azure](https://portal.azure.com/) portal to create an Azure Media Services account, as described in [Create an account](/azure/media-services/previous/media-services-portal-create-account).
- Make sure the Media Services account was created with the classic APIs.
-
- :::image type="content" alt-text="Screenshot that shows how to use the classic API." source="./media/create-account/enable-classic-api.png":::
- > [!NOTE] > Make sure to write down the Media Services resource and account names. 1. Before you can play your videos in the Azure Video Indexer web app, you must start the default **Streaming Endpoint** of the new Media Services account.
The following Azure Media Services related considerations apply:
Streaming endpoints have a considerable startup time. Therefore, it may take several minutes from the time you connect your account to Azure until your videos can be streamed and watched in the Azure Video Indexer web app. * If you connect to an existing Media Services account, Azure Video Indexer doesn't change the default Streaming Endpoint configuration. If there's no running **Streaming Endpoint**, you can't watch videos from this Media Services account or in Azure Video Indexer.
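+
+A minimal sketch of starting the default streaming endpoint with the Azure CLI (account and resource group names are illustrative):
+
+```bash
+# Sketch: start the default streaming endpoint of the Media Services account.
+az ams streaming-endpoint start \
+  --resource-group myResourceGroup \
+  --account-name myMediaServicesAccount \
+  --name default
+```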
-* If you connect automatically, Azure Video Indexer sets the media **Reserved Units** to 10 S3 units:
-
- ![Media Services reserved units](./media/create-account/ams-reserved-units.png)
## Create a classic account
To create a paid account in Azure Government, follow the instructions in [Create
### Limitations of Azure Video Indexer on Azure Government * Only paid accounts (ARM or classic) are available on Azure Government.
-* No manual content moderation available in Government cloud.
+* No manual content moderation available in Azure Government.
In the public cloud, when content is deemed offensive by content moderation, the customer can ask for a human to review that content and potentially reverse the decision.
-* Bing description - in Gov cloud we won't present a description of celebrities and named entities identified. This is a UI capability only.
+* Bing description - in Azure Government, descriptions of identified celebrities and named entities aren't presented. This is a UI capability only.
## Clean up resources
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
Last updated 06/10/2022
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-This tutorial walks you through the steps of creating an Azure Video Indexer account and its accompanying resources by using the Azure portal. The created account is an Azure Resource Manager (ARM) based account (currently in preview). For information about different Azure Video Indexer account types, see the [Overview of account types](accounts-overview.md) topic.
+To start using the unlimited features and robust capabilities of Azure Video Indexer, you need to create an Azure Video Indexer unlimited account.
+
+This tutorial walks you through the steps of creating the Azure Video Indexer account and its accompanying resources by using the Azure portal. The created account is an Azure Resource Manager (ARM) account. For information about different account types, see [Overview of account types](accounts-overview.md).
## Prerequisites
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
You need an Azure Media Services account. You can create one for free through [C
### managedIdentityId
+> [!NOTE]
+> A user-assigned managed identity must have at least a Contributor role on the Media Services account before deployment. When using a system-assigned managed identity, assign the Contributor role after deployment.
+
* Type: string
* Description: The resource ID of the managed identity that's used to grant access between the Azure Media Services resource and the Azure Video Indexer account.
* Required: true
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
Title: Sign up for Azure Video Indexer and upload your first video - Azure description: Learn how to sign up and upload your first video using the Azure Video Indexer portal. Previously updated : 01/25/2021 Last updated : 08/24/2022
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-This getting started quickstart shows how to sign in to the Azure Video Indexer website and how to upload your first video.
+This quickstart shows you how to sign in to the Azure Video Indexer [website](https://www.videoindexer.ai/) and how to upload your first video. When you visit the Azure Video Indexer website for the first time, a free trial account is automatically created for you. With the free trial account, you get a certain number of free indexing minutes. When you create an unlimited (paid) account, you aren't limited by the quota.
-When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you aren't limited by the quota). With free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With paid option, you create an Azure Video Indexer account that is [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed, for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
+With the free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With the paid option, you create an Azure Video Indexer account that is [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for indexed minutes; for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
+
+For details about available accounts, see [Azure Video Indexer account types](accounts-overview.md).
## Sign up for Azure Video Indexer
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md
Title: Troubleshoot the Azure Backup agent description: In this article, learn how to troubleshoot the installation and registration of the Azure Backup agent. Previously updated : 05/31/2022 Last updated : 08/26/2022
We recommend that you check the following before you start troubleshooting Micro
| Causes | Recommended actions |
| --- | --- |
-| **Vault credentials aren't valid** <br/> <br/> Vault credential files might be corrupt, might have expired, or they might have a different file extension than *.vaultCredentials*. (For example, they might have been downloaded more than 10 days before the time of registration.)| [Download new credentials](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) from the Recovery Services vault on the Azure portal. Then take these steps, as appropriate: <br><br>- If you've already installed and registered MARS, open the Microsoft Azure Backup Agent MMC console. Then select **Register Server** in the **Actions** pane to complete the registration with the new credentials. <br> - If the new installation fails, try reinstalling with the new credentials. <br><br> **Note**: If multiple vault credential files have been downloaded, only the latest file is valid for the next 10 days. We recommend that you download a new vault credential file. |
+| **Vault credentials aren't valid** <br/> <br/> Vault credential files might be corrupt, might have expired, or they might have a different file extension than *.vaultCredentials*. (For example, they might have been downloaded more than 10 days before the time of registration.) | [Download new credentials](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) from the Recovery Services vault on the Azure portal. Then take these steps, as appropriate: <br><br>- If you've already installed and registered MARS, open the Microsoft Azure Backup Agent MMC console. Then select **Register Server** in the **Actions** pane to complete the registration with the new credentials. <br> - If the new installation fails, try reinstalling with the new credentials. <br><br> **Note**: If multiple vault credential files have been downloaded, only the latest file is valid for the next 10 days. We recommend that you download a new vault credential file. <br><br> - To prevent errors during vault registration, ensure that MARS agent version 2.0.9249.0 or later is installed. If not, we recommend that you install it [from here](https://aka.ms/azurebackup_agent).|
| **Proxy server/firewall is blocking registration** <br/>Or <br/>**No internet connectivity** <br/><br/> If your machine has limited internet access, and you don't ensure the firewall, proxy, and network settings allow access to the FQDNS and public IP addresses, the registration will fail.| Follow these steps:<br/> <br><br>- Work with your IT team to ensure the system has internet connectivity.<br>- If you don't have a proxy server, ensure the proxy option isn't selected when you register the agent. [Check your proxy settings](#verifying-proxy-settings-for-windows).<br>- If you do have a firewall/proxy server, work with your networking team to allow access to the following FQDNs and public IP addresses. Access to all of the URLs and IP addresses listed below uses the HTTPS protocol on port 443.<br/> <br> **URLs**<br> `*.microsoft.com` <br> `*.windowsazure.com` <br> `*.microsoftonline.com` <br> `*.windows.net` <br> `*blob.core.windows.net` <br> `*queue.core.windows.net` <br> `*blob.storage.azure.net`<br><br><br>- If you are a US Government customer, ensure that you have access to the following URLs:<br><br> `www.msftncsi.com` <br> `*.microsoft.com` <br> `*.windowsazure.us` <br> `*.microsoftonline.us` <br> `*.windows.net` <br> `*.usgovcloudapi.net` <br> `*blob.core.windows.net` <br> `*queue.core.windows.net` <br> `*blob.storage.azure.net` <br><br> Try registering again after you complete the preceding troubleshooting steps.<br></br> If your connection is via Azure ExpressRoute, make sure the settings are configured as described in Azure [ExpressRoute support](../backup/backup-support-matrix-mars-agent.md#azure-expressroute-support). | | **Antivirus software is blocking registration** | If you've antivirus software installed on the server, add the exclusion rules to the antivirus scan for: <br><br> - Every file and folder under the *scratch* and *bin* folder locations - `<InstallPath>\Scratch\*` and `<InstallPath>\Bin\*`. <br> - cbengine.exe |
backup Install Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/install-mars-agent.md
Title: Install the Microsoft Azure Recovery Services (MARS) agent description: Learn how to install the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines. Previously updated : 12/15/2021 Last updated : 08/26/2022
The data that's available for backup depends on where the agent is installed.
* Make sure that you have an Azure account if you need to back up a server or client to Azure. If you don't have an account, you can create a [free one](https://azure.microsoft.com/free/) in just a few minutes.
* Verify internet access on the machines that you want to back up.
* Ensure the user installing and configuring the MARS agent has local administrator privileges on the server to be protected.
+* To prevent errors during vault registration, ensure that MARS agent version 2.0.9249.0 or later is installed. If not, we recommend that you install it [from here](https://aka.ms/azurebackup_agent).
[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]
cognitive-services Call Center Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-transcription.md
Internally, Microsoft uses these technologies to support Microsoft customer call
Some businesses are required to transcribe conversations in real time. You can use real-time transcription to identify keywords and trigger searches for content and resources that are relevant to the conversation, to monitor sentiment, to improve accessibility, or to provide translations for customers and agents who aren't native speakers.
-For scenarios that require real-time transcription, we recommend using the [Speech SDK](speech-sdk.md). Currently, speech-to-text is available in [more than 20 languages](language-support.md), and the SDK is available in C++, C#, Java, Python, JavaScript, Objective-C, and Go. Samples are available in each language on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk). For the latest news and updates, see [Release notes](releasenotes.md).
+For scenarios that require real-time transcription, we recommend using the [Speech SDK](speech-sdk.md). The SDK is available in C++, C#, Java, Python, JavaScript, Objective-C, and Go. Samples are available in each language on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk). For the latest news and updates, see [Release notes](releasenotes.md).
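To make the flow concrete, here's a minimal sketch of continuous (real-time) recognition with the Python Speech SDK. The key, region, and audio file name are placeholders, and error handling is omitted.

```python
import time
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region; a microphone can be used instead of a file.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(filename="call.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

done = False

def on_recognized(evt):
    # Each final result arrives as the audio streams in.
    print("TRANSCRIBED:", evt.result.text)

def on_stopped(evt):
    global done
    done = True

recognizer.recognized.connect(on_recognized)
recognizer.session_stopped.connect(on_stopped)
recognizer.canceled.connect(on_stopped)

recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
recognizer.stop_continuous_recognition()
```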
Internally, Microsoft uses the previously mentioned technologies to analyze Microsoft customer calls in real time, as shown in the following diagram:
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
Profanity filter is applied to the result `Text` and `MaskedNormalizedForm` prop
## Language identification
-If the language in the audio could change, use continuous [language identification](language-identification.md). Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md#language-identification). You provide up to 10 candidate languages, at least one of which is expected be in the audio. The Speech service returns the most likely language in the audio.
+If the language in the audio could change, use continuous [language identification](language-identification.md). Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). You provide up to 10 candidate languages, at least one of which is expected to be in the audio. The Speech service returns the most likely language in the audio.
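As a rough illustration, the candidate-language list maps to `AutoDetectSourceLanguageConfig` in the Python Speech SDK; the key, region, and candidate languages below are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")

# Up to 10 candidate languages; at least one should be present in the audio.
auto_detect_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE", "fr-FR"])

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_config)

result = recognizer.recognize_once()
detected = speechsdk.AutoDetectSourceLanguageResult(result)
print("Detected language:", detected.language)
print("Text:", result.text)
```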
## Customizations to improve accuracy
cognitive-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/conversation-transcription.md
Audio data is processed live to return the speaker identifier and transcript, an
## Language support
-Currently, conversation transcription supports [all speech-to-text languages](language-support.md#speech-to-text) in the following regions: `centralus`, `eastasia`, `eastus`, `westeurope`.
+Currently, conversation transcription supports [all speech-to-text languages](language-support.md?tabs=stt-tts) in the following regions: `centralus`, `eastasia`, `eastus`, `westeurope`.
## Next steps
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Custom Neural Voice is a text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data. If you're looking for ready-to-use options, check out our [text-to-speech](text-to-speech.md) service.
-Based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md#custom-neural-voice) for Custom Neural Voice.
+Based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable across languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=stt-tts) for Custom Neural Voice.
> [!IMPORTANT]
> Custom Neural Voice access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
With Custom Speech, you can evaluate and improve the Microsoft speech-to-text accuracy for your applications and products.
-Out of the box, speech to text utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md) is used by default. The base model works very well in most speech recognition scenarios.
+Out of the box, speech to text utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt-tts) is used by default. The base model works very well in most speech recognition scenarios.
A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition based for the specific audio conditions of the application by providing audio data with reference transcriptions.
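For example, with the Python Speech SDK the default request uses the latest base model for the configured language, while setting `endpoint_id` routes recognition to a deployed custom model. This is a hedged sketch; the key, region, and endpoint ID are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_recognition_language = "en-US"  # latest base model by default

# Route requests to a deployed Custom Speech model instead of the base model.
speech_config.endpoint_id = "<your-custom-endpoint-id>"

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
print(recognizer.recognize_once().text)
```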
cognitive-services Direct Line Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/direct-line-speech.md
Direct Line Speech is a robust, end-to-end solution for creating a flexible, ext
Direct Line Speech offers the highest levels of customization and sophistication for voice assistants. It's designed for conversational scenarios that are open-ended, natural, or hybrids of the two with task completion or command-and-control use. This high degree of flexibility comes with greater complexity; for scenarios that are scoped to well-defined tasks using natural language input, consider [Custom Commands](custom-commands.md) for a streamlined solution experience.
+Direct Line Speech supports these locales: `ar-eg`, `ar-sa`, `ca-es`, `da-dk`, `de-de`, `en-au`, `en-ca`, `en-gb`, `en-in`, `en-nz`, `en-us`, `es-es`, `es-mx`, `fi-fi`, `fr-ca`, `fr-fr`, `gu-in`, `hi-in`, `hu-hu`, `it-it`, `ja-jp`, `ko-kr`, `mr-in`, `nb-no`, `nl-nl`, `pl-pl`, `pt-br`, `pt-pt`, `ru-ru`, `sv-se`, `ta-in`, `te-in`, `th-th`, `tr-tr`, `zh-cn`, `zh-hk`, and `zh-tw`.
+
## Getting started with Direct Line Speech

To create a voice assistant using Direct Line Speech, create a Speech resource and Azure Bot resource in the [Azure portal](https://portal.azure.com). Then [connect the bot](/azure/bot-service/bot-service-channel-connect-directlinespeech) to the Direct Line Speech channel.
Sample code for creating a voice assistant is available on GitHub. These samples
Voice assistants built using Speech service can use the full range of customization options available for [speech-to-text](speech-to-text.md), [text-to-speech](text-to-speech.md), and [custom keyword selection](./custom-keyword-basics.md).

> [!NOTE]
-> Customization options vary by language/locale (see [Supported languages](./language-support.md)).
+> Customization options vary by language/locale (see [Supported languages](./language-support.md?tabs=stt-tts)).
Direct Line Speech and its associated functionality for voice assistants are an ideal supplement to the [Virtual Assistant Solution and Enterprise Template](/azure/bot-service/bot-builder-enterprise-template-overview). Though Direct Line Speech can work with any compatible bot, these resources provide a reusable baseline for high-quality conversational experiences as well as common supporting skills and models to get started quickly.
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust text-to-speech output attributes in real time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
-You have easy access to a broad portfolio of [languages and voices](language-support.md#text-to-speech). These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you've built one.
+You have easy access to a broad portfolio of [languages and voices](language-support.md?tabs=stt-tts). These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you've built one.
To learn more, view the [Audio Content Creation tutorial video](https://youtu.be/ygApYuOOG6w).
Each step in the preceding diagram is described here:
1. Choose the Speech resource you want to work with.
1. [Create an audio tuning file](#create-an-audio-tuning-file) by using plain text or SSML scripts. Enter or upload your content into Audio Content Creation.
-1. Choose the voice and the language for your script content. Audio Content Creation includes all of the [Microsoft text-to-speech voices](language-support.md#text-to-speech). You can use prebuilt neural voices or a custom neural voice.
+1. Choose the voice and the language for your script content. Audio Content Creation includes all of the [prebuilt text-to-speech voices](language-support.md?tabs=stt-tts). You can use prebuilt neural voices or a custom neural voice.
> [!NOTE]
> Gated access is available for Custom Neural Voice, which allows you to create high-definition voices that are similar to natural-sounding speech. For more information, see [Gating process](./text-to-speech.md).
cognitive-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-create-project.md
zone_pivot_groups: speech-studio-cli-rest
# Create a Custom Speech project
-Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md). For example, you might create a project for English in the United States.
+Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt-tts). For example, you might create a project for English in the United States.
## Create a project
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
When a custom model or base model expires, it is no longer available for transcr
|Transcription route |Expired model result |Recommendation |
|---|---|---|
-|Custom endpoint|Speech recognition requests will fall back to the most recent base model for the same [locale](language-support.md). You will get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) guide. |
+|Custom endpoint|Speech recognition requests will fall back to the most recent base model for the same [locale](language-support.md?tabs=stt-tts). You will get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) guide. |
|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models will fail with a 4xx error. |In each [CreateTranscription](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) REST API request body, set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. |
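As a hedged sketch of the recommendation in the table above, a CreateTranscription (Speech-to-text REST API v3.0) request can pin the `model` property to a model that hasn't expired; the key, storage URL, and model ID below are placeholders.

```python
import requests

region = "<your-region>"
body = {
    "displayName": "My transcription",
    "locale": "en-US",
    "contentUrls": ["https://<your-storage>/audio.wav"],
    # Reference a specific base or custom model; omit `model` to always use the latest base model.
    "model": {"self": f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/models/<model-id>"},
}
response = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
    json=body,
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
)
print(response.status_code, response.json().get("self"))
```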
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Previously updated : 05/08/2022 Last updated : 08/24/2022
You can use audio + human-labeled transcript data for both [training](how-to-cus
- To improve the acoustic aspects like slight accents, speaking styles, and background noises. - To measure the accuracy of Microsoft's speech-to-text accuracy when it's processing your audio files.
-For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text). Even if a base model does support training with audio data, the service might use only part of the audio. And it will still use all the transcripts.
+For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt-tts). Even if a base model does support training with audio data, the service might use only part of the audio. And it will still use all the transcripts.
> [!IMPORTANT]
> If a base model doesn't support customization with audio data, only the transcription text will be used for training. If you switch to a base model that supports customization with audio data, the training time may increase from several hours to several days. The change in training time would be most noticeable when you switch to a base model in a [region](regions.md#speech-service) without dedicated hardware for training. If the audio data is not required, you should remove it to decrease the training time.
Expected utterances often follow a certain pattern. One common pattern is that u
* "I have a question about `product`," where `product` is a list of possible products. * "Make that `object` `color`," where `object` is a list of geometric shapes and `color` is a list of colors.
-For a list of supported base models and locales for training with structured text, see [Language support](language-support.md#speech-to-text). You must use the latest base model for these locales. For locales that don't support training with structured text, the service will take any training sentences that don't reference any classes as part of training with plain-text data.
+For a list of supported base models and locales for training with structured text, see [Language support](language-support.md?tabs=stt-tts). You must use the latest base model for these locales. For locales that don't support training with structured text, the service will take any training sentences that don't reference any classes as part of training with plain-text data.
The structured-text file should have an .md extension. The maximum file size is 200 MB, and the text encoding must be UTF-8 BOM. The syntax of the Markdown is the same as that from the Language Understanding models, in particular list entities and example utterances. For more information about the complete Markdown syntax, see the <a href="/azure/bot-service/file-format/bot-builder-lu-file-format" target="_blank"> Language Understanding Markdown</a>.
Here are key details about the supported Markdown format:
| Property | Description | Limits |
|-|-|--|
-|`@list`|A list of items that can be referenced in an example sentence.|Maximum of 10 lists. Maximum of 4,000 items per list.|
+|`@list`|A list of items that can be referenced in an example sentence.|Maximum of 20 lists. Maximum of 35,000 items per list.|
|`speech:phoneticlexicon`|A list of phonetic pronunciations according to the [Universal Phone Set](customize-pronunciation.md). Pronunciation is adjusted for each instance where the word appears in a list or training sentence. For example, if you have a word that sounds like "cat" and you want to adjust the pronunciation to "k ae t", you would add `- cat/k ae t` to the `speech:phoneticlexicon` list.|Maximum of 15,000 entries. Maximum of 2 pronunciations per word.|
-|`#ExampleSentences`|A pound symbol (`#`) delimits a section of example sentences. The section heading can only contain letters, digits, and underscores. Example sentences should reflect the range of speech that your model should expect. A training sentence can refer to items under a `@list` by using surrounding left and right curly braces (`{@list name}`). You can refer to multiple lists in the same training sentence, or none at all.|Maximum of 50,000 example sentences|
+|`#ExampleSentences`|A pound symbol (`#`) delimits a section of example sentences. The section heading can only contain letters, digits, and underscores. Example sentences should reflect the range of speech that your model should expect. A training sentence can refer to items under a `@list` by using surrounding left and right curly braces (`{@list name}`). You can refer to multiple lists in the same training sentence, or none at all.|Maximum file size of 200 MB.|
|`//`|Comments follow a double slash (`//`).|Not applicable|
Here's an example structured text file:
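A minimal sketch, reconstructed from the format table above; the list name and example sentences are illustrative.

```markdown
// Comments follow a double slash.

// A list of items that example sentences can reference.
@ list product =
- headphones
- keyboard

// Phonetic pronunciations according to the Universal Phone Set.
speech:phoneticlexicon
- cat/k ae t

// Example sentences; refer to a list with curly braces.
#ExampleSentences
- I have a question about {@product}
- How much does the {@product} cost
```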
Specialized or made up words might have unique pronunciations. These words can be recognized if they can be broken down into smaller words to pronounce them. For example, to recognize "Xbox", pronounce it as "X box". This approach won't increase overall accuracy, but can improve recognition of this and other keywords.
-You can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see [language support](language-support.md#speech-to-text).
+You can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see [language support](language-support.md?tabs=stt-tts).
> [!NOTE]
> You can either use a pronunciation data file on its own, or you can add pronunciation within a structured text data file. The Speech service doesn't support training a model where you select both of those datasets as input.
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
### Audio data for training
-Not all base models support [training with audio data](language-support.md#speech-to-text). For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text).
+Not all base models support [training with audio data](language-support.md?tabs=stt-tts). For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt-tts).
Even if a base model supports training with audio data, the service might use only part of the audio. In [regions](regions.md#speech-service) with dedicated hardware available for training audio data, the Speech service will use up to 20 hours of your audio training data. In other regions, the Speech service uses up to 8 hours of your audio data.
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
After you validate your data files, you can use them to build your Custom Neural
If you want to create a voice in the same language as your training data, select the **Neural** method. For the **Neural** method, you can select different versions of the training recipe for your model. The versions vary according to the features supported and the model training time. Normally, new versions include bug fixes and support for new features. The latest version is selected by default.
- You can also select **Neural - cross lingual** and **Target language** to create a secondary language for your voice model. Only one target language can be selected for a voice model. You don't need to prepare additional data in the target language for training, but your test script needs to be in the target language. For the languages supported by cross lingual feature, see [supported languages](language-support.md#custom-neural-voice).
+ You can also select **Neural - cross lingual** and **Target language** to create a secondary language for your voice model. Only one target language can be selected for a voice model. You don't need to prepare additional data in the target language for training, but your test script needs to be in the target language. For the languages supported by cross lingual feature, see [supported languages](language-support.md?tabs=stt-tts).
The same unit price applies to both **Neural** and **Neural - cross lingual**. Check [the pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for training.
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
# Create a Project
-[Custom Neural Voice](https://aka.ms/customvoice) is a set of online tools that you use to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md#custom-neural-voice) and [region](regions.md#speech-service).
+[Custom Neural Voice](https://aka.ms/customvoice) is a set of online tools that you use to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md?tabs=stt-tts) and [region](regions.md#speech-service).
> [!IMPORTANT]
> Custom Neural Voice Pro can be used to create higher-quality models that are indistinguishable from human recordings. For access you must commit to using it in alignment with our responsible AI principles. Learn more about our [policy on limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply here](https://aka.ms/customneural).
To create a custom voice project:
## Cross lingual feature
-With cross lingual feature (public preview), you can create a different language for your voice model. If the language of your training data is supported by cross lingual feature, you can create a voice that speaks a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US` or any of the languages supported by cross lingual feature. For details, see [supported languages](language-support.md#custom-neural-voice). You don't need to prepare additional data in the target language for training, but your test script needs to be in the target language.
+With the cross-lingual feature (public preview), you can add a different language to your voice model. If the language of your training data is supported by the cross-lingual feature, you can create a voice that speaks a different language from your training data. For example, with `zh-CN` training data, you can create a voice that speaks `en-US` or any of the other languages the feature supports. For details, see [supported languages](language-support.md?tabs=stt-tts). You don't need to prepare additional data in the target language for training, but your test script needs to be in the target language.
For how to create a different language from your training data, select the training method **Neural-cross lingual** during training. See [how to train your custom neural voice model](how-to-custom-voice-create-voice.md#train-your-custom-neural-voice-model).
cognitive-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-prebuilt-neural-voice.md
# Migrate from prebuilt standard voice to prebuilt neural voice

> [!IMPORTANT]
-> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md#prebuilt-neural-voices). After August 31, the standard voices won't be supported with any Speech resource.
+> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=stt-tts). After August 31, the standard voices won't be supported with any Speech resource.
The prebuilt neural voice provides more natural sounding speech output, and thus, a better end-user experience.
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
You can get pronunciation assessment scores for:
- Phonemes in SAPI or IPA format

> [!NOTE]
-> For information about availability of pronunciation assessment, see [supported languages](language-support.md#pronunciation-assessment) and [available regions](regions.md#speech-service).
+> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=stt-tts) and [available regions](regions.md#speech-service).
>
> The syllable groups, IPA phonemes, and spoken phoneme features of pronunciation assessment are currently only available for the en-US locale.
To request syllable-level results along with phonemes, set the granularity [conf
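As a rough sketch in the Python Speech SDK, the assessment configuration is applied to a recognizer, and the granularity parameter controls the level of detail returned; the key, region, and reference text below are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, language="en-US")

# Phoneme granularity returns the finest-grained assessment detail.
assessment_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Good morning",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
assessment_config.apply_to(recognizer)

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)
print("Accuracy:", assessment.accuracy_score)
print("Pronunciation:", assessment.pronunciation_score)
```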
## Phoneme alphabet format
-The phoneme name is provided together with the score, to help identity which phonemes were pronounced accurately or inaccurately. For the [supported languages](language-support.md#pronunciation-assessment), you can get the phoneme name in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) format, and for the `en-US` locale, you can also get the phoneme name in [IPA](https://en.wikipedia.org/wiki/IPA) format.
+The phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For the [supported languages](language-support.md?tabs=stt-tts), you can get the phoneme name in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) format, and for the `en-US` locale, you can also get the phoneme name in [IPA](https://en.wikipedia.org/wiki/IPA) format.
The following table compares example SAPI phonemes with the corresponding IPA phonemes.
cognitive-services How To Recognize Intents From Speech Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-intents-from-speech-csharp.md
The application doesn't parse the JSON result. It only displays the JSON text in
## Specify recognition language
-By default, LUIS recognizes intents in US English (`en-us`). By assigning a locale code to the `SpeechRecognitionLanguage` property of the speech configuration, you can recognize intents in other languages. For example, add `config.SpeechRecognitionLanguage = "de-de";` in our application before creating the recognizer to recognize intents in German. For more information, see [LUIS language support](../LUIS/luis-language-support.md#languages-supported).
+By default, LUIS recognizes intents in US English (`en-us`). By assigning a locale code to the `SpeechRecognitionLanguage` property of the speech configuration, you can recognize intents in other languages. For example, add `config.SpeechRecognitionLanguage = "de-de";` in our application before creating the recognizer to recognize intents in German. For more information, see [LUIS language support](../LUIS/luis-language-support.md).
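For comparison, here's a rough Python Speech SDK equivalent that sets the recognition language before creating the intent recognizer; the key, region, and LUIS app ID are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

intent_config = speechsdk.SpeechConfig(subscription="<your-luis-key>", region="<your-luis-region>")
intent_config.speech_recognition_language = "de-de"  # recognize intents in German

recognizer = speechsdk.intent.IntentRecognizer(speech_config=intent_config)
model = speechsdk.intent.LanguageUnderstandingModel(app_id="<your-luis-app-id>")
recognizer.add_all_intents(model)

result = recognizer.recognize_once()
print(result.intent_id, result.text)
```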
## Continuous recognition from a file
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Get facial position with viseme

> [!NOTE]
-> Viseme ID supports neural voices in [all viseme-supported locales](language-support.md#viseme). Scalable Vector Graphics (SVG) only supports neural voices in `en-US` locale, and blend shapes supports neural voices in `en-US` and `zh-CN` locales.
+> Viseme ID supports neural voices in [all viseme-supported locales](language-support.md?tabs=stt-tts). Scalable Vector Graphics (SVG) only supports neural voices in `en-US` locale, and blend shapes supports neural voices in `en-US` and `zh-CN` locales.
A *viseme* is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. Each viseme depicts the key facial poses for a specific set of phonemes.
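As a minimal sketch with the Python Speech SDK, viseme events arrive during synthesis with a viseme ID and an audio offset; the key, region, and voice name are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

def on_viseme(evt):
    # viseme_id maps to a facial pose; audio_offset is in 100-nanosecond ticks.
    print(f"Viseme {evt.viseme_id} at offset {evt.audio_offset}")

synthesizer.viseme_received.connect(on_viseme)
synthesizer.speak_text_async("Hello world").get()
```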
cognitive-services Keyword Recognition Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/keyword-recognition-guidelines.md
This article outlines how to choose your keyword optimize its accuracy character
Creating an effective keyword is vital to ensuring your product will consistently and accurately respond. Consider the following guidelines when you choose a keyword.

> [!NOTE]
-> The examples below are in English but the guidelines apply to all languages supported by Custom Keyword. For a list of all supported languages, see [Language support](language-support.md#custom-keyword-and-keyword-verification).
+> The examples below are in English but the guidelines apply to all languages supported by Custom Keyword. For a list of all supported languages, see [Language support](language-support.md?tabs=custom-keyword).
- It should take no longer than two seconds to say.
- Words of 4 to 7 syllables work best. For example, "Hey, Computer" is a good keyword. Just "Hey" is a poor one.
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Language identification (preview)
-Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md#language-identification).
+Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).
Language identification (LID) use cases include:
SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
::: zone-end
-For more information, see [supported languages](language-support.md#language-identification).
+For more information, see [supported languages](language-support.md?tabs=language-identification).
### At-start and Continuous language identification
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Previously updated : 06/13/2022 Last updated : 08/25/2022
# Language and voice support for the Speech service
-Language support varies by Speech service functionality. The following tables summarize language support for [speech-to-text](#speech-to-text), [text-to-speech](#text-to-speech), [speech translation](#speech-translation), and [speaker recognition](#speaker-recognition) service offerings.
+The following tables summarize language support for [speech-to-text](speech-to-text.md), [text-to-speech](text-to-speech.md), [pronunciation assessment](how-to-pronunciation-assessment.md), [speech translation](speech-translation.md), [speaker recognition](speaker-recognition-overview.md), and additional service features.
-## Speech-to-text
+## Supported languages
-The Speech service supports the languages (locales) in the following tables.
+Language support varies by Speech service functionality.
-To improve accuracy, customization is available for some languages and base model versions by uploading audio + human-labeled transcripts, plain text, structured text, and pronunciation. By default, plain text customization is supported for all available base models. To learn more about customization, see [Custom Speech](./custom-speech-overview.md).
+**Choose a Speech feature**
-### [Speech-to-text](#tab/speechtotext)
+# [Speech-to-text and Text-to-speech](#tab/stt-tts)
-| Language | Locale (BCP-47) |
-|--|--|
-| Afrikaans (South Africa) | `af-ZA` |
-| Albanian (Albania) | `sq-AL` |
-| Amharic (Ethiopia) | `am-ET` |
-| Arabic (Algeria) | `ar-DZ` |
-| Arabic (Bahrain), modern standard | `ar-BH` |
-| Arabic (Egypt) | `ar-EG` |
-| Arabic (Iraq) | `ar-IQ` |
-| Arabic (Israel) | `ar-IL` |
-| Arabic (Jordan) | `ar-JO` |
-| Arabic (Kuwait) | `ar-KW` |
-| Arabic (Lebanon) | `ar-LB` |
-| Arabic (Libya) | `ar-LY` |
-| Arabic (Morocco) | `ar-MA` |
-| Arabic (Oman) | `ar-OM` |
-| Arabic (Palestinian Authority) | `ar-PS` |
-| Arabic (Qatar) | `ar-QA` |
-| Arabic (Saudi Arabia) | `ar-SA` |
-| Arabic (Syria) | `ar-SY` |
-| Arabic (Tunisia) | `ar-TN` |
-| Arabic (United Arab Emirates) | `ar-AE` |
-| Arabic (Yemen) | `ar-YE` |
-| Armenian (Armenia) | `hy-AM` |
-| Azerbaijani (Azerbaijan) | `az-AZ` |
-| Basque (Spain) | `eu-ES` |
-| Bengali (India) | `bn-IN` |
-| Bosnian (Bosnia and Herzegovina) | `bs-BA` |
-| Bulgarian (Bulgaria) | `bg-BG` |
-| Burmese (Myanmar) | `my-MM` |
-| Catalan (Spain) | `ca-ES` |
-| Chinese (Cantonese, Simplified) | `yue-CN` |
-| Chinese (Cantonese, Traditional) | `zh-HK` |
-| Chinese (Mandarin, Simplified) | `zh-CN` |
-| Chinese (Southwestern Mandarin, Simplified) | `zh-CN-sichuan` |
-| Chinese (Taiwanese Mandarin, Traditional) | `zh-TW` |
-| Chinese (Wu, Simplified) | `wuu-CN` |
-| Croatian (Croatia) | `hr-HR` |
-| Czech (Czech) | `cs-CZ` |
-| Danish (Denmark) | `da-DK` |
-| Dutch (Belgium) | `nl-BE` |
-| Dutch (Netherlands) | `nl-NL` |
-| English (Australia) | `en-AU` |
-| English (Canada) | `en-CA` |
-| English (Ghana) | `en-GH` |
-| English (Hong Kong) | `en-HK` |
-| English (India) | `en-IN` |
-| English (Ireland) | `en-IE` |
-| English (Kenya) | `en-KE` |
-| English (New Zealand) | `en-NZ` |
-| English (Nigeria) | `en-NG` |
-| English (Philippines) | `en-PH` |
-| English (Singapore) | `en-SG` |
-| English (South Africa) | `en-ZA` |
-| English (Tanzania) | `en-TZ` |
-| English (United Kingdom) | `en-GB` |
-| English (United States) | `en-US` |
-| Estonian (Estonia) | `et-EE` |
-| Filipino (Philippines) | `fil-PH` |
-| Finnish (Finland) | `fi-FI` |
-| French (Belgium) | `fr-BE` |
-| French (Canada) | `fr-CA` |
-| French (France) | `fr-FR` |
-| French (Switzerland) | `fr-CH` |
-| Galician (Spain) | `gl-ES` |
-| Georgian (Georgia) | `ka-GE` |
-| German (Austria) | `de-AT` |
-| German (Germany) | `de-DE` |
-| German (Switzerland) | `de-CH` |
-| Greek (Greece) | `el-GR` |
-| Gujarati (Indian) | `gu-IN` |
-| Hebrew (Israel) | `he-IL` |
-| Hindi (India) | `hi-IN` |
-| Hungarian (Hungary) | `hu-HU` |
-| Icelandic (Iceland) | `is-IS` |
-| Indonesian (Indonesia) | `id-ID` |
-| Irish (Ireland) | `ga-IE` |
-| Italian (Italy) | `it-IT` |
-| Italian (Switzerland) | `it-CH` |
-| Japanese (Japan) | `ja-JP` |
-| Javanese (Indonesia) | `jv-ID` |
-| Kannada (India) | `kn-IN` |
-| Kazakh (Kazakhstan) | `kk-KZ` |
-| Khmer (Cambodia) | `km-KH` |
-| Korean (Korea) | `ko-KR` |
-| Lao (Laos) | `lo-LA` |
-| Latvian (Latvia) | `lv-LV` |
-| Lithuanian (Lithuania) | `lt-LT` |
-| Macedonian (North Macedonia) | `mk-MK` |
-| Malay (Malaysia) | `ms-MY` |
-| Maltese (Malta) | `mt-MT` |
-| Marathi (India) | `mr-IN` |
-| Mongolian (Mongolia) | `mn-MN` |
-| Nepali (Nepal) | `ne-NP` |
-| Norwegian (Bokmål, Norway) | `nb-NO` |
-| Pashto (Afghanistan) | `ps-AF` |
-| Persian (Iran) | `fa-IR` |
-| Polish (Poland) | `pl-PL` |
-| Portuguese (Brazil) | `pt-BR` |
-| Portuguese (Portugal) | `pt-PT` |
-| Romanian (Romania) | `ro-RO` |
-| Russian (Russia) | `ru-RU` |
-| Serbian (Serbia) | `sr-RS` |
-| Sinhala (Sri Lanka) | `si-LK` |
-| Slovak (Slovakia) | `sk-SK` |
-| Slovenian (Slovenia) | `sl-SI` |
-| Somali (Somalia) | `so-SO` |
-| Spanish (Argentina) | `es-AR` |
-| Spanish (Bolivia) | `es-BO` |
-| Spanish (Chile) | `es-CL` |
-| Spanish (Colombia) | `es-CO` |
-| Spanish (Costa Rica) | `es-CR` |
-| Spanish (Cuba) | `es-CU` |
-| Spanish (Dominican Republic) | `es-DO` |
-| Spanish (Ecuador) | `es-EC` |
-| Spanish (El Salvador) | `es-SV` |
-| Spanish (Equatorial Guinea) | `es-GQ` |
-| Spanish (Guatemala) | `es-GT` |
-| Spanish (Honduras) | `es-HN` |
-| Spanish (Mexico) | `es-MX` |
-| Spanish (Nicaragua) | `es-NI` |
-| Spanish (Panama) | `es-PA` |
-| Spanish (Paraguay) | `es-PY` |
-| Spanish (Peru) | `es-PE` |
-| Spanish (Puerto Rico) | `es-PR` |
-| Spanish (Spain) | `es-ES` |
-| Spanish (Uruguay) | `es-UY` |
-| Spanish (USA) | `es-US` |
-| Spanish (Venezuela) | `es-VE` |
-| Swahili (Kenya) | `sw-KE` |
-| Swahili (Tanzania) | `sw-TZ` |
-| Swedish (Sweden) | `sv-SE` |
-| Tamil (India) | `ta-IN` |
-| Telugu (India) | `te-IN` |
-| Thai (Thailand) | `th-TH` |
-| Turkish (Turkey) | `tr-TR` |
-| Ukrainian (Ukraine) | `uk-UA` |
-| Uzbek (Uzbekistan) | `uz-UZ` |
-| Vietnamese (Vietnam) | `vi-VN` |
-| Welsh (United Kingdom) | `cy-GB` |
-| Zulu (South Africa) | `zu-ZA` |
+The table in this section summarizes the locales and voices supported for Speech-to-text and Text-to-speech. See the table footnotes for more details.
-### [Plain text](#tab/plaintext)
+Additional remarks for Speech-to-text locales are included in the [Custom Speech](#custom-speech) section below. Additional remarks for Text-to-speech locales are included in the [Prebuilt neural voices](#prebuilt-neural-voices), [Voice styles and roles](#voice-styles-and-roles), and [Custom Neural Voice](#custom-neural-voice) sections below.
-| Language | Locale (BCP-47) |
-|--|--|
-| Afrikaans (South Africa) | `af-ZA` |
-| Amharic (Ethiopia) | `am-ET` |
-| Arabic (Algeria) | `ar-DZ` |
-| Arabic (Bahrain), modern standard | `ar-BH` |
-| Arabic (Egypt) | `ar-EG` |
-| Arabic (Iraq) | `ar-IQ` |
-| Arabic (Israel) | `ar-IL` |
-| Arabic (Jordan) | `ar-JO` |
-| Arabic (Kuwait) | `ar-KW` |
-| Arabic (Lebanon) | `ar-LB` |
-| Arabic (Libya) | `ar-LY` |
-| Arabic (Morocco) | `ar-MA` |
-| Arabic (Oman) | `ar-OM` |
-| Arabic (Palestinian Authority) | `ar-PS` |
-| Arabic (Qatar) | `ar-QA` |
-| Arabic (Saudi Arabia) | `ar-SA` |
-| Arabic (Syria) | `ar-SY` |
-| Arabic (Tunisia) | `ar-TN` |
-| Arabic (United Arab Emirates) | `ar-AE` |
-| Arabic (Yemen) | `ar-YE` |
-| Bulgarian (Bulgaria) | `bg-BG` |
-| Burmese (Myanmar) | `my-MM` |
-| Catalan (Spain) | `ca-ES` |
-| Chinese (Cantonese, Traditional) | `zh-HK` |
-| Chinese (Mandarin, Simplified) | `zh-CN` |
-| Chinese (Taiwanese Mandarin) | `zh-TW` |
-| Croatian (Croatia) | `hr-HR` |
-| Czech (Czech) | `cs-CZ` |
-| Danish (Denmark) | `da-DK` |
-| Dutch (Belgium) | `nl-BE` |
-| Dutch (Netherlands) | `nl-NL` |
-| English (Australia) | `en-AU` |
-| English (Canada) | `en-CA` |
-| English (Ghana) | `en-GH` |
-| English (Hong Kong) | `en-HK` |
-| English (India) | `en-IN` |
-| English (Ireland) | `en-IE` |
-| English (Kenya) | `en-KE` |
-| English (New Zealand) | `en-NZ` |
-| English (Nigeria) | `en-NG` |
-| English (Philippines) | `en-PH` |
-| English (Singapore) | `en-SG` |
-| English (South Africa) | `en-ZA` |
-| English (Tanzania) | `en-TZ` |
-| English (United Kingdom) | `en-GB` |
-| English (United States) | `en-US` |
-| Estonian (Estonia) | `et-EE` |
-| Filipino (Philippines) | `fil-PH` |
-| Finnish (Finland) | `fi-FI` |
-| French (Belgium) | `fr-BE` |
-| French (Canada) | `fr-CA` |
-| French (France) | `fr-FR` |
-| French (Switzerland) | `fr-CH` |
-| German (Austria) | `de-AT` |
-| German (Germany) | `de-DE` |
-| German (Switzerland) | `de-CH` |
-| Greek (Greece) | `el-GR` |
-| Gujarati (Indian) | `gu-IN` |
-| Hebrew (Israel) | `he-IL` |
-| Hindi (India) | `hi-IN` |
-| Hungarian (Hungary) | `hu-HU` |
-| Icelandic (Iceland) | `is-IS` |
-| Indonesian (Indonesia) | `id-ID` |
-| Irish (Ireland) | `ga-IE` |
-| Italian (Italy) | `it-IT` |
-| Japanese (Japan) | `ja-JP` |
-| Javanese (Indonesia) | `jv-ID` |
-| Kannada (India) | `kn-IN` |
-| Khmer (Cambodia) | `km-KH` |
-| Korean (Korea) | `ko-KR` |
-| Lao (Laos) | `lo-LA` |
-| Latvian (Latvia) | `lv-LV` |
-| Lithuanian (Lithuania) | `lt-LT` |
-| Macedonian (North Macedonia) | `mk-MK` |
-| Malay (Malaysia) | `ms-MY` |
-| Maltese (Malta) | `mt-MT` |
-| Marathi (India) | `mr-IN` |
-| Norwegian (Bokmål, Norway) | `nb-NO` |
-| Persian (Iran) | `fa-IR` |
-| Polish (Poland) | `pl-PL` |
-| Portuguese (Brazil) | `pt-BR` |
-| Portuguese (Portugal) | `pt-PT` |
-| Romanian (Romania) | `ro-RO` |
-| Russian (Russia) | `ru-RU` |
-| Serbian (Serbia) | `sr-RS` |
-| Sinhala (Sri Lanka) | `si-LK` |
-| Slovak (Slovakia) | `sk-SK` |
-| Slovenian (Slovenia) | `sl-SI` |
-| Spanish (Argentina) | `es-AR` |
-| Spanish (Bolivia) | `es-BO` |
-| Spanish (Chile) | `es-CL` |
-| Spanish (Colombia) | `es-CO` |
-| Spanish (Costa Rica) | `es-CR` |
-| Spanish (Cuba) | `es-CU` |
-| Spanish (Dominican Republic) | `es-DO` |
-| Spanish (Ecuador) | `es-EC` |
-| Spanish (El Salvador) | `es-SV` |
-| Spanish (Equatorial Guinea) | `es-GQ` |
-| Spanish (Guatemala) | `es-GT` |
-| Spanish (Honduras) | `es-HN` |
-| Spanish (Mexico) | `es-MX` |
-| Spanish (Nicaragua) | `es-NI` |
-| Spanish (Panama) | `es-PA` |
-| Spanish (Paraguay) | `es-PY` |
-| Spanish (Peru) | `es-PE` |
-| Spanish (Puerto Rico) | `es-PR` |
-| Spanish (Spain) | `es-ES` |
-| Spanish (Uruguay) | `es-UY` |
-| Spanish (USA) | `es-US` |
-| Spanish (Venezuela) | `es-VE` |
-| Swahili (Kenya) | `sw-KE` |
-| Swahili (Tanzania) | `sw-TZ` |
-| Swedish (Sweden) | `sv-SE` |
-| Tamil (India) | `ta-IN` |
-| Telugu (India) | `te-IN` |
-| Thai (Thailand) | `th-TH` |
-| Turkish (Turkey) | `tr-TR` |
-| Ukrainian (Ukraine) | `uk-UA` |
-| Uzbek (Uzbekistan) | `uz-UZ` |
-| Vietnamese (Vietnam) | `vi-VN` |
-| Zulu (South Africa) | `zu-ZA` |
+### Custom Speech
-### [Structured text](#tab/structuredtext)
-
-| Language | Locale (BCP-47) |
-|--|--|
-| English (India) | `en-IN` |
-| English (United Kingdom) | `en-GB` |
-| English (United States) | `en-US` |
-| French (Canada) | `fr-CA` |
-| French (France) | `fr-FR` |
-| German (Switzerland) | `de-CH` |
-| Spanish (Mexico) | `es-MX` |
-| Spanish (Spain) | `es-ES` |
-
-### [Pronunciation data](#tab/pronunciation)
-
-| Language | Locale (BCP-47) |
-|--|--|
-| Catalan (Spain) | `ca-ES` |
-| Croatian (Croatia) | `hr-HR` |
-| Czech (Czech) | `cs-CZ` |
-| Danish (Denmark) | `da-DK` |
-| Dutch (Netherlands) | `nl-NL` |
-| English (Australia) | `en-AU` |
-| English (Canada) | `en-CA` |
-| English (Ghana) | `en-GH` |
-| English (Hong Kong) | `en-HK` |
-| English (India) | `en-IN` |
-| English (Ireland) | `en-IE` |
-| English (Kenya) | `en-KE` |
-| English (New Zealand) | `en-NZ` |
-| English (Nigeria) | `en-NG` |
-| English (Philippines) | `en-PH` |
-| English (Singapore) | `en-SG` |
-| English (South Africa) | `en-ZA` |
-| English (Tanzania) | `en-TZ` |
-| English (United Kingdom) | `en-GB` |
-| English (United States) | `en-US` |
-| Estonian (Estonia) | `et-EE` |
-| Filipino (Philippines) | `fil-PH` |
-| Finnish (Finland) | `fi-FI` |
-| French (Canada) | `fr-CA` |
-| French (France) | `fr-FR` |
-| French (Switzerland) | `fr-CH` |
-| German (Austria) | `de-AT` |
-| German (Germany) | `de-DE` |
-| German (Switzerland) | `de-CH` |
-| Hungarian (Hungary) | `hu-HU` |
-| Indonesian (Indonesia) | `id-ID` |
-| Irish (Ireland) | `ga-IE` |
-| Italian (Italy) | `it-IT` |
-| Latvian (Latvia) | `lv-LV` |
-| Lithuanian (Lithuania) | `lt-LT` |
-| Polish (Poland) | `pl-PL` |
-| Portuguese (Brazil) | `pt-BR` |
-| Portuguese (Portugal) | `pt-PT` |
-| Romanian (Romania) | `ro-RO` |
-| Slovak (Slovakia) | `sk-SK` |
-| Slovenian (Slovenia) | `sl-SI` |
-| Spanish (Argentina) | `es-AR` |
-| Spanish (Bolivia) | `es-BO` |
-| Spanish (Chile) | `es-CL` |
-| Spanish (Colombia) | `es-CO` |
-| Spanish (Costa Rica) | `es-CR` |
-| Spanish (Cuba) | `es-CU` |
-| Spanish (Dominican Republic) | `es-DO` |
-| Spanish (Ecuador) | `es-EC` |
-| Spanish (El Salvador) | `es-SV` |
-| Spanish (Guatemala) | `es-GT` |
-| Spanish (Honduras) | `es-HN` |
-| Spanish (Mexico) | `es-MX` |
-| Spanish (Nicaragua) | `es-NI` |
-| Spanish (Panama) | `es-PA` |
-| Spanish (Paraguay) | `es-PY` |
-| Spanish (Peru) | `es-PE` |
-| Spanish (Puerto Rico) | `es-PR` |
-| Spanish (Spain) | `es-ES` |
-| Spanish (Uruguay) | `es-UY` |
-| Spanish (USA) | `es-US` |
-| Spanish (Venezuela) | `es-VE` |
-| Swedish (Sweden) | `sv-SE` |
-
-### [Audio data](#tab/audiodata)
-
-| Language | Locale (BCP-47) |
-|--|--|
-| English (United Kingdom) | `en-GB` |
-| English (United States) | `en-US` |
-| French (Canada) | `fr-CA` |
-| French (France) | `fr-FR` |
-| German (Switzerland) | `de-CH` |
-| Italian (Italy) | `it-IT` |
-| Korean (Korea) | `ko-KR` |
-| Portuguese (Brazil) | `pt-BR` |
-| Spanish (Spain) | `es-ES` |
-
-### [Phrase list](#tab/phraselist)
-
-You can use the locales in this table with [phrase list](improve-accuracy-phrase-list.md).
-
-| Language | Locale |
-|||
-| Chinese (Mandarin, Simplified) | `zh-CN` |
-| English (Australia) | `en-AU` |
-| English (Canada) | `en-CA` |
-| English (India) | `en-IN` |
-| English (United Kingdom)) | `en-GB` |
-| English (United States) | `en-US` |
-| French (Canada) | `fr-CA` |
-| French (France) | `fr-FR` |
-| German (Germany) | `de-DE` |
-| Italian (Italy) | `it-IT` |
-| Japanese (Japan) | `ja-JP` |
-| Korean (Korea) | `ko-KR` |
-| Portuguese (Brazil) | `pt-BR` |
-| Spanish (Mexico) | `es-MX` |
-| Spanish (Spain) | `es-ES` |
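
As a sketch of how a phrase list is attached at runtime, you bind a `PhraseListGrammar` to a recognizer configured with one of the locales above. The key, region, and phrases below are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; use one of the phrase list locales above.
speech_config = speechsdk.SpeechConfig(subscription="YourKey", region="YourRegion")
speech_config.speech_recognition_language = "en-US"
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Bias recognition toward domain-specific terms that are often misrecognized.
phrases = speechsdk.PhraseListGrammar.from_recognizer(recognizer)
phrases.addPhrase("Contoso")
phrases.addPhrase("Jessie")

result = recognizer.recognize_once_async().get()
print(result.text)
```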
---
-## Text-to-speech
-
-Both the Microsoft Speech SDK and REST APIs support these neural voices, each of which supports a specific language and dialect, identified by locale. You can try the demo and hear the voices on [this website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
-
-You can also get a full list of languages and voices supported for each specific region or endpoint through the [voices list API](rest-text-to-speech.md#get-a-list-of-voices). To learn how you can configure and adjust neural voices, such as speaking styles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
-
-> [!IMPORTANT]
-> Pricing varies for Prebuilt Neural Voice (referred to as *Neural* on the pricing page) and Custom Neural Voice (referred to as *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
-
+To improve speech-to-text recognition accuracy, customization is available for some languages and base models. Depending on the locale, you can upload audio + human-labeled transcripts, plain text, structured text, and pronunciation data. By default, plain text customization is supported for all available base models. To learn more about customization, see [Custom Speech](./custom-speech-overview.md).
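
After you train and deploy a Custom Speech model, a recognizer can target it by endpoint ID. A minimal Python sketch, where the key, region, and endpoint ID are hypothetical placeholders and the locale must match the base model you customized:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourKey", region="YourRegion")
# The locale must match the customized base model, for example en-US.
speech_config.speech_recognition_language = "en-US"
# Endpoint ID of your deployed Custom Speech model (placeholder value).
speech_config.endpoint_id = "YourEndpointId"

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once_async().get()
print(result.text)
```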
### Prebuilt neural voices
-Prebuilt neural voices are created from samples that use a 24-kHz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
+Each prebuilt neural voice supports a specific language and dialect, identified by locale. You can try the demo and hear the voices on [this website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
> [!IMPORTANT]
-> The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021.
->
-> If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will not succeed.
->
-> The `en-US-JessaNeural` voice has changed to `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria." You can continue to use the full service name mapping like "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)" in your speech synthesis requests.
-
-The following table lists the prebuilt neural voices supported in each language.
-
-| Language | Locale | Gender | Voice name | Style support |
-||||||
-| Afrikaans (South Africa) | `af-ZA` | Female | `af-ZA-AdriNeural` | General |
-| Afrikaans (South Africa) | `af-ZA` | Male | `af-ZA-WillemNeural` | General |
-| Albanian (Albania) | `sq-AL` | Female | `sq-AL-AnilaNeural` <sup>New</sup> | General |
-| Albanian (Albania) | `sq-AL` | Male | `sq-AL-IlirNeural` <sup>New</sup> | General |
-| Amharic (Ethiopia) | `am-ET` | Female | `am-ET-MekdesNeural` | General |
-| Amharic (Ethiopia) | `am-ET` | Male | `am-ET-AmehaNeural` | General |
-| Arabic (Algeria) | `ar-DZ` | Female | `ar-DZ-AminaNeural` | General |
-| Arabic (Algeria) | `ar-DZ` | Male | `ar-DZ-IsmaelNeural` | General |
-| Arabic (Bahrain) | `ar-BH` | Female | `ar-BH-LailaNeural` | General |
-| Arabic (Bahrain) | `ar-BH` | Male | `ar-BH-AliNeural` | General |
-| Arabic (Egypt) | `ar-EG` | Female | `ar-EG-SalmaNeural` | General |
-| Arabic (Egypt) | `ar-EG` | Male | `ar-EG-ShakirNeural` | General |
-| Arabic (Iraq) | `ar-IQ` | Female | `ar-IQ-RanaNeural` | General |
-| Arabic (Iraq) | `ar-IQ` | Male | `ar-IQ-BasselNeural` | General |
-| Arabic (Jordan) | `ar-JO` | Female | `ar-JO-SanaNeural` | General |
-| Arabic (Jordan) | `ar-JO` | Male | `ar-JO-TaimNeural` | General |
-| Arabic (Kuwait) | `ar-KW` | Female | `ar-KW-NouraNeural` | General |
-| Arabic (Kuwait) | `ar-KW` | Male | `ar-KW-FahedNeural` | General |
-| Arabic (Lebanon) | `ar-LB` | Female | `ar-LB-LaylaNeural` <sup>New</sup> | General |
-| Arabic (Lebanon) | `ar-LB` | Male | `ar-LB-RamiNeural` <sup>New</sup> | General |
-| Arabic (Libya) | `ar-LY` | Female | `ar-LY-ImanNeural` | General |
-| Arabic (Libya) | `ar-LY` | Male | `ar-LY-OmarNeural` | General |
-| Arabic (Morocco) | `ar-MA` | Female | `ar-MA-MounaNeural` | General |
-| Arabic (Morocco) | `ar-MA` | Male | `ar-MA-JamalNeural` | General |
-| Arabic (Oman) | `ar-OM` | Female | `ar-OM-AyshaNeural` <sup>New</sup> | General |
-| Arabic (Oman) | `ar-OM` | Male | `ar-OM-AbdullahNeural` <sup>New</sup> | General |
-| Arabic (Qatar) | `ar-QA` | Female | `ar-QA-AmalNeural` | General |
-| Arabic (Qatar) | `ar-QA` | Male | `ar-QA-MoazNeural` | General |
-| Arabic (Saudi Arabia) | `ar-SA` | Female | `ar-SA-ZariyahNeural` | General |
-| Arabic (Saudi Arabia) | `ar-SA` | Male | `ar-SA-HamedNeural` | General |
-| Arabic (Syria) | `ar-SY` | Female | `ar-SY-AmanyNeural` | General |
-| Arabic (Syria) | `ar-SY` | Male | `ar-SY-LaithNeural` | General |
-| Arabic (Tunisia) | `ar-TN` | Female | `ar-TN-ReemNeural` | General |
-| Arabic (Tunisia) | `ar-TN` | Male | `ar-TN-HediNeural` | General |
-| Arabic (United Arab Emirates) | `ar-AE` | Female | `ar-AE-FatimaNeural` | General |
-| Arabic (United Arab Emirates) | `ar-AE` | Male | `ar-AE-HamdanNeural` | General |
-| Arabic (Yemen) | `ar-YE` | Female | `ar-YE-MaryamNeural` | General |
-| Arabic (Yemen) | `ar-YE` | Male | `ar-YE-SalehNeural` | General |
-| Azerbaijani (Azerbaijan) | `az-AZ` | Male | `az-AZ-BabekNeural` <sup>New</sup> | General |
-| Azerbaijani (Azerbaijan) | `az-AZ` | Female | `az-AZ-BanuNeural` <sup>New</sup> | General |
-| Bangla (Bangladesh) | `bn-BD` | Female | `bn-BD-NabanitaNeural` | General |
-| Bangla (Bangladesh) | `bn-BD` | Male | `bn-BD-PradeepNeural` | General |
-| Bengali (India) | `bn-IN` | Female | `bn-IN-TanishaaNeural` | General |
-| Bengali (India) | `bn-IN` | Male | `bn-IN-BashkarNeural` | General |
-| Bosnian (Bosnia and Herzegovina) | `bs-BA` | Female | `bs-BA-VesnaNeural` <sup>New</sup> | General |
-| Bosnian (Bosnia and Herzegovina) | `bs-BA` | Male | `bs-BA-GoranNeural` <sup>New</sup> | General |
-| Bulgarian (Bulgaria) | `bg-BG` | Female | `bg-BG-KalinaNeural` | General |
-| Bulgarian (Bulgaria) | `bg-BG` | Male | `bg-BG-BorislavNeural` | General |
-| Burmese (Myanmar) | `my-MM` | Female | `my-MM-NilarNeural` | General |
-| Burmese (Myanmar) | `my-MM` | Male | `my-MM-ThihaNeural` | General |
-| Catalan (Spain) | `ca-ES` | Female | `ca-ES-AlbaNeural` | General |
-| Catalan (Spain) | `ca-ES` | Female | `ca-ES-JoanaNeural` | General |
-| Catalan (Spain) | `ca-ES` | Male | `ca-ES-EnricNeural` | General |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Female | `zh-HK-HiuGaaiNeural` | General |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Female | `zh-HK-HiuMaanNeural` | General |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Male | `zh-HK-WanLungNeural` | General |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaochenNeural` | Optimized for spontaneous conversation |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaohanNeural` | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaomoNeural` | General, multiple role-play and styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoqiuNeural` | Optimized for narrating |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoruiNeural` | Senior voice, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoshuangNeural` | Child voice, optimized for child story and chat; multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles)|
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoxiaoNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoxuanNeural` | General, multiple role-play and styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyanNeural` | Optimized for customer service |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyouNeural` | Child voice, optimized for story narrating |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunxiNeural` | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyangNeural` | Optimized for news reading,<br /> multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyeNeural` | Optimized for story narrating, multiple role-play and styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HsiaoChenNeural` | General |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HsiaoYuNeural` | General |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Male | `zh-TW-YunJheNeural` | General |
-| Croatian (Croatia) | `hr-HR` | Female | `hr-HR-GabrijelaNeural` | General |
-| Croatian (Croatia) | `hr-HR` | Male | `hr-HR-SreckoNeural` | General |
-| Czech (Czech) | `cs-CZ` | Female | `cs-CZ-VlastaNeural` | General |
-| Czech (Czech) | `cs-CZ` | Male | `cs-CZ-AntoninNeural` | General |
-| Danish (Denmark) | `da-DK` | Female | `da-DK-ChristelNeural` | General |
-| Danish (Denmark) | `da-DK` | Male | `da-DK-JeppeNeural` | General |
-| Dutch (Belgium) | `nl-BE` | Female | `nl-BE-DenaNeural` | General |
-| Dutch (Belgium) | `nl-BE` | Male | `nl-BE-ArnaudNeural` | General |
-| Dutch (Netherlands) | `nl-NL` | Female | `nl-NL-ColetteNeural` | General |
-| Dutch (Netherlands) | `nl-NL` | Female | `nl-NL-FennaNeural` | General |
-| Dutch (Netherlands) | `nl-NL` | Male | `nl-NL-MaartenNeural` | General |
-| English (Australia) | `en-AU` | Female | `en-AU-NatashaNeural` | General |
-| English (Australia) | `en-AU` | Male | `en-AU-WilliamNeural` | General |
-| English (Canada) | `en-CA` | Female | `en-CA-ClaraNeural` | General |
-| English (Canada) | `en-CA` | Male | `en-CA-LiamNeural` | General |
-| English (Hong Kong) | `en-HK` | Female | `en-HK-YanNeural` | General |
-| English (Hong Kong) | `en-HK` | Male | `en-HK-SamNeural` | General |
-| English (India) | `en-IN` | Female | `en-IN-NeerjaNeural` | General |
-| English (India) | `en-IN` | Male | `en-IN-PrabhatNeural` | General |
-| English (Ireland) | `en-IE` | Female | `en-IE-EmilyNeural` | General |
-| English (Ireland) | `en-IE` | Male | `en-IE-ConnorNeural` | General |
-| English (Kenya) | `en-KE` | Female | `en-KE-AsiliaNeural` | General |
-| English (Kenya) | `en-KE` | Male | `en-KE-ChilembaNeural` | General |
-| English (New Zealand) | `en-NZ` | Female | `en-NZ-MollyNeural` | General |
-| English (New Zealand) | `en-NZ` | Male | `en-NZ-MitchellNeural` | General |
-| English (Nigeria) | `en-NG` | Female | `en-NG-EzinneNeural` | General |
-| English (Nigeria) | `en-NG` | Male | `en-NG-AbeoNeural` | General |
-| English (Philippines) | `en-PH` | Female | `en-PH-RosaNeural` | General |
-| English (Philippines) | `en-PH` | Male | `en-PH-JamesNeural` | General |
-| English (Singapore) | `en-SG` | Female | `en-SG-LunaNeural` | General |
-| English (Singapore) | `en-SG` | Male | `en-SG-WayneNeural` | General |
-| English (South Africa) | `en-ZA` | Female | `en-ZA-LeahNeural` | General |
-| English (South Africa) | `en-ZA` | Male | `en-ZA-LukeNeural` | General |
-| English (Tanzania) | `en-TZ` | Female | `en-TZ-ImaniNeural` | General |
-| English (Tanzania) | `en-TZ` | Male | `en-TZ-ElimuNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-AbbiNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-BellaNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-HollieNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-LibbyNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-MaisieNeural` | General, child voice |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-MiaNeural` <sup>Retired on October 30, 2021; see the note below</sup> | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-OliviaNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-SoniaNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-AlfieNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-ElliotNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-EthanNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-NoahNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-OliverNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-RyanNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-ThomasNeural` | General |
-| English (United States) | `en-US` | Female | `en-US-AmberNeural` | General |
-| English (United States) | `en-US` | Female | `en-US-AriaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Female | `en-US-AshleyNeural` | General |
-| English (United States) | `en-US` | Female | `en-US-CoraNeural` | General |
-| English (United States) | `en-US` | Female | `en-US-ElizabethNeural` | General |
-| English (United States) | `en-US` | Female | `en-US-JennyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` as the primary default. Additional locales are supported [using SSML](speech-synthesis-markup.md#adjust-speaking-languages) | Female | `en-US-JennyMultilingualNeural` | General |
-| English (United States) | `en-US` | Female | `en-US-MichelleNeural`| General |
-| English (United States) | `en-US` | Female | `en-US-MonicaNeural` | General |
-| English (United States) | `en-US` | Female | `en-US-SaraNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Kid | `en-US-AnaNeural`| General |
-| English (United States) | `en-US` | Male | `en-US-BrandonNeural` | General |
-| English (United States) | `en-US` | Male | `en-US-ChristopherNeural` | General |
-| English (United States) | `en-US` | Male | `en-US-EricNeural` | General |
-| English (United States) | `en-US` | Male | `en-US-GuyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Male | `en-US-JacobNeural` | General |
-| Estonian (Estonia) | `et-EE` | Female | `et-EE-AnuNeural` | General |
-| Estonian (Estonia) | `et-EE` | Male | `et-EE-KertNeural` | General |
-| Filipino (Philippines) | `fil-PH` | Female | `fil-PH-BlessicaNeural` | General |
-| Filipino (Philippines) | `fil-PH` | Male | `fil-PH-AngeloNeural` | General |
-| Finnish (Finland) | `fi-FI` | Female | `fi-FI-NooraNeural` | General |
-| Finnish (Finland) | `fi-FI` | Female | `fi-FI-SelmaNeural` | General |
-| Finnish (Finland) | `fi-FI` | Male | `fi-FI-HarriNeural` | General |
-| French (Belgium) | `fr-BE` | Female | `fr-BE-CharlineNeural` | General |
-| French (Belgium) | `fr-BE` | Male | `fr-BE-GerardNeural` | General |
-| French (Canada) | `fr-CA` | Female | `fr-CA-SylvieNeural` | General |
-| French (Canada) | `fr-CA` | Male | `fr-CA-AntoineNeural` | General |
-| French (Canada) | `fr-CA` | Male | `fr-CA-JeanNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-BrigitteNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-CelesteNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-CoralieNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-DeniseNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) <sup>Public preview</sup> |
-| French (France) | `fr-FR` | Female | `fr-FR-EloiseNeural` | General, child voice |
-| French (France) | `fr-FR` | Female | `fr-FR-JacquelineNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-JosephineNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-YvetteNeural` | General |
-| French (France) | `fr-FR` | Male | `fr-FR-AlainNeural` | General |
-| French (France) | `fr-FR` | Male | `fr-FR-ClaudeNeural` | General |
-| French (France) | `fr-FR` | Male | `fr-FR-HenriNeural` | General |
-| French (France) | `fr-FR` | Male | `fr-FR-JeromeNeural` | General |
-| French (France) | `fr-FR` | Male | `fr-FR-MauriceNeural` | General |
-| French (France) | `fr-FR` | Male | `fr-FR-YvesNeural` | General |
-| French (Switzerland) | `fr-CH` | Female | `fr-CH-ArianeNeural` | General |
-| French (Switzerland) | `fr-CH` | Male | `fr-CH-FabriceNeural` | General |
-| Galician (Spain) | `gl-ES` | Female | `gl-ES-SabelaNeural` | General |
-| Galician (Spain) | `gl-ES` | Male | `gl-ES-RoiNeural` | General |
-| Georgian (Georgia) | `ka-GE` | Female | `ka-GE-EkaNeural` <sup>New</sup> | General |
-| Georgian (Georgia) | `ka-GE` | Male | `ka-GE-GiorgiNeural` <sup>New</sup> | General |
-| German (Austria) | `de-AT` | Female | `de-AT-IngridNeural` | General |
-| German (Austria) | `de-AT` | Male | `de-AT-JonasNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-AmalaNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-ElkeNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-GiselaNeural` | General, child voice |
-| German (Germany) | `de-DE` | Female | `de-DE-KatjaNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-KlarissaNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-LouisaNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-MajaNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-TanjaNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-BerndNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-ChristophNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-ConradNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KasperNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KillianNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KlausNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-RalfNeural` | General |
-| German (Switzerland) | `de-CH` | Female | `de-CH-LeniNeural` | General |
-| German (Switzerland) | `de-CH` | Male | `de-CH-JanNeural` | General |
-| Greek (Greece) | `el-GR` | Female | `el-GR-AthinaNeural` | General |
-| Greek (Greece) | `el-GR` | Male | `el-GR-NestorasNeural` | General |
-| Gujarati (India) | `gu-IN` | Female | `gu-IN-DhwaniNeural` | General |
-| Gujarati (India) | `gu-IN` | Male | `gu-IN-NiranjanNeural` | General |
-| Hebrew (Israel) | `he-IL` | Female | `he-IL-HilaNeural` | General |
-| Hebrew (Israel) | `he-IL` | Male | `he-IL-AvriNeural` | General |
-| Hindi (India) | `hi-IN` | Female | `hi-IN-SwaraNeural` | General |
-| Hindi (India) | `hi-IN` | Male | `hi-IN-MadhurNeural` | General |
-| Hungarian (Hungary) | `hu-HU` | Female | `hu-HU-NoemiNeural` | General |
-| Hungarian (Hungary) | `hu-HU` | Male | `hu-HU-TamasNeural` | General |
-| Icelandic (Iceland) | `is-IS` | Female | `is-IS-GudrunNeural` | General |
-| Icelandic (Iceland) | `is-IS` | Male | `is-IS-GunnarNeural` | General |
-| Indonesian (Indonesia) | `id-ID` | Female | `id-ID-GadisNeural` | General |
-| Indonesian (Indonesia) | `id-ID` | Male | `id-ID-ArdiNeural` | General |
-| Irish (Ireland) | `ga-IE` | Female | `ga-IE-OrlaNeural` | General |
-| Irish (Ireland) | `ga-IE` | Male | `ga-IE-ColmNeural` | General |
-| Italian (Italy) | `it-IT` | Female | `it-IT-ElsaNeural` | General |
-| Italian (Italy) | `it-IT` | Female | `it-IT-IsabellaNeural` | General |
-| Italian (Italy) | `it-IT` | Male | `it-IT-DiegoNeural` | General |
-| Japanese (Japan) | `ja-JP` | Female | `ja-JP-NanamiNeural` | General |
-| Japanese (Japan) | `ja-JP` | Male | `ja-JP-KeitaNeural` | General |
-| Javanese (Indonesia) | `jv-ID` | Female | `jv-ID-SitiNeural` | General |
-| Javanese (Indonesia) | `jv-ID` | Male | `jv-ID-DimasNeural` | General |
-| Kannada (India) | `kn-IN` | Female | `kn-IN-SapnaNeural` | General |
-| Kannada (India) | `kn-IN` | Male | `kn-IN-GaganNeural` | General |
-| Kazakh (Kazakhstan) | `kk-KZ` | Female | `kk-KZ-AigulNeural` | General |
-| Kazakh (Kazakhstan) | `kk-KZ` | Male | `kk-KZ-DauletNeural` | General |
-| Khmer (Cambodia) | `km-KH` | Female | `km-KH-SreymomNeural` | General |
-| Khmer (Cambodia) | `km-KH` | Male | `km-KH-PisethNeural` | General |
-| Korean (Korea) | `ko-KR` | Female | `ko-KR-SunHiNeural` | General |
-| Korean (Korea) | `ko-KR` | Male | `ko-KR-InJoonNeural` | General |
-| Lao (Laos) | `lo-LA` | Female | `lo-LA-KeomanyNeural` | General |
-| Lao (Laos) | `lo-LA` | Male | `lo-LA-ChanthavongNeural` | General |
-| Latvian (Latvia) | `lv-LV` | Female | `lv-LV-EveritaNeural` | General |
-| Latvian (Latvia) | `lv-LV` | Male | `lv-LV-NilsNeural` | General |
-| Lithuanian (Lithuania) | `lt-LT` | Female | `lt-LT-OnaNeural` | General |
-| Lithuanian (Lithuania) | `lt-LT` | Male | `lt-LT-LeonasNeural` | General |
-| Macedonian (Republic of North Macedonia) | `mk-MK` | Female | `mk-MK-MarijaNeural` | General |
-| Macedonian (Republic of North Macedonia) | `mk-MK` | Male | `mk-MK-AleksandarNeural` | General |
-| Malay (Malaysia) | `ms-MY` | Female | `ms-MY-YasminNeural` | General |
-| Malay (Malaysia) | `ms-MY` | Male | `ms-MY-OsmanNeural` | General |
-| Malayalam (India) | `ml-IN` | Female | `ml-IN-SobhanaNeural` | General |
-| Malayalam (India) | `ml-IN` | Male | `ml-IN-MidhunNeural` | General |
-| Maltese (Malta) | `mt-MT` | Female | `mt-MT-GraceNeural` | General |
-| Maltese (Malta) | `mt-MT` | Male | `mt-MT-JosephNeural` | General |
-| Marathi (India) | `mr-IN` | Female | `mr-IN-AarohiNeural` | General |
-| Marathi (India) | `mr-IN` | Male | `mr-IN-ManoharNeural` | General |
-| Mongolian (Mongolia) | `mn-MN` | Female | `mn-MN-YesuiNeural` <sup>New</sup> | General |
-| Mongolian (Mongolia) | `mn-MN` | Male | `mn-MN-BataaNeural` <sup>New</sup> | General |
-| Nepali (Nepal) | `ne-NP` | Female | `ne-NP-HemkalaNeural` <sup>New</sup> | General |
-| Nepali (Nepal) | `ne-NP` | Male | `ne-NP-SagarNeural` <sup>New</sup> | General |
-| Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-IselinNeural` | General |
-| Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-PernilleNeural` | General |
-| Norwegian (Bokmål, Norway) | `nb-NO` | Male | `nb-NO-FinnNeural` | General |
-| Pashto (Afghanistan) | `ps-AF` | Female | `ps-AF-LatifaNeural` | General |
-| Pashto (Afghanistan) | `ps-AF` | Male | `ps-AF-GulNawazNeural` | General |
-| Persian (Iran) | `fa-IR` | Female | `fa-IR-DilaraNeural` | General |
-| Persian (Iran) | `fa-IR` | Male | `fa-IR-FaridNeural` | General |
-| Polish (Poland) | `pl-PL` | Female | `pl-PL-AgnieszkaNeural` | General |
-| Polish (Poland) | `pl-PL` | Female | `pl-PL-ZofiaNeural` | General |
-| Polish (Poland) | `pl-PL` | Male | `pl-PL-MarekNeural` | General |
-| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-FranciscaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-AntonioNeural` | General |
-| Portuguese (Portugal) | `pt-PT` | Female | `pt-PT-FernandaNeural` | General |
-| Portuguese (Portugal) | `pt-PT` | Female | `pt-PT-RaquelNeural` | General |
-| Portuguese (Portugal) | `pt-PT` | Male | `pt-PT-DuarteNeural` | General |
-| Romanian (Romania) | `ro-RO` | Female | `ro-RO-AlinaNeural` | General |
-| Romanian (Romania) | `ro-RO` | Male | `ro-RO-EmilNeural` | General |
-| Russian (Russia) | `ru-RU` | Female | `ru-RU-DariyaNeural` | General |
-| Russian (Russia) | `ru-RU` | Female | `ru-RU-SvetlanaNeural` | General |
-| Russian (Russia) | `ru-RU` | Male | `ru-RU-DmitryNeural` | General |
-| Serbian (Serbia, Cyrillic) | `sr-RS` | Female | `sr-RS-SophieNeural` | General |
-| Serbian (Serbia, Cyrillic) | `sr-RS` | Male | `sr-RS-NicholasNeural` | General |
-| Sinhala (Sri Lanka) | `si-LK` | Female | `si-LK-ThiliniNeural` | General |
-| Sinhala (Sri Lanka) | `si-LK` | Male | `si-LK-SameeraNeural` | General |
-| Slovak (Slovakia) | `sk-SK` | Female | `sk-SK-ViktoriaNeural` | General |
-| Slovak (Slovakia) | `sk-SK` | Male | `sk-SK-LukasNeural` | General |
-| Slovenian (Slovenia) | `sl-SI` | Female | `sl-SI-PetraNeural` | General |
-| Slovenian (Slovenia) | `sl-SI` | Male | `sl-SI-RokNeural` | General |
-| Somali (Somalia) | `so-SO` | Female | `so-SO-UbaxNeural` | General |
-| Somali (Somalia) | `so-SO`| Male | `so-SO-MuuseNeural` | General |
-| Spanish (Argentina) | `es-AR` | Female | `es-AR-ElenaNeural` | General |
-| Spanish (Argentina) | `es-AR` | Male | `es-AR-TomasNeural` | General |
-| Spanish (Bolivia) | `es-BO` | Female | `es-BO-SofiaNeural` | General |
-| Spanish (Bolivia) | `es-BO` | Male | `es-BO-MarceloNeural` | General |
-| Spanish (Chile) | `es-CL` | Female | `es-CL-CatalinaNeural` | General |
-| Spanish (Chile) | `es-CL` | Male | `es-CL-LorenzoNeural` | General |
-| Spanish (Colombia) | `es-CO` | Female | `es-CO-SalomeNeural` | General |
-| Spanish (Colombia) | `es-CO` | Male | `es-CO-GonzaloNeural` | General |
-| Spanish (Costa Rica) | `es-CR` | Female | `es-CR-MariaNeural` | General |
-| Spanish (Costa Rica) | `es-CR` | Male | `es-CR-JuanNeural` | General |
-| Spanish (Cuba) | `es-CU` | Female | `es-CU-BelkysNeural` | General |
-| Spanish (Cuba) | `es-CU` | Male | `es-CU-ManuelNeural` | General |
-| Spanish (Dominican Republic) | `es-DO` | Female | `es-DO-RamonaNeural` | General |
-| Spanish (Dominican Republic) | `es-DO` | Male | `es-DO-EmilioNeural` | General |
-| Spanish (Ecuador) | `es-EC` | Female | `es-EC-AndreaNeural` | General |
-| Spanish (Ecuador) | `es-EC` | Male | `es-EC-LuisNeural` | General |
-| Spanish (El Salvador) | `es-SV` | Female | `es-SV-LorenaNeural` | General |
-| Spanish (El Salvador) | `es-SV` | Male | `es-SV-RodrigoNeural` | General |
-| Spanish (Equatorial Guinea) | `es-GQ` | Female | `es-GQ-TeresaNeural` | General |
-| Spanish (Equatorial Guinea) | `es-GQ` | Male | `es-GQ-JavierNeural` | General |
-| Spanish (Guatemala) | `es-GT` | Female | `es-GT-MartaNeural` | General |
-| Spanish (Guatemala) | `es-GT` | Male | `es-GT-AndresNeural` | General |
-| Spanish (Honduras) | `es-HN` | Female | `es-HN-KarlaNeural` | General |
-| Spanish (Honduras) | `es-HN` | Male | `es-HN-CarlosNeural` | General |
-| Spanish (Mexico) | `es-MX` | Female | `es-MX-DaliaNeural` | General |
-| Spanish (Mexico) | `es-MX` | Male | `es-MX-JorgeNeural` | General |
-| Spanish (Nicaragua) | `es-NI` | Female | `es-NI-YolandaNeural` | General |
-| Spanish (Nicaragua) | `es-NI` | Male | `es-NI-FedericoNeural` | General |
-| Spanish (Panama) | `es-PA` | Female | `es-PA-MargaritaNeural` | General |
-| Spanish (Panama) | `es-PA` | Male | `es-PA-RobertoNeural` | General |
-| Spanish (Paraguay) | `es-PY` | Female | `es-PY-TaniaNeural` | General |
-| Spanish (Paraguay) | `es-PY` | Male | `es-PY-MarioNeural` | General |
-| Spanish (Peru) | `es-PE` | Female | `es-PE-CamilaNeural` | General |
-| Spanish (Peru) | `es-PE` | Male | `es-PE-AlexNeural` | General |
-| Spanish (Puerto Rico) | `es-PR` | Female | `es-PR-KarinaNeural` | General |
-| Spanish (Puerto Rico) | `es-PR` | Male | `es-PR-VictorNeural` | General |
-| Spanish (Spain) | `es-ES` | Female | `es-ES-ElviraNeural` | General |
-| Spanish (Spain) | `es-ES` | Male | `es-ES-AlvaroNeural` | General |
-| Spanish (Uruguay) | `es-UY` | Female | `es-UY-ValentinaNeural` | General |
-| Spanish (Uruguay) | `es-UY` | Male | `es-UY-MateoNeural` | General |
-| Spanish (US) | `es-US` | Female | `es-US-PalomaNeural` | General |
-| Spanish (US) | `es-US` | Male | `es-US-AlonsoNeural` | General |
-| Spanish (Venezuela) | `es-VE` | Female | `es-VE-PaolaNeural` | General |
-| Spanish (Venezuela) | `es-VE` | Male | `es-VE-SebastianNeural` | General |
-| Sundanese (Indonesia) | `su-ID` | Female | `su-ID-TutiNeural` | General |
-| Sundanese (Indonesia) | `su-ID` | Male | `su-ID-JajangNeural` | General |
-| Swahili (Kenya) | `sw-KE` | Female | `sw-KE-ZuriNeural` | General |
-| Swahili (Kenya) | `sw-KE` | Male | `sw-KE-RafikiNeural` | General |
-| Swahili (Tanzania) | `sw-TZ` | Female | `sw-TZ-RehemaNeural` | General |
-| Swahili (Tanzania) | `sw-TZ` | Male | `sw-TZ-DaudiNeural` | General |
-| Swedish (Sweden) | `sv-SE` | Female | `sv-SE-HilleviNeural` | General |
-| Swedish (Sweden) | `sv-SE` | Female | `sv-SE-SofieNeural` | General |
-| Swedish (Sweden) | `sv-SE` | Male | `sv-SE-MattiasNeural` | General |
-| Tamil (India) | `ta-IN` | Female | `ta-IN-PallaviNeural` | General |
-| Tamil (India) | `ta-IN` | Male | `ta-IN-ValluvarNeural` | General |
-| Tamil (Malaysia) | `ta-MY` | Female | `ta-MY-KaniNeural` <sup>New</sup> | General |
-| Tamil (Malaysia) | `ta-MY` | Male | `ta-MY-SuryaNeural` <sup>New</sup> | General |
-| Tamil (Singapore) | `ta-SG` | Female | `ta-SG-VenbaNeural` | General |
-| Tamil (Singapore) | `ta-SG` | Male | `ta-SG-AnbuNeural` | General |
-| Tamil (Sri Lanka) | `ta-LK` | Female | `ta-LK-SaranyaNeural` | General |
-| Tamil (Sri Lanka) | `ta-LK` | Male | `ta-LK-KumarNeural` | General |
-| Telugu (India) | `te-IN` | Female | `te-IN-ShrutiNeural` | General |
-| Telugu (India) | `te-IN` | Male | `te-IN-MohanNeural` | General |
-| Thai (Thailand) | `th-TH` | Female | `th-TH-AcharaNeural` | General |
-| Thai (Thailand) | `th-TH` | Female | `th-TH-PremwadeeNeural` | General |
-| Thai (Thailand) | `th-TH` | Male | `th-TH-NiwatNeural` | General |
-| Turkish (Turkey) | `tr-TR` | Female | `tr-TR-EmelNeural` | General |
-| Turkish (Turkey) | `tr-TR` | Male | `tr-TR-AhmetNeural` | General |
-| Ukrainian (Ukraine) | `uk-UA` | Female | `uk-UA-PolinaNeural` | General |
-| Ukrainian (Ukraine) | `uk-UA` | Male | `uk-UA-OstapNeural` | General |
-| Urdu (India) | `ur-IN` | Female | `ur-IN-GulNeural` | General |
-| Urdu (India) | `ur-IN` | Male | `ur-IN-SalmanNeural` | General |
-| Urdu (Pakistan) | `ur-PK` | Female | `ur-PK-UzmaNeural` | General |
-| Urdu (Pakistan) | `ur-PK` | Male | `ur-PK-AsadNeural` | General |
-| Uzbek (Uzbekistan) | `uz-UZ` | Female | `uz-UZ-MadinaNeural` | General |
-| Uzbek (Uzbekistan) | `uz-UZ` | Male | `uz-UZ-SardorNeural` | General |
-| Vietnamese (Vietnam) | `vi-VN` | Female | `vi-VN-HoaiMyNeural` | General |
-| Vietnamese (Vietnam) | `vi-VN` | Male | `vi-VN-NamMinhNeural` | General |
-| Welsh (United Kingdom) | `cy-GB` | Female | `cy-GB-NiaNeural` | General |
-| Welsh (United Kingdom) | `cy-GB` | Male | `cy-GB-AledNeural` | General |
-| Zulu (South Africa) | `zu-ZA` | Female | `zu-ZA-ThandoNeural` | General |
-| Zulu (South Africa) | `zu-ZA` | Male | `zu-ZA-ThembaNeural` | General |
-
-### Prebuilt neural voices in preview
-
-The following neural voices are in public preview.
+> Pricing varies for Prebuilt Neural Voice (see *Neural* on the pricing page) and Custom Neural Voice (see *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
-> [!NOTE]
-> Voices and styles in public preview are only available in three service [regions](regions.md): East US, West Europe, and Southeast Asia.
+Prebuilt neural voices are created from samples that use a 24-kHz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
-| Language | Locale | Gender | Voice name | Style support |
-|-||--|-||
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaomengNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyiNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaozhenNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunfengNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunhaoNeural` <sup>New</sup> | Optimized for promoting a product or service, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunjianNeural` <sup>New</sup> | Optimized for broadcasting sports event, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunxiaNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunzeNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN-liaoning` | Female | `zh-CN-liaoning-XiaobeiNeural` <sup>New</sup> | General, Liaoning accent |
-| Chinese (Mandarin, Simplified) | `zh-CN-sichuan` | Male | `zh-CN-sichuan-YunxiSichuanNeural` <sup>New</sup> | General, Sichuan accent |
-| English (United States) | `en-US` | Female | `en-US-JaneNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Female | `en-US-NancyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Male | `en-US-AIGenerate1Neural` <sup>New</sup> | General|
-| English (United States) | `en-US` | Male | `en-US-DavisNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Male | `en-US-JasonNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Male | `en-US-RogerNeural` <sup>New</sup> | General|
-| English (United States) | `en-US` | Male | `en-US-TonyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Italian (Italy) | `it-IT` | Female | `it-IT-FabiolaNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Female | `it-IT-FiammaNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Female | `it-IT-ImeldaNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Female | `it-IT-IrmaNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Female | `it-IT-PalmiraNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Female | `it-IT-PierinaNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Male | `it-IT-BenignoNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Male | `it-IT-CalimeroNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Male | `it-IT-CataldoNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Male | `it-IT-GianniNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Male | `it-IT-LisandroNeural` <sup>New</sup> | General |
-| Italian (Italy) | `it-IT` | Male | `it-IT-RinaldoNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-BrendaNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-ElzaNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-GiovannaNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-LeilaNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-LeticiaNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-ManuelaNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-YaraNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-DonatoNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-FabioNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-HumbertoNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-JulioNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-NicolauNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-ValerioNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Female | `es-MX-BeatrizNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Female | `es-MX-CandelaNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Female | `es-MX-CarlotaNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Female | `es-MX-LarissaNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Female | `es-MX-MarinaNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Female | `es-MX-NuriaNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Female | `es-MX-RenataNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Male | `es-MX-CecilioNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Male | `es-MX-GerardoNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Male | `es-MX-LibertoNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Male | `es-MX-LucianoNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Male | `es-MX-PelayoNeural` <sup>New</sup> | General |
-| Spanish (Mexico) | `es-MX` | Male | `es-MX-YagoNeural` <sup>New</sup> | General |
+The following neural voices are retired:
+- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. Service requests to `en-GB-MiaNeural` are automatically redirected to `en-GB-SoniaNeural`. If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version; as of October 30, 2021, requests from earlier container versions don't succeed.
+- The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria."
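
To use one of the prebuilt voices, set its full name on the synthesizer configuration. A minimal Python sketch, assuming placeholder key and region values:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourKey", region="YourRegion")
# Voice names follow the <locale>-<Name>Neural pattern from the table above.
speech_config.speech_synthesis_voice_name = "en-GB-SoniaNeural"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("Welcome to the Speech service.").get()
```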
### Voice styles and roles

In some cases, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant. With roles, the same voice can act as a different age and gender.
-> [!NOTE]
-> The angry, cheerful, excited, friendly, hopeful, sad, shouting, terrified, unfriendly, and whispering styles for DavisNeural, JaneNeural, JasonNeural, NancyNeural and TonyNeural are only available in three service regions: East US, West Europe, and Southeast Asia.
To learn how you can configure and adjust neural voice styles and roles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles). Use the following table to determine supported styles and roles for each neural voice; a short SSML sketch follows the table.
-|Voice|Styles|Style degree|Roles|
-|--|--|--|--|
-|en-US-AriaNeural|`angry`, `chat`, `cheerful`, `customerservice`, `empathetic`, `excited`, `friendly`, `hopeful`, `narration-professional`, `newscast-casual`, `newscast-formal`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-DavisNeural <sup>Public preview</sup>|`angry`, `chat`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-GuyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-JaneNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-JasonNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-JennyNeural|`angry`, `assistant`, `chat`, `cheerful`,`customerservice`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-NancyNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-SaraNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-TonyNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|fr-FR-DeniseNeural |`cheerful`, `sad`|||
-|ja-JP-NanamiNeural|`chat`, `cheerful`, `customerservice`|||
-|pt-BR-FranciscaNeural|`calm`|||
-|zh-CN-XiaohanNeural|`affectionate`, `angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `gentle`, `sad`, `serious`|Supported||
-|zh-CN-XiaomengNeural <sup>Public preview</sup>|`chat`|Supported||
-|zh-CN-XiaomoNeural|`affectionate`, `angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `envious`, `fearful`, `gentle`, `sad`, `serious`|Supported|Supported|
-|zh-CN-XiaoruiNeural|`angry`, `calm`, `fearful`, `sad`|Supported||
-|zh-CN-XiaoshuangNeural|`chat`|Supported||
-|zh-CN-XiaoxiaoNeural|`affectionate`, `angry`, `assistant`, `calm`, `chat`, `cheerful`, `customerservice`, `disgruntled`, `fearful`, `gentle`, `lyrical`, `newscast`, `poetry-reading`, `sad`, `serious`|Supported||
-|zh-CN-XiaoxuanNeural|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `fearful`, `gentle`, `serious`|Supported|Supported|
-|zh-CN-XiaoyiNeural <sup>Public preview</sup>|`affectionate`, `angry`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `gentle`, `sad`, `serious`|Supported||
-|zh-CN-XiaozhenNeural <sup>Public preview</sup>|`angry`, `cheerful`, `disgruntled`, `fearful`, `sad`, `serious`|Supported||
-|zh-CN-YunfengNeural <sup>Public preview</sup>|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `fearful`, `sad`, `serious`|Supported||
-|zh-CN-YunhaoNeural <sup>Public preview</sup>|`general`, `advertisement-upbeat` <sup>Public preview</sup>|Supported||
-|zh-CN-YunjianNeural <sup>Public preview</sup>|`narration-relaxed`, `sports-commentary` <sup>Public preview</sup>, `sports-commentary-excited` <sup>Public preview</sup>|Supported||
-|zh-CN-YunxiNeural|`angry`, `assistant`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `fearful`, `narration-relaxed`, `sad`, `serious`|Supported|Supported|
-|zh-CN-YunxiaNeural <sup>Public preview</sup>|`angry`, `calm`, `cheerful`, `fearful`, `sad`|Supported||
-|zh-CN-YunyangNeural|`customerservice`, `narration-professional`, `newscast-casual`|Supported||
-|zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `sad`, `serious`|Supported|Supported|
-|zh-CN-YunzeNeural <sup>Public preview</sup>|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `documentary-narration`, `fearful`, `sad`, `serious`|Supported|Supported|
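
A minimal sketch of requesting a style from the table, assuming placeholder credentials: the SSML `mstts:express-as` element selects the `cheerful` style for `en-US-AriaNeural` (style degree is left out because the table marks it as supported only for the `zh-CN` voices):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourKey", region="YourRegion")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# Request the cheerful style, which the table lists for en-US-AriaNeural.
ssml = """
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
       xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'>
  <voice name='en-US-AriaNeural'>
    <mstts:express-as style='cheerful'>
      That'd be just amazing!
    </mstts:express-as>
  </voice>
</speak>
"""
synthesizer.speak_ssml_async(ssml).get()
```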
### Custom Neural Voice
-Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data.
+Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data. There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (preview).
Select the right locale that matches your training data to train a custom neural voice model. For example, if the recording data is spoken in English with a British accent, select `en-GB`.
-With the cross-lingual feature (preview), you can transfer your custom neural voice model to speak a second language. For example, with the `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages marked with "Yes" in the Cross-lingual column in the following table.
-
-There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (preview). In the following table, all the languages are supported by CNV Pro, and the languages marked with "Yes" in the Custom Neural Voice Lite column are supported by CNV Lite.
-
-| Language | Locale | Cross-lingual (preview) |Custom Neural Voice Lite (preview)|
-|--|--|--|--|
-| Arabic (Egypt) | `ar-EG` | No |No|
-| Arabic (Saudi Arabia) | `ar-SA` | No |No|
-| Bulgarian (Bulgaria) | `bg-BG` | No |No|
-| Catalan (Spain) | `ca-ES` | No |No|
-| Chinese (Cantonese, Traditional) | `zh-HK` | No |No|
-| Chinese (Mandarin, Simplified) | `zh-CN` | Yes |Yes|
-| Chinese (Mandarin, Simplified), English bilingual | `zh-CN` bilingual | Yes |No|
-| Chinese (Taiwanese Mandarin) | `zh-TW` | No |No|
-| Croatian (Croatia) | `hr-HR` | No |No|
-| Czech (Czech) | `cs-CZ` | No |No|
-| Danish (Denmark) | `da-DK` | No |No|
-| Dutch (Netherlands) | `nl-NL` | No |No|
-| English (Australia) | `en-AU` | Yes |Yes|
-| English (Canada) | `en-CA` | No |Yes|
-| English (India) | `en-IN` | No |No|
-| English (Ireland) | `en-IE` | No |No|
-| English (United Kingdom) | `en-GB` | Yes |Yes|
-| English (United States) | `en-US` | Yes |Yes|
-| Finnish (Finland) | `fi-FI` | No |No|
-| French (Canada) | `fr-CA` | Yes |No|
-| French (France) | `fr-FR` | Yes |Yes|
-| French (Switzerland) | `fr-CH` | No |No|
-| German (Austria) | `de-AT` | No |No|
-| German (Germany) | `de-DE` | Yes |Yes|
-| German (Switzerland) | `de-CH` | No |No|
-| Greek (Greece) | `el-GR` | No |No|
-| Hebrew (Israel) | `he-IL` | No |No|
-| Hindi (India) | `hi-IN` | No |No|
-| Hungarian (Hungary) | `hu-HU` | No |No|
-| Indonesian (Indonesia) | `id-ID` | No |No|
-| Italian (Italy) | `it-IT` | Yes |Yes|
-| Japanese (Japan) | `ja-JP` | Yes |Yes|
-| Korean (Korea) | `ko-KR` | Yes |Yes|
-| Malay (Malaysia) | `ms-MY` | No |No|
-| Norwegian (Bokmål, Norway) | `nb-NO` | No |No|
-| Polish (Poland) | `pl-PL` | No |No|
-| Portuguese (Brazil) | `pt-BR` | Yes |Yes|
-| Portuguese (Portugal) | `pt-PT` | No |No|
-| Romanian (Romania) | `ro-RO` | No |No|
-| Russian (Russia) | `ru-RU` | Yes |No|
-| Slovak (Slovakia) | `sk-SK` | No |No|
-| Slovenian (Slovenia) | `sl-SI` | No |No|
-| Spanish (Mexico) | `es-MX` | Yes |Yes|
-| Spanish (Spain) | `es-ES` | Yes |No|
-| Swedish (Sweden) | `sv-SE` | No |No|
-| Tamil (India) | `ta-IN` | No |No |
-| Telugu (India) | `te-IN` | No |No |
-| Thai (Thailand) | `th-TH` | No |No |
-| Turkish (Turkey) | `tr-TR` | No |No|
-| Vietnamese (Vietnam) | `vi-VN` | No |No|
-
-### Viseme
+With the cross-lingual feature (preview), you can transfer your custom neural voice model to speak a second language. For example, with `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages that have cross-lingual support.
-A _viseme_ is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. Each viseme depicts the key facial poses for a specific set of phonemes. Speech audio output can be accompanied by a viseme ID, Scalable Vector Graphics (SVG), or blend shapes. For more information, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md).
+# [Pronunciation assessment](#tab/pronunciation-assessment)
-> [!NOTE]
-> Viseme ID supports [neural voices](#text-to-speech) in the locales listed below. SVG only supports neural voices in the `en-US` locale, and blend shapes support neural voices in the `en-US` and `zh-CN` locales.
+The table in this section summarizes the locales supported for pronunciation assessment.
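
As a minimal sketch (placeholder key, region, and reference text), a pronunciation assessment configuration is applied to a recognizer that uses one of the supported locales:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourKey", region="YourRegion")
speech_config.speech_recognition_language = "en-US"  # generally available locale
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Score the spoken audio against a reference text, down to the phoneme level.
pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Good morning.",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
pron_config.apply_to(recognizer)

result = recognizer.recognize_once_async().get()
assessment = speechsdk.PronunciationAssessmentResult(result)
print(assessment.accuracy_score, assessment.fluency_score)
```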
-The following table lists the languages supported by viseme ID; a short event-handling sketch follows the table.
-| Language | Locale |
-|||
-| Arabic (Algeria) | `ar-DZ` |
-| Arabic (Bahrain) | `ar-BH` |
-| Arabic (Egypt) | `ar-EG` |
-| Arabic (Iraq) | `ar-IQ` |
-| Arabic (Jordan) | `ar-JO` |
-| Arabic (Kuwait) | `ar-KW` |
-| Arabic (Lebanon) | `ar-LB` |
-| Arabic (Libya) | `ar-LY` |
-| Arabic (Morocco) | `ar-MA` |
-| Arabic (Oman) | `ar-OM` |
-| Arabic (Qatar) | `ar-QA` |
-| Arabic (Saudi Arabia) | `ar-SA` |
-| Arabic (Syria) | `ar-SY` |
-| Arabic (Tunisia) | `ar-TN` |
-| Arabic (United Arab Emirates) | `ar-AE` |
-| Arabic (Yemen) | `ar-YE` |
-| Bulgarian (Bulgaria) | `bg-BG` |
-| Catalan (Spain) | `ca-ES` |
-| Chinese (Cantonese, Traditional) | `zh-HK` |
-| Chinese (Mandarin, Simplified) | `zh-CN` |
-| Chinese (Taiwanese Mandarin) | `zh-TW` |
-| Croatian (Croatia) | `hr-HR` |
-| Czech (Czech) | `cs-CZ` |
-| Danish (Denmark) | `da-DK` |
-| Dutch (Belgium) | `nl-BE` |
-| Dutch (Netherlands) | `nl-NL` |
-| English (Australia) | `en-AU` |
-| English (Canada) | `en-CA` |
-| English (Hong Kong) | `en-HK` |
-| English (India) | `en-IN` |
-| English (Ireland) | `en-IE` |
-| English (Kenya) | `en-KE` |
-| English (New Zealand) | `en-NZ` |
-| English (Nigeria) | `en-NG` |
-| English (Philippines) | `en-PH` |
-| English (Singapore) | `en-SG` |
-| English (South Africa) | `en-ZA` |
-| English (Tanzania) | `en-TZ` |
-| English (United Kingdom) | `en-GB` |
-| English (United States) | `en-US` |
-| Finnish (Finland) | `fi-FI` |
-| French (Belgium) | `fr-BE` |
-| French (Canada) | `fr-CA` |
-| French (France) | `fr-FR` |
-| French (Switzerland) | `fr-CH` |
-| German (Austria) | `de-AT` |
-| German (Germany) | `de-DE` |
-| German (Switzerland) | `de-CH` |
-| Greek (Greece) | `el-GR` |
-| Gujarati (India) | `gu-IN` |
-| Hebrew (Israel) | `he-IL` |
-| Hindi (India) | `hi-IN` |
-| Hungarian (Hungary) | `hu-HU` |
-| Indonesian (Indonesia) | `id-ID` |
-| Italian (Italy) | `it-IT` |
-| Japanese (Japan) | `ja-JP` |
-| Korean (Korea) | `ko-KR` |
-| Malay (Malaysia) | `ms-MY` |
-| Marathi (India) | `mr-IN` |
-| Norwegian (Bokmål, Norway) | `nb-NO` |
-| Polish (Poland) | `pl-PL` |
-| Portuguese (Brazil) | `pt-BR` |
-| Portuguese (Portugal) | `pt-PT` |
-| Romanian (Romania) | `ro-RO` |
-| Russian (Russia) | `ru-RU` |
-| Slovak (Slovakia) | `sk-SK` |
-| Slovenian (Slovenia) | `sl-SI` |
-| Spanish (Argentina) | `es-AR` |
-| Spanish (Bolivia) | `es-BO` |
-| Spanish (Chile) | `es-CL` |
-| Spanish (Colombia) | `es-CO` |
-| Spanish (Costa Rica) | `es-CR` |
-| Spanish (Cuba) | `es-CU` |
-| Spanish (Dominican Republic) | `es-DO` |
-| Spanish (Ecuador) | `es-EC` |
-| Spanish (El Salvador) | `es-SV` |
-| Spanish (Equatorial Guinea) | `es-GQ` |
-| Spanish (Guatemala) | `es-GT` |
-| Spanish (Honduras) | `es-HN` |
-| Spanish (Mexico) | `es-MX` |
-| Spanish (Nicaragua) | `es-NI` |
-| Spanish (Panama) | `es-PA` |
-| Spanish (Paraguay) | `es-PY` |
-| Spanish (Peru) | `es-PE` |
-| Spanish (Puerto Rico) | `es-PR` |
-| Spanish (Spain) | `es-ES` |
-| Spanish (Uruguay) | `es-UY` |
-| Spanish (US) | `es-US` |
-| Spanish (Venezuela) | `es-VE` |
-| Swahili (Tanzania) | `sw-TZ` |
-| Swedish (Sweden) | `sv-SE` |
-| Tamil (India) | `ta-IN` |
-| Tamil (Malaysia) | `ta-MY` |
-| Tamil (Singapore) | `ta-SG` |
-| Tamil (Sri Lanka) | `ta-LK` |
-| Telugu (India) | `te-IN` |
-| Thai (Thailand) | `th-TH` |
-| Turkish (Turkey) | `tr-TR` |
-| Ukrainian (Ukraine) | `uk-UA` |
-| Urdu (India) | `ur-IN` |
-| Urdu (Pakistan) | `ur-PK` |
-| Vietnamese (Vietnam) | `vi-VN` |
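
As the sketch referenced above (placeholder credentials), viseme IDs arrive as events during synthesis for any of the listed locales:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourKey", region="YourRegion")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# Each event carries a viseme ID and its audio offset in 100-nanosecond ticks.
def on_viseme(evt):
    print(f"Viseme {evt.viseme_id} at {evt.audio_offset / 10000:.0f} ms")

synthesizer.viseme_received.connect(on_viseme)
synthesizer.speak_text_async("Hello world").get()
```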
+# [Speech translation](#tab/speech-translation)
-## Language identification
+The table in this section summarizes the locales supported for speech translation. Speech translation supports different languages for speech-to-speech and speech-to-text translation. The available target languages depend on whether the translation target is speech or text.
-With language identification, you set and get one of the supported locales in the following table. We only compare at the language level, such as English and German. If you include multiple locales of the same language, for example, `en-IN` and `en-US`, we'll only compare English (`en`) with the other candidate languages.
+#### Translate from language
-|Language|Locale (BCP-47)|
-|--|--|
-|Arabic|`ar-DZ`<br/>`ar-BH`<br/>`ar-EG`<br/>`ar-IQ`<br/>`ar-OM`<br/>`ar-SY`|
-|Bulgarian|`bg-BG`|
-|Catalan|`ca-ES`|
-|Chinese, Mandarin|`zh-CN`<br/>`zh-TW`|
-|Chinese, Traditional|`zh-HK`|
-|Croatian|`hr-HR`|
-|Czech|`cs-CZ`|
-|Danish|`da-DK`|
-|Dutch|`nl-NL`|
-|English|`en-AU`<br/>`en-CA`<br/>`en-GH`<br/>`en-HK`<br/>`en-IN`<br/>`en-IE`<br/>`en-KE`<br/>`en-NZ`<br/>`en-NG`<br/>`en-PH`<br/>`en-SG`<br/>`en-ZA`<br/>`en-TZ`<br/>`en-GB`<br/>`en-US`|
-|Estonian|`et-EE`|
-|Finnish|`fi-FI`|
-|French|`fr-CA`<br/>`fr-FR`|
-|German|`de-DE`|
-|Greek|`el-GR`|
-|Gujarati|`gu-IN`|
-|Hindi|`hi-IN`|
-|Hungarian|`hu-HU`|
-|Indonesian|`id-ID`|
-|Irish|`ga-IE`|
-|Italian|`it-IT`|
-|Japanese|`ja-JP`|
-|Korean|`ko-KR`|
-|Latvian|`lv-LV`|
-|Lithuanian|`lt-LT`|
-|Maltese|`mt-MT`|
-|Marathi|`mr-IN`|
-|Norwegian|`nb-NO`|
-|Polish|`pl-PL`|
-|Portuguese|`pt-BR`<br/>`pt-PT`|
-|Romanian|`ro-RO`|
-|Russian|`ru-RU`|
-|Slovak|`sk-SK`|
-|Slovenian|`sl-SI`|
-|Spanish|`es-AR`<br/>`es-BO`<br/>`es-CL`<br/>`es-CO`<br/>`es-CR`<br/>`es-CU`<br/>`es-DO`<br/>`es-EC`<br/>`es-SV`<br/>`es-GQ`<br/>`es-GT`<br/>`es-HN`<br/>`es-MX`<br/>`es-NI`<br/>`es-PA`<br/>`es-PY`<br/>`es-PE`<br/>`es-PR`<br/>`es-ES`<br/>`es-UY`<br/>`es-US`<br/>`es-VE`|
-|Swedish|`sv-SE`|
-|Tamil|`ta-IN`|
-|Telugu|`te-IN`|
-|Thai|`th-TH`|
-|Turkish|`tr-TR`|
-|Ukrainian|`uk-UA`|
+To set the input speech recognition language, specify the full locale with a dash (`-`) separator. See the [speech-to-text language table](?tabs=stt#supported-languages). The default language is `en-US` if you don't specify a language.
-## Pronunciation assessment
+#### Translate to text language
-The following table lists the released languages and public preview languages.
-
-| Language | Locale |
-|--|--|
-|Chinese (Mandarin, Simplified)|`zh-CN`<sup>Public preview</sup> |
-|English (Australia)|`en-AU`<sup>Public preview</sup> |
-|English (United Kingdom)|`en-GB`<sup>Public preview</sup> |
-|English (United States)|`en-US`<sup>Generally available</sup>|
-|French (France)|`fr-FR`<sup>Public preview</sup> |
-|German (Germany)|`de-DE`<sup>Public preview</sup> |
-|Spanish (Spain)|`es-ES`<sup>Public preview</sup> |
+To set the translation target language, with few exceptions you only specify the language code that precedes the locale dash (`-`) separator. For example, use `es` for Spanish (Spain) instead of `es-ES`. See the speech translation target language table below. The default language is `en` if you don't specify a language.
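
A minimal Python sketch of both settings, with placeholder credentials: the input is the full `en-US` locale, while the target is the bare `es` code:

```python
import azure.cognitiveservices.speech as speechsdk

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YourKey", region="YourRegion")
# Input: full locale with the dash separator.
translation_config.speech_recognition_language = "en-US"
# Target: language code only, for example "es" rather than "es-ES".
translation_config.add_target_language("es")

recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config)
result = recognizer.recognize_once()
print(result.translations["es"])
```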
-> [!NOTE]
-> For pronunciation assessment, `en-US` and `en-GB` are available in [all regions](regions.md#speech-service), `zh-CN` is available in the East Asia and Southeast Asia regions, `de-DE`, `es-ES`, and `fr-FR` are available in the West Europe region, and `en-AU` is available in the Australia East region.
-## Speech translation
+# [Language identification](#tab/language-identification)
-Speech Translation supports different languages for speech-to-speech and speech-to-text translation. The available target languages depend on whether the translation target is speech or text.
+The table in this section summarizes the locales supported for language identification. With language identification, the Speech service compares speech at the language level, such as English and German. If you include multiple locales of the same language, for example, `en-IN` English (India) and `en-US` English (United States), we'll only compare `en` (English) with the other candidate languages.
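
For example (a minimal sketch, placeholder credentials), candidate locales are supplied through an auto-detect configuration; per the rule above, include only one locale per language:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourKey", region="YourRegion")
# One locale per language: en and de are compared, not en-US versus en-IN.
auto_detect = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE"])

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect)
result = recognizer.recognize_once()
detected = speechsdk.AutoDetectSourceLanguageResult(result)
print(detected.language, result.text)
```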
-To set the input speech recognition language, specify the full locale with a dash (`-`) separator. See the [speech-to-text language table](#speech-to-text) above. The default language is `en-US` if you don't specify a language.
-To set the translation target language, with few exceptions you only specify the language code that precedes the locale dash (`-`) separator. For example, use `es` for Spanish (Spain) instead of `es-ES`. See the speech translation target language table below. The default language is `en` if you don't specify a language.
+# [Speaker recognition](#tab/speaker-recognition)
-### Text languages
+The table in this section summarizes the locales supported for Speaker recognition. Speaker recognition is mostly language agnostic. The universal model for text-independent speaker recognition combines various data sources from multiple languages. We've tuned and evaluated the model on these languages and locales. For more information on speaker recognition, see the [overview](speaker-recognition-overview.md).
-| Text language | Language code |
-|:|:-:|
-| Afrikaans | `af` |
-| Albanian | `sq` |
-| Amharic | `am` |
-| Arabic | `ar` |
-| Armenian | `hy` |
-| Assamese | `as` |
-| Azerbaijani | `az` |
-| Bangla | `bn` |
-| Bosnian (Latin) | `bs` |
-| Bulgarian | `bg` |
-| Cantonese (Traditional) | `yue` |
-| Catalan | `ca` |
-| Chinese (Literary) | `lzh` |
-| Chinese Simplified | `zh-Hans` |
-| Chinese Traditional | `zh-Hant` |
-| Croatian | `hr` |
-| Czech | `cs` |
-| Danish | `da` |
-| Dari | `prs` |
-| Dutch | `nl` |
-| English | `en` |
-| Estonian | `et` |
-| Fijian | `fj` |
-| Filipino | `fil` |
-| Finnish | `fi` |
-| French | `fr` |
-| French (Canada) | `fr-ca` |
-| German | `de` |
-| Greek | `el` |
-| Gujarati | `gu` |
-| Haitian Creole | `ht` |
-| Hebrew | `he` |
-| Hindi | `hi` |
-| Hmong Daw | `mww` |
-| Hungarian | `hu` |
-| Icelandic | `is` |
-| Indonesian | `id` |
-| Inuktitut | `iu` |
-| Irish | `ga` |
-| Italian | `it` |
-| Japanese | `ja` |
-| Kannada | `kn` |
-| Kazakh | `kk` |
-| Khmer | `km` |
-| Klingon | `tlh-Latn` |
-| Klingon (plqaD) | `tlh-Piqd` |
-| Korean | `ko` |
-| Kurdish (Central) | `ku` |
-| Kurdish (Northern) | `kmr` |
-| Lao | `lo` |
-| Latvian | `lv` |
-| Lithuanian | `lt` |
-| Malagasy | `mg` |
-| Malay | `ms` |
-| Malayalam | `ml` |
-| Maltese | `mt` |
-| Maori | `mi` |
-| Marathi | `mr` |
-| Myanmar | `my` |
-| Nepali | `ne` |
-| Norwegian | `nb` |
-| Odia | `or` |
-| Pashto | `ps` |
-| Persian | `fa` |
-| Polish | `pl` |
-| Portuguese (Brazil) | `pt` |
-| Portuguese (Portugal) | `pt-pt` |
-| Punjabi | `pa` |
-| Queretaro Otomi | `otq` |
-| Romanian | `ro` |
-| Russian | `ru` |
-| Samoan | `sm` |
-| Serbian (Cyrillic) | `sr-Cyrl` |
-| Serbian (Latin) | `sr-Latn` |
-| Slovak | `sk` |
-| Slovenian | `sl` |
-| Spanish | `es` |
-| Swahili | `sw` |
-| Swedish | `sv` |
-| Tahitian | `ty` |
-| Tamil | `ta` |
-| Telugu | `te` |
-| Thai | `th` |
-| Tigrinya | `ti` |
-| Tongan | `to` |
-| Turkish | `tr` |
-| Ukrainian | `uk` |
-| Urdu | `ur` |
-| Vietnamese | `vi` |
-| Welsh | `cy` |
-| Yucatec Maya | `yua` |
-## Speaker recognition
+# [Custom keyword](#tab/custom-keyword)
-Speaker recognition is mostly language agnostic. We built a universal model for text-independent speaker recognition by combining various data sources from multiple languages. We've tuned and evaluated the model on the languages and locales that appear in the following table. For more information on speaker recognition, see the [overview](speaker-recognition-overview.md).
+The table in this section summarizes the locales supported for custom keyword and keyword verification.
-| Language | Locale (BCP-47) | Text-dependent verification | Text-independent verification | Text-independent identification |
-|-|-|-|-|-|
-|English (US) | `en-US` | Yes | Yes | Yes |
-|Chinese (Mandarin, simplified) | `zh-CN` | n/a | Yes | Yes|
-|English (Australia) | `en-AU` | n/a | Yes | Yes|
-|English (Canada) | `en-CA` | n/a | Yes | Yes|
-|English (India) | `en-IN` | n/a | Yes | Yes|
-|English (UK) | `en-GB` | n/a | Yes | Yes|
-|French (Canada) | `fr-CA` | n/a | Yes | Yes|
-|French (France) | `fr-FR` | n/a | Yes | Yes|
-|German (Germany) | `de-DE` | n/a | Yes | Yes|
-|Italian | `it-IT` | n/a | Yes | Yes|
-|Japanese | `ja-JP` | n/a | Yes | Yes|
-|Portuguese (Brazil) | `pt-BR` | n/a | Yes | Yes|
-|Spanish (Mexico) | `es-MX` | n/a | Yes | Yes|
-|Spanish (Spain) | `es-ES` | n/a | Yes | Yes|
-## Custom keyword and keyword verification
+# [Intent Recognition](#tab/intent-recognizer-pattern-matcher)
-The following table outlines supported languages for custom keyword and keyword verification.
+The table in this section summarizes the locales supported for the Intent Recognizer Pattern Matcher.
-| Language | Locale (BCP-47) | Custom keyword | Keyword verification |
-| -- | | -- | -- |
-| Chinese (Mandarin, Simplified) | zh-CN | Yes | Yes |
-| English (United States) | en-US | Yes | Yes |
-| Japanese (Japan) | ja-JP | No | Yes |
-| Portuguese (Brazil) | pt-BR | No | Yes |
-## Intent Recognition Pattern Matcher
+***
-The Intent Recognizer Pattern Matcher supports the following locales:
+## Get locales via API and SDK
-| Language | Locale (BCP-47) |
-|--|--|
-| English (United States) | `en-US` |
-| Chinese (Cantonese, Traditional) | `zh-HK` |
-| Chinese (Mandarin, Simplified) | `zh-CN` |
+You can also get a list of locales and voices supported for each specific region or endpoint through the [Speech SDK](speech-sdk.md), [Speech-to-text REST API](rest-speech-to-text.md), [Speech-to-text REST API for short audio](rest-speech-to-text-short.md), and [Text-to-speech REST API](rest-text-to-speech.md#get-a-list-of-voices).
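For example, here's a short Python `requests` sketch against the `voices/list` endpoint that appears later in this digest; the region and key are placeholders.

```python
import requests

region = "westus3"  # any region that hosts your Speech resource
url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list"

response = requests.get(
    url, headers={"Ocp-Apim-Subscription-Key": "YourSubscriptionKey"})
response.raise_for_status()

# Each entry includes the voice's locale and short name, such as en-US-JennyNeural.
for voice in response.json()[:5]:
    print(voice["Locale"], voice["ShortName"])
```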
## Next steps
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md
These libraries are used to construct the HTTP request, and call the text-to-spe
### Get a list of supported voices
-The Long Audio API supports a subset of [Public Neural Voices](./language-support.md#prebuilt-neural-voices) and [Custom Neural Voices](./language-support.md#custom-neural-voice).
+The Long Audio API supports a subset of [Public Neural Voices](language-support.md?tabs=stt-tts) and [Custom Neural Voices](language-support.md?tabs=stt-tts).
To get a list of supported voices, send a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`.
Replace the following values:
* Replace `<your_key>` with your Speech service subscription key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
* Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
* Replace `<input_file_path>` with the path to the text file you've prepared for text-to-speech.
-* Replace `<locale>` with the desired output locale. For more information, see [language support](language-support.md#prebuilt-neural-voices).
+* Replace `<locale>` with the desired output locale. For more information, see [language support](language-support.md?tabs=stt-tts).
Use one of the voices returned by your previous call to the `/voices` endpoint.
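As a minimal sketch, the `/voices` GET above expressed with Python `requests`; the endpoint host and key are placeholders, and the `Ocp-Apim-Subscription-Key` header follows the usual Cognitive Services convention.

```python
import requests

# Placeholder: your Long Audio API endpoint host.
endpoint = "https://YOUR_ENDPOINT"
url = f"{endpoint}/api/texttospeech/v3.0/longaudiosynthesis/voices"

response = requests.get(
    url, headers={"Ocp-Apim-Subscription-Key": "YourSubscriptionKey"})
response.raise_for_status()
print(response.json())
```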
cognitive-services Migration Overview Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migration-overview-neural-voice.md
Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-s
## Prebuilt standard voice > [!IMPORTANT]
-> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md#prebuilt-neural-voices). After August 31, the standard voices won't be supported with any Speech resource.
+> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=stt-tts). After August 31, the standard voices won't be supported with any Speech resource.
Go to [this article](how-to-migrate-to-prebuilt-neural-voice.md) to learn how to migrate to prebuilt neural voice.
cognitive-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/multi-device-conversation.md
Whereas [Conversation Transcription](conversation-transcription.md) works on a s
## Key features
- **Real-time transcription:** Everyone will receive a transcript of the conversation, so they can follow along the text in real-time or save it for later.
-- **Real-time translation:** With more than 70 [supported languages](language-support.md#text-languages) for text translation, users can translate the conversation to their preferred language(s).
+- **Real-time translation:** With more than 70 [supported languages](language-support.md) for text translation, users can translate the conversation to their preferred languages.
- **Readable transcripts:** The transcription and translation are easy to follow, with punctuation and sentence breaks.
-- **Voice or text input:** Each user can speak or type on their own device, depending on the language support capabilities enabled for the participant's chosen language. Please refer to [Language support](language-support.md#speech-to-text).
-- **Message relay:** The multi-device conversation service will distribute messages sent by one client to all the others, in the language(s) of their choice.
+- **Voice or text input:** Each user can speak or type on their own device, depending on the language support capabilities enabled for the participant's chosen language. Please refer to [Language support](language-support.md).
+- **Message relay:** The multi-device conversation service will distribute messages sent by one client to all the others, in the languages of their choice.
- **Message identification:** Every message that users receive in the conversation will be tagged with the nickname of the user who sent it.
## Use cases
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
Follow these steps to assess your pronunciation of the reference text:
1. Go to **Pronunciation Assessment** in the [Speech Studio](https://aka.ms/speechstudio/pronunciationassessment).
-1. Choose a supported [language](language-support.md#pronunciation-assessment) that you want to evaluate the pronunciation.
+1. Choose a supported [language](language-support.md?tabs=pronunciation-assessment) that you want to evaluate the pronunciation.
1. Choose from the provisioned text samples, or under the **Enter your own script** label, enter your own reference text.
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
Custom Voice doesn't support automatic failover. Handle real-time synthesis fail
When custom voice real-time synthesis fails, fail over to a public voice (client sample code: [GitHub: custom voice failover to public voice](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L899)).
-Check the [public voices available](./language-support.md#prebuilt-neural-voices). You can also change the sample code above if you would like to fail over to a different voice or in a different region.
+Check the [public voices available](language-support.md?tabs=stt-tts). You can also change the sample code above if you would like to fail over to a different voice or in a different region.
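The linked sample is C#; as a rough Python equivalent, here's a hedged sketch of the same failover idea. The endpoint ID, custom voice name, key, and region are placeholders, not values from the sample.

```python
import azure.cognitiveservices.speech as speechsdk

def synthesize_with_failover(text):
    # Try the custom neural voice first. All values below are placeholders.
    config = speechsdk.SpeechConfig(
        subscription="YourSubscriptionKey", region="YourServiceRegion")
    config.endpoint_id = "YourCustomVoiceEndpointId"
    config.speech_synthesis_voice_name = "YourCustomVoiceName"

    synthesizer = speechsdk.SpeechSynthesizer(speech_config=config)
    result = synthesizer.speak_text_async(text).get()
    if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
        return result

    # Fall back to a public prebuilt neural voice.
    config = speechsdk.SpeechConfig(
        subscription="YourSubscriptionKey", region="YourServiceRegion")
    config.speech_synthesis_voice_name = "en-US-JennyNeural"
    return speechsdk.SpeechSynthesizer(speech_config=config).speak_text_async(text).get()
```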
**Option 2: Fail over to custom voice on another region.**
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
These parameters might be included in the query string of the REST request:
| Parameter | Description | Required or optional |
|--|--|--|
-| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md#speech-to-text). | Required |
+| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md?tabs=stt-tts). | Required |
| `format` | Specifies the result format. Accepted values are `simple` and `detailed`. Simple results include `RecognitionStatus`, `DisplayText`, `Offset`, and `Duration`. Detailed responses include four different representations of display text. The default setting is `simple`. | Optional |
| `profanity` | Specifies how to handle profanity in recognition results. Accepted values are: <br><br>`masked`, which replaces profanity with asterisks. <br>`removed`, which removes all profanity from the result. <br>`raw`, which includes profanity in the result. <br><br>The default setting is `masked`. | Optional |
| `cid` | When you're using the [Speech Studio](speech-studio-overview.md) to create [custom models](./custom-speech-overview.md), you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
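To make the query string concrete, here's a sketch with Python `requests`. The regional short-audio host and WAV content type match this REST API's conventions; the key, region, and file name are placeholders.

```python
import requests

region = "westus"  # placeholder region
url = (f"https://{region}.stt.speech.microsoft.com"
       "/speech/recognition/conversation/cognitiveservices/v1")

params = {"language": "en-US", "format": "detailed", "profanity": "masked"}
headers = {
    "Ocp-Apim-Subscription-Key": "YourSubscriptionKey",
    "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
}

# sample.wav is a placeholder: mono 16-bit, 16-kHz PCM audio.
with open("sample.wav", "rb") as audio:
    response = requests.post(url, params=params, headers=headers, data=audio)

print(response.json())
```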
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
The Speech service allows you to [convert text into synthesized speech](#convert
The text-to-speech REST API supports neural text-to-speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A subscription key for the endpoint or region that you plan to use is required. Here are links to more information:
-- For a complete list of voices, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
+- For a complete list of voices, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
- For information about regional availability, see [Speech service supported regions](regions.md#speech-service).
- For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
You can use the `voices/list` endpoint to get a full list of voices for a specif
| West US 3 | `https://westus3.tts.speech.microsoft.com/cognitiveservices/voices/list` |

> [!TIP]
-> [Voices in preview](language-support.md#prebuilt-neural-voices-in-preview) are available in only these three regions: East US, West Europe, and Southeast Asia.
+> [Voices in preview](language-support.md?tabs=stt-tts) are available in only these three regions: East US, West Europe, and Southeast Asia.
### Request headers
This table lists required and optional headers for text-to-speech requests:
### Request body
-If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8). Otherwise, the body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text-to-speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
+If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8). Otherwise, the body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text-to-speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
### Sample request
cognitive-services Speaker Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speaker-recognition-overview.md
As with all of the Cognitive Services resources, developers who use the speaker
|--|--|
| What situations am I most likely to use speaker recognition? | Good examples include call center customer verification, voice-based patient check-in, meeting transcription, and multi-user device personalization.|
| What's the difference between identification and verification? | Identification is the process of detecting which member from a group of speakers is speaking. Verification is the act of confirming that a speaker matches a known, *enrolled* voice.|
-| What languages are supported? | See [Speaker recognition language support](language-support.md#speaker-recognition). |
+| What languages are supported? | See [Speaker recognition language support](language-support.md?tabs=speaker-recognition). |
| What Azure regions are supported? | See [Speaker recognition region support](regions.md#speech-service).|
| What audio formats are supported? | Mono 16 bit, 16 kHz PCM-encoded WAV. |
| Can you enroll one speaker multiple times? | Yes, for text-dependent verification, you can enroll a speaker up to 50 times. For text-independent verification or speaker identification, you can enroll with up to 300 seconds of audio. |
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Container | Features | Latest | Release status |
|--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.4.0 | Generally available |
-| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.4.0 | Generally available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.5.0 | Generally available |
+| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.5.0 | Generally available |
| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.3.0 | Generally available |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.4.0 | Generally available |
## Prerequisites
The following tag is an example of the format:
For all the supported locales and corresponding voices of the neural text-to-speech container, see [Neural text-to-speech image tags](../containers/container-image-tags.md#neural-text-to-speech).

> [!IMPORTANT]
-> When you construct a neural text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container locale and voice, which is also known as the [short name](language-support.md#prebuilt-neural-voices). For example, the `latest` tag would have a voice name of `en-US-AriaNeural`.
+> When you construct a neural text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container [locale and voice](language-support.md?tabs=stt-tts). For example, the `latest` tag would have a voice name of `en-US-AriaNeural`.
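For instance, here's a hedged sketch of such a POST with Python `requests`. The local path `/speech/synthesize/cognitiveservices/v1` and port 5000 are assumptions about a typical container mapping; adjust them to your own `docker run` options.

```python
import requests

# Assumed local container endpoint; verify against your container's port mapping.
url = "http://localhost:5000/speech/synthesize/cognitiveservices/v1"

# The SSML must include a voice element whose name matches a voice in the container.
ssml = (
    '<speak version="1.0" xml:lang="en-US">'
    '<voice name="en-US-AriaNeural">Hello from the container.</voice>'
    "</speak>"
)

response = requests.post(
    url,
    data=ssml.encode("utf-8"),
    headers={
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm",
    },
)
with open("output.wav", "wb") as f:
    f.write(response.content)
```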
# [Speech language identification](#tab/lid)
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
In Speech Studio, the following Speech service features are available as project
* [Pronunciation assessment](https://aka.ms/speechstudio/pronunciationassessment): Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
-* [Voice Gallery](https://aka.ms/speechstudio/voicegallery): Build apps and services that speak naturally. Choose from a broad portfolio of [languages, voices, and variants](language-support.md#prebuilt-neural-voices). Bring your scenarios to life with highly expressive and human-like neural voices.
+* [Voice Gallery](https://aka.ms/speechstudio/voicegallery): Build apps and services that speak naturally. Choose from a broad portfolio of [languages, voices, and variants](language-support.md?tabs=stt-tts). Bring your scenarios to life with highly expressive and human-like neural voices.
* [Custom Voice](https://aka.ms/speechstudio/customvoice): Create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The Speech service implementation of SSML is based on the World Wide Web Consort
## Prebuilt neural voices and custom neural voices
-Use a humanlike neural voice or create your own custom neural voice unique to your product or brand. For a complete list of supported languages, locales, and voices, see [Language support](language-support.md). To learn more about using a prebuilt neural voice and a custom neural voice, see [Text-to-speech overview](text-to-speech.md).
+Use a humanlike neural voice or create your own custom neural voice unique to your product or brand. For a complete list of supported languages, locales, and voices, see [Language support](language-support.md?tabs=stt-tts). To learn more about using a prebuilt neural voice and a custom neural voice, see [Text-to-speech overview](text-to-speech.md).
> [!NOTE]
> You can hear voices in different styles and pitches reading example text by using this [text-to-speech website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
The `voice` element is required. It's used to specify the voice that's used for
| Attribute | Description | Required or optional |
| --- | --- | --- |
-| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
+| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md?tabs=stt-tts). | Required |
**Example**

> [!NOTE]
-> This example uses the `en-US-JennyNeural` voice. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech).
+> This example uses the `en-US-JennyNeural` voice. For a complete list of supported voices, see [Language support](language-support.md?tabs=stt-tts).
```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
```
Within the `speak` element, you can specify multiple voices for text-to-speech o
| Attribute | Description | Required or optional |
| --- | --- | --- |
-| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
+| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md?tabs=stt-tts). | Required |
**Example**
By default, text-to-speech synthesizes text by using a neutral speaking style fo
Styles, style degree, and roles are supported for a subset of neural voices. If a style or role isn't supported, the service uses the default neutral speech. To determine what styles and roles are supported for each voice, use:
-- The [Voice styles and roles](language-support.md#voice-styles-and-roles) table.
+- The [Voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles) table.
- The [Voice List API](rest-text-to-speech.md#get-a-list-of-voices).
- The code-free [Audio Content Creation](https://aka.ms/audiocontentcreation) portal.
Styles, style degree, and roles are supported for a subset of neural voices. If
You use the `mstts:express-as` element to express emotions like cheerfulness, empathy, and calm. You can also optimize the voice for different scenarios like customer service, newscast, and voice assistant.
+For a list of supported styles per neural voice, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
+
**Syntax**

```xml
The following table has descriptions of each supported style.
|Style|Description|
|--|--|
-|`style="advertisement-upbeat"`|Expresses an excited and high-energy tone for promoting a product or service.|
+|`style="advertisement_upbeat"`|Expresses an excited and high-energy tone for promoting a product or service.|
|`style="affectionate"`|Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The personality of the speaker is often endearing in nature.| |`style="angry"`|Expresses an angry and annoyed tone.| |`style="assistant"`|Expresses a warm and relaxed tone for digital assistants.|
The following table has descriptions of each supported style.
|`style="sad"`|Expresses a sorrowful tone.| |`style="serious"`|Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence.| |`style="shouting"`|Speaks like from a far distant or outside and to make self be clearly heard|
-|`style="sports-commentary"`|Expresses a relaxed and interesting tone for broadcasting a sports event.|
-|`style="sports-commentary-excited"`|Expresses an intensive and energetic tone for broadcasting exciting moments in a sports event.|
+|`style="sports_commentary"`|Expresses a relaxed and interesting tone for broadcasting a sports event.|
+|`style="sports_commentary_excited"`|Expresses an intensive and energetic tone for broadcasting exciting moments in a sports event.|
|`style="whispering"`|Speaks very softly and make a quiet and gentle sound| |`style="terrified"`|Expresses a very scared tone, with faster pace and a shakier voice. It sounds like the speaker is in an unsteady and frantic status.| |`style="unfriendly"`|Expresses a cold and indifferent tone.| ### Style degree
-The intensity of speaking style can be adjusted to better fit your use case. You specify a stronger or softer style with the `styledegree` attribute to make the speech more expressive or subdued. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.
+The intensity of speaking style can be adjusted to better fit your use case. You specify a stronger or softer style with the `styledegree` attribute to make the speech more expressive or subdued.
+
+For a list of neural voices that support speaking style degree, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
**Syntax**
This SSML snippet illustrates how the `styledegree` attribute is used to change
### Role
-Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice imitates a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices:
+Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice imitates a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed.
-* `zh-CN-XiaomoNeural`
-* `zh-CN-XiaoxuanNeural`
-* `zh-CN-YunxiNeural`
-* `zh-CN-YunyeNeural`
+For a list of supported roles per neural voice, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
**Syntax**
This SSML snippet illustrates how to request blend shapes with your synthesized
## Next steps
-[Language support: Voices, locales, languages](language-support.md)
+[Language support: Voices, locales, languages](language-support.md?tabs=stt-tts)
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
keywords: speech to text, speech to text software
In this overview, you learn about the benefits and capabilities of the speech-to-text feature of the Speech service, which is part of Azure Cognitive Services.
-Speech-to-text, also known as speech recognition, enables real-time or offline transcription of audio streams into text. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md#speech-to-text).
+Speech-to-text, also known as speech recognition, enables real-time or offline transcription of audio streams into text. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
> [!NOTE]
> Microsoft uses the same recognition technology for Cortana and Office products.
The Azure speech-to-text service analyzes audio in real-time or batch to transcr
The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry and domain-specific jargon. In these cases, building a custom speech model makes sense by training with additional data associated with that specific domain. You can create and train custom acoustic, language, and pronunciation models. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech-to-text REST API v3.0](rest-speech-to-text.md).
-Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md).
+Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt-tts).
## Next steps
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-translation.md
In this article, you learn about the benefits and capabilities of the speech tra
By using the Speech SDK or Speech CLI, you can give your applications, tools, and devices access to source transcriptions and translation outputs for the provided audio. Interim transcription and translation results are returned as speech is detected, and the final results can be converted into synthesized speech.
-For a list of languages supported for speech translation, see [Language and voice support](language-support.md#speech-translation).
+For a list of languages supported for speech translation, see [Language and voice support](language-support.md?tabs=speech-translation).
## Core features
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md
You can also save the synthesized output to a file. In this example, let's creat
spx synthesize --text "Enjoy using the Speech CLI." --audio output my-sample.wav ```
-These examples presume that you're testing in English. However, Speech service supports speech synthesis in many languages. You can pull down a full list of voices either by running the following command or by visiting the [language support page](./language-support.md).
+These examples presume that you're testing in English. However, Speech service supports speech synthesis in many languages. You can pull down a full list of voices either by running the following command or by visiting the [language support page](./language-support.md?tabs=stt-tts).
```console
spx synthesize --voices
spx translate --file /some/file/path/input.wav --source en-US --target ru-RU --o
```

> [!NOTE]
-> For a list of all supported languages and their corresponding locale codes, see [Language and voice support for the Speech service](language-support.md).
+> For a list of all supported languages and their corresponding locale codes, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
> [!TIP]
> If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help translate```.
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
keywords: text to speech
In this overview, you learn about the benefits and capabilities of the text-to-speech feature of the Speech service, which is part of Azure Cognitive Services.
-Text-to-speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text-to-speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand. For a full list of supported voices, languages, and locales, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
+Text-to-speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text-to-speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand. For a full list of supported voices, languages, and locales, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
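As a minimal sketch of the prebuilt-voice path with the Python Speech SDK (key and region are placeholders; `en-US-JennyNeural` is just one example voice from the support list):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion")
# Pick any prebuilt neural voice from the language support list.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

# With no audio config specified, output goes to the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Text-to-speech makes apps talk.").get()
print(result.reason)
```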
## Core features
The patterns of stress and intonation in spoken language are called _prosody_. T
Here's more information about neural text-to-speech features in the Speech service, and how they overcome the limits of traditional text-to-speech systems:
-* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech by using [prebuilt neural voices](language-support.md#text-to-speech) or [custom neural voices](custom-neural-voice.md).
+* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech by using [prebuilt neural voices](language-support.md?tabs=stt-tts) or [custom neural voices](custom-neural-voice.md).
* **Asynchronous synthesis of long audio**: Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
Here's more information about neural text-to-speech features in the Speech servi
- Convert digital texts such as e-books into audiobooks.
- Enhance in-car navigation systems.
- For a full list of platform neural voices, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
+ For a full list of platform neural voices, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
* **Fine-tuning text-to-speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language that's used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document.
Here's more information about neural text-to-speech features in the Speech servi
* **Visemes**: [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw, and tongue in producing a particular phoneme. Visemes have a strong correlation with voices and phonemes.
- By using viseme events in Speech SDK, you can generate facial animation data. This data can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently supported only for the `en-US` (US English) [neural voices](language-support.md#text-to-speech).
+ By using viseme events in Speech SDK, you can generate facial animation data. This data can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently supported only for the `en-US` (US English) [neural voices](language-support.md?tabs=stt-tts).
> [!NOTE]
> We plan to retire the traditional/standard voices and non-neural custom voice in 2024. After that, we'll no longer support them.
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
In this section, you'll learn how to change the language that your bot will list
### Change the language
-You can choose from any of the languages mentioned in the [speech-to-text](language-support.md#speech-to-text) table. The following example changes the language to German.
+You can choose from any of the languages mentioned in the [speech-to-text](language-support.md?tabs=stt-tts) table. The following example changes the language to German.
-1. Open the Windows Voice Assistant Client app, select the **Settings** button (upper-right gear icon), and enter **de-de** in the **Language** field. This is the locale value mentioned in the [speech-to-text](language-support.md#speech-to-text) table.
+1. Open the Windows Voice Assistant Client app, select the **Settings** button (upper-right gear icon), and enter **de-de** in the **Language** field. This is the locale value mentioned in the [speech-to-text](language-support.md?tabs=stt-tts) table.
This step sets the spoken language to be recognized, overriding the default **en-us**. It also instructs the Direct Line Speech channel to use a default German voice for the bot reply.
1. Close the **Settings** page, and then select the **Reconnect** button to establish a new connection to your echo bot.
You can choose from any of the languages mentioned in the [speech-to-text](langu
You can select the text-to-speech voice and control pronunciation if the bot specifies the reply in the form of a [Speech Synthesis Markup Language](speech-synthesis-markup.md) (SSML) instead of simple text. The echo bot doesn't use SSML, but you can easily modify the code to do that.
-The following example adds SSML to the echo bot reply so that the German voice Stefan Apollo (a male voice) is used instead of the default female voice. See the [list of standard voices](how-to-migrate-to-prebuilt-neural-voice.md) and [list of neural voices](language-support.md#prebuilt-neural-voices) that are supported for your language.
+The following example adds SSML to the echo bot reply so that the German voice Stefan Apollo (a male voice) is used instead of the default female voice. See the [list of standard voices](how-to-migrate-to-prebuilt-neural-voice.md) and [list of neural voices](language-support.md?tabs=stt-tts) that are supported for your language.
1. Open **samples\csharp_dotnetcore\02.echo-bot\echo-bot.cs**.
1. Find these lines:
cognitive-services Migrate To V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/migrate-to-v3.md
The following list of V2 and V3 methods identifies the V3 methods and APIs that
| `TranslateArray` | [Translate](reference/v3-0-translate.md) |
| `GetLanguageNames` | [Languages](reference/v3-0-languages.md) |
| `GetLanguagesForTranslate` | [Languages](reference/v3-0-languages.md) |
-| `GetLanguagesForSpeak` | [Microsoft Speech Service](../speech-service/language-support.md#text-to-speech) |
+| `GetLanguagesForSpeak` | [Microsoft Speech Service](../speech-service/language-support.md) |
| `Speak` | [Microsoft Speech Service](../speech-service/text-to-speech.md) |
| `Detect` | [Detect](reference/v3-0-detect.md) |
| `DetectArray` | [Detect](reference/v3-0-detect.md) |
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
The [Custom Speech-to-text][sp-cstt] container image can be found on the `mcr.mi
# [Latest version](#tab/current)
-Release note for `3.4.0-amd64`:
+Release note for `3.5.0-amd64`:
**Features**
* Security upgrade.
-* Support for latest model versions.
| Image Tags | Notes | Digest |
|--|:--|:--|
-| `latest` | | `sha256:d436d62c423d7a3062a2b13b7f774e4c50a32887c987ec5d4476224810c62086`|
-| `3.4.0-amd64` | | `sha256:d436d62c423d7a3062a2b13b7f774e4c50a32887c987ec5d4476224810c62086`|
+| `latest` | | `sha256:4900337eb93408064502dcaf2e5bdb16c0724ec6b4daacf140701f8d7e0e5061`|
+| `3.5.0-amd64` | | `sha256:4900337eb93408064502dcaf2e5bdb16c0724ec6b4daacf140701f8d7e0e5061`|
# [Previous version](#tab/previous)
+Release note for `3.4.0-amd64`:
+
+**Features**
+* Security upgrade.
+* Support for latest model versions.
Release note for `3.3.0-amd64`:

**Features**
The [Speech-to-text][sp-stt] container image can be found on the `mcr.microsoft.
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia* region. Please use the *US Government Virginia* billing endpoint and API keys when using this region.

# [Latest version](#tab/current)
+Release note for `3.5.0-amd64-<locale>`:
+
+**Features**
+* Security upgrade.
+* Support for latest model versions.
+* Support for the following new locales:
+ * en-gh
+ * en-ke
+ * en-tz
+ * fil-ph
+ * fr-ch
+ * id-id
+ * ms-my
+ * vi-vn
+
+| Image Tags | Notes |
+|-|:--|
+| `latest` | Container image with the `en-US` locale. |
+| `3.5.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.5.0-amd64-en-us`. |
+
+This container has the following locales available.
+
+| Locale for v3.5.0 | Notes | Digest |
+|--|:--|:--|
+| `ar-ae`| Container image with the `ar-AE` locale. | `sha256:6715a169278af58fe222f3ea92081311553f6b8f9f486b32d92c817d9af8ca80` |
+| `ar-bh`| Container image with the `ar-BH` locale. | `sha256:7a631229e0df797f3d87568274f6a5150fa7dc854ffda0c8db9a196fe1b74404` |
+| `ar-eg`| Container image with the `ar-EG` locale. | `sha256:66ab26df584088436a2a49b942c4850c89cce72f97a14627269379f6676312c7` |
+| `ar-iq`| Container image with the `ar-IQ` locale. | `sha256:990cf74bfd5f167a2cca5455a31de554d6e32b87e80f509eb18b748fd43b2503` |
+| `ar-jo`| Container image with the `ar-JO` locale. | `sha256:657f6ca98132b0d061d1d887f025e54bdf583a98151c482c07ee63cd8495864f` |
+| `ar-kw`| Container image with the `ar-KW` locale. | `sha256:b1963da1daca684a74998dc575839bdb695bc410ce3b2ceea935ce859de54b39` |
+| `ar-lb`| Container image with the `ar-LB` locale. | `sha256:bfcb0508079eab891a17bc92b7e3a2bc7d82ed8a91faa60176d53e0444f19077` |
+| `ar-om`| Container image with the `ar-OM` locale. | `sha256:138711172e324f94d3b25504ec4fa88db94cc36ecb6cae6a11af021c5028494c` |
+| `ar-qa`| Container image with the `ar-QA` locale. | `sha256:9062398762bbcbfef3a4a2c4df2913d2d2a337a889fbcec71b17c52ddf016bec` |
+| `ar-sa`| Container image with the `ar-SA` locale. | `sha256:4cb69c9389be71967719418110668ae5f6e14081c009ffa4b56b9b3050bf9a61` |
+| `ar-sy`| Container image with the `ar-SY` locale. | `sha256:1769e55848c6910af5bb89801cc16068c8f9a624a2fec1e78dc5dc2b17773c82` |
+| `bg-bg`| Container image with the `bg-BG` locale. | `sha256:28100a5da95aaa3de8028207fad3e9bb1e6bab7b2737bfb64424931386e5a714` |
+| `ca-es`| Container image with the `ca-ES` locale. | `sha256:2ab7c673493ffc2ceb2e70dcaca88e4e85ddfc4fbdccddcb2dcc73578145f442` |
+| `cs-cz`| Container image with the `cs-CZ` locale. | `sha256:831067911518fbc250ce1be8edcda82e9393da44c49b2b82a5f45ccfb03a72dc` |
+| `da-dk`| Container image with the `da-DK` locale. | `sha256:53d1edb90189a6af89284bb58c5e450ad8164678df1be935d4dab3ce561978a9` |
+| `de-at`| Container image with the `de-AT` locale. | `sha256:fb1114c717157bf94efd67a9a651294f9c2e89a22210fd00640e98869b9a57f1` |
+| `de-ch`| Container image with the `de-CH` locale. | `sha256:9ff4753cfea6fadd77d19d519323b5c728d15b8f644effaf438ff1858fb7501c` |
+| `de-de`| Container image with the `de-DE` locale. | `sha256:420eca6eb7799a742bcbdbfc1298ba01581e4e6352767dd6244835afa21ffb0b` |
+| `el-gr`| Container image with the `el-GR` locale. | `sha256:8f8b3280ea918f9e2b352f506ade83f6a99e0354190206ba8e3390860404b0b7` |
+| `en-au`| Container image with the `en-AU` locale. | `sha256:ebf84c0fa847fadfedf4420f2ca1ec5c543c80326fe73d4adc56b255f71be359` |
+| `en-ca`| Container image with the `en-CA` locale. | `sha256:f4d45cb010c1a82b01796a15a1644a7a147adb809ec2143015eb48cdc932e306` |
+| `en-gb`| Container image with the `en-GB` locale. | `sha256:f1a84faa5320d931a17f86d41540180956a186ab0007bd2d714547a4bf170a59` |
+| `en-gh`| Container image with the `en-GH` locale. | `sha256:e3a77906a145dd78d18d91a429208a896a6ebedd9ebfd97012b6160655ffcdfc` |
+| `en-hk`| Container image with the `en-HK` locale. | `sha256:0105c13d34be642fc7d1c7d77f275071833c79ddf5dbb86ee2aa2eb585b0a71a` |
+| `en-ie`| Container image with the `en-IE` locale. | `sha256:f89e5a9f361ed9a1ce493207cce0f71d155e26f4725cece78dfa09dd79930538` |
+| `en-in`| Container image with the `en-IN` locale. | `sha256:b11924603f332023cb2ca486096bf9a4f102c6545be9ccb6195d57f62049ebfc` |
+| `en-ke`| Container image with the `en-KE` locale. | `sha256:6b380e73e8b000aed488fa4f8a517f26af9174851c1822286305273ac9a4bce3` |
+| `en-nz`| Container image with the `en-NZ` locale. | `sha256:d8b0654f4bef0e05fd5a3eda2dcab9ee14f675df304aae2c65ca028f349696bf` |
+| `en-ph`| Container image with the `en-PH` locale. | `sha256:30715144797bde8a9b41833698e8637f2a0be48331c0e7a79388d3490666878d` |
+| `en-sg`| Container image with the `en-SG` locale. | `sha256:22fa13f4fe5941596a8b190afcb48ea5aad05b64b7cd68fe166d48ca92c35f65` |
+| `en-tz`| Container image with the `en-TZ` locale. | `sha256:c269b926fbb833a4c3ca95cdaf840214cb753de65eeb749961f96c47a75ee1fa` |
+| `en-us`| Container image with the `en-US` locale. | `sha256:4ff6f6eca433e5f0fbdc4f23a09cdfb7dc499e5abe3d5bd2aa1d1096317b9132` |
+| `en-za`| Container image with the `en-ZA` locale. | `sha256:d30359cc9e3f6e0bc7a98a46150ed637a3838e43f6c5f177d5a713b51c8870a9` |
+| `es-ar`| Container image with the `es-AR` locale. | `sha256:5ddd64e1e2c082facdefb7ac5bab281cd1c31785ed2add2a30f97d7a2cd9de71` |
+| `es-bo`| Container image with the `es-BO` locale. | `sha256:92afea9df00b93d2e7fc2ed1394f66c8fe7683698dea443417c4bc0f7206d214` |
+| `es-cl`| Container image with the `es-CL` locale. | `sha256:6c79087bc00aaa345c53242fa643c46ba44d7538b91f930983270ee5e5fc7934` |
+| `es-co`| Container image with the `es-CO` locale. | `sha256:a119569a403dc83b73a8d5975a7adfab2e636c1988807e46be488ecb4c27c3ae` |
+| `es-cr`| Container image with the `es-CR` locale. | `sha256:39b2392a6cc9b27914c19fe72a8280988a09ad2d9ac9cad08a2b3fb33a928fdc` |
+| `es-cu`| Container image with the `es-CU` locale. | `sha256:39620de6547da928201d30db0aefb165c6b5b5b52d52e47ac1ac2d18814e4c11` |
+| `es-do`| Container image with the `es-DO` locale. | `sha256:019e787910e8e7658d5fe38d67183f614bf2a5750d7d9c1474632be4f24cf5a8` |
+| `es-ec`| Container image with the `es-EC` locale. | `sha256:6caa85f8f212c365963e1de6a3ca07f3204e4a4eb112f861baa0e2e8b94a4487` |
+| `es-es`| Container image with the `es-ES` locale. | `sha256:411635408a73dd10ff942a36222f79749b3972701e84f31681d6d41eaf507845` |
+| `es-gt`| Container image with the `es-GT` locale. | `sha256:ac4485d0b954b8d88fff39dc5bfa02763fa486ba13a10333d170d29d3b21cb2f` |
+| `es-hn`| Container image with the `es-HN` locale. | `sha256:c8595a2e006087996073975a7d254f747ad5224d1da29fbb7dc2bacf3b7293b9` |
+| `es-mx`| Container image with the `es-MX` locale. | `sha256:d97ccd800b243b75adb91b6d1190e8cc5f45de899a5649fd5724793e731fdb86` |
+| `es-ni`| Container image with the `es-NI` locale. | `sha256:67d0797c5d3008a4f4ee79cb2f212bc6fb2ab65ff8f7f5ac5d3e10988d3585ed` |
+| `es-pa`| Container image with the `es-PA` locale. | `sha256:83e57f8e832aa26a16ec3f7a2f555ba555b3abd8e896d65eeeab6d55e94faed8` |
+| `es-pe`| Container image with the `es-PE` locale. | `sha256:0a901c13cb7a19cbd9ee128f5062846f5cef42db4219eccc6c23a04c2c6fe2ac` |
+| `es-pr`| Container image with the `es-PR` locale. | `sha256:52f50a59351b22da04e2c88fbd735374e6a1f4a39201a0e23fd968cee993f2d2` |
+| `es-py`| Container image with the `es-PY` locale. | `sha256:a378c9db2114857fad382e9b25037046c36696496fc89aa2b31fafa56a351ee3` |
+| `es-sv`| Container image with the `es-SV` locale. | `sha256:5d110d27e0f47ad2fa1290c19ae31825659f336bcfd8980d47d851bd3eafd2a6` |
+| `es-us`| Container image with the `es-US` locale. | `sha256:373be27cb5ba0e444862fe84ad8239d9e3a43669d721501d26833fa1f919c7cc` |
+| `es-uy`| Container image with the `es-UY` locale. | `sha256:5a301a9a07015c20f74e155dab1ee16806361cff6babe95526881a2f4d3b6f6a` |
+| `es-ve`| Container image with the `es-VE` locale. | `sha256:76656f8c456d8454fef9c75b8b1187ea32d9c6f6a047bcc600275d06f9e7ad85` |
+| `et-ee`| Container image with the `et-EE` locale. | `sha256:4fdf98bd01138f7cb9f3ae5cf526e2df0148b9addf9564ce0570ef613c329b64` |
+| `fi-fi`| Container image with the `fi-FI` locale. | `sha256:e3fe023fe2141c3a7e1b0a53a0e09db598becea68b075a2348f0b83e80b48973` |
+| `fil-ph`| Container image with the `fil-PH` locale. | `sha256:a47741d0d41621a1ee4f6f0351965011ba1f1fcb6a8a1464ea15b8d4c76ae3eb` |
+| `fr-ca`| Container image with the `fr-CA` locale. | `sha256:51b1020498b60bdfbaa1251aac673957267c597e83d688ee4db660497609f186` |
+| `fr-ch`| Container image with the `fr-CH` locale. | `sha256:1c8b680647854f645d90f88694b4df28880667c50452b4056a69ea266591a6a1` |
+| `fr-fr`| Container image with the `fr-FR` locale. | `sha256:cf84661a5667e29598f62f56454521ff498620d65d3adb28969322d73d395686` |
+| `ga-ie`| Container image with the `ga-IE` locale. | `sha256:52f4c6dd6a89abb98ab44e146c4777b2988e4f1ea559be2e32844b2013b7770b` |
+| `gu-in`| Container image with the `gu-IN` locale. | `sha256:d759bdd49d3afc19067133864129b3be0d5343386a92c6eb26351bc36a23f73d` |
+| `hi-in`| Container image with the `hi-IN` locale. | `sha256:f2480c678f0ad0251e751d73ac37c9be8641ab70f1af5c1551f2fdca936c4225` |
+| `hr-hr`| Container image with the `hr-HR` locale. | `sha256:64d446be688cac3308150570e539ec24029608955592ef4d7904b55a50d7c2a6` |
+| `hu-hu`| Container image with the `hu-HU` locale. | `sha256:ae579fecc38af9c50064d4baaeda6a6b7adb496888e0d4c54eff88d733d15243` |
+| `id-id`| Container image with the `id-ID` locale. | `sha256:2f35ecddadb93e942052a89ada5f02c2a75fb032e83d2c74b0c127459d3311b7` |
+| `it-it`| Container image with the `it-IT` locale. | `sha256:a0afbf373db5225a46940f1335d48586491bd83bcc2c9fa3910a1550c17a146d` |
+| `ja-jp`| Container image with the `ja-JP` locale. | `sha256:6d37861108fb7aaf5533468f1c081b2497228586c65ee5824e269cee62771712` |
+| `ko-kr`| Container image with the `ko-KR` locale. | `sha256:f95f0822586d45de8d18bdbfdcf931414f50d6f14feca862d4e4e1eb307fef2a` |
+| `lt-lt`| Container image with the `lt-LT` locale. | `sha256:d75a3e1e5b894cdb084cc87e320ff21c451e7880394981a050a77308f08babb9` |
+| `lv-lv`| Container image with the `lv-LV` locale. | `sha256:f7c65d3171d249c80a737508939561ee300dd4161d7766d4ae01653999a3dd97` |
+| `mr-in`| Container image with the `mr-IN` locale. | `sha256:ac46bcbf696c0f58daa5a0750bf14b7b2fbd1936cc88924decc8deac7f329e7b` |
+| `ms-my`| Container image with the `ms-MY` locale. | `sha256:b383b06ccf07e6828c07f52fa9b711c228ce56b96f108074868dd636be57100c` |
+| `mt-mt`| Container image with the `mt-MT` locale. | `sha256:89f2873dd6865011ce7ac5ffa03280623a44d8f8e619103135938d37d9e11136` |
+| `nb-no`| Container image with the `nb-NO` locale. | `sha256:9ce90954ad47c1f9bfaf01674ecf9adacaafe4398f205cf5282fbb8aa4469cba` |
+| `nl-nl`| Container image with the `nl-NL` locale. | `sha256:2726e66487e45a3955e29e6ca7158e7dd0611f6644d40c25c5de84610bfb81d2` |
+| `pl-pl`| Container image with the `pl-PL` locale. | `sha256:5aa1ec1e0ebca0c12f8ebe4041c572fa6893c8663bef19943f02c0d3945e3f74` |
+| `pt-br`| Container image with the `pt-BR` locale. | `sha256:fd2bdfb941787a86484734ba1e5cd77262c30e85456a070cef7f22853064b48f` |
+| `pt-pt`| Container image with the `pt-PT` locale. | `sha256:5415075fa104b8b92195c57b13a1d0ed003e90433fb24445c38ea67b012fad6c` |
+| `ro-ro`| Container image with the `ro-RO` locale. | `sha256:b70d8ce535fd49d6871594ef0bab53e0b55ecd70e2761a6bd27c5dc41febb8b4` |
+| `ru-ru`| Container image with the `ru-RU` locale. | `sha256:383aa99df457d493a8b873fd404508dcbd4b2c63e9e990a4f1ffdbe72c83b447` |
+| `sk-sk`| Container image with the `sk-SK` locale. | `sha256:f5e895abad6c26223193e767538fcc184acd4473d315e059f8b36a010c47d795` |
+| `sl-si`| Container image with the `sl-SI` locale. | `sha256:7d67fe1baf6de1e8940dc0fdd5b3e138549edcfea133a4f9111de3184c01d9f8` |
+| `sv-se`| Container image with the `sv-SE` locale. | `sha256:ceb99b9c95006a8485d9795067202eb326da474c039b96324bca5c7b71821e20` |
+| `ta-in`| Container image with the `ta-IN` locale. | `sha256:b0f768fcfe33ccb8ecf6cec2a409423aab0641c4a14e5cc63fe91d41dd509814` |
+| `te-in`| Container image with the `te-IN` locale. | `sha256:b7e13878addc40376d90dd624bd8b6f23969e4dbd22bf9520de928275bba9599` |
+| `th-th`| Container image with the `th-TH` locale. | `sha256:62987c56efbcd207c82b21bb269ed5fbf6db7a92017f25e790998b25ba969c69` |
+| `tr-tr`| Container image with the `tr-TR` locale. | `sha256:d191dd90bd5af9a579aacbe8f5e1bafcce02e702279e26086a73a48d494dae42` |
+| `uk-ua`| Container image with the `uk-UA` locale. | `sha256:7ce7f6c84574aeed2bbf5e6ea0e3f88d2f724998e696909c0cae787ba6e5f577` |
+| `vi-vn`| Container image with the `vi-VN` locale. | `sha256:f339d4ddfa7bad0efb5c6eb1eb8bcbd099edac267f36aafc6effb14d892526cb` |
+| `zh-cn`| Container image with the `zh-CN` locale. | `sha256:25973549ed030aba44c183928b19eee6b2c4e56d0e2ebcfdca24015783f6bbbf` |
+| `zh-hk`| Container image with the `zh-HK` locale. | `sha256:71104ab83fb6d750eecfc050fa705f7520b673c83d30c57b88f66d88d030f2f4` |
+| `zh-tw`| Container image with the `zh-TW` locale. | `sha256:2f5d720242f64354f769c26b58538bab40f2e860ca21a542b0c1b78a5c7e7419` |
+
+# [Previous version](#tab/previous)
Release note for `3.4.0-amd64-<locale>`:

**Features**
This container has the following locales available.
| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:6dfe201e06499af95957b27049dea66977168b9bf36fbf00e5ff8c948146cd24` |
| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:63fd6ea1dbef2656b3bdf45831d4d9015d8118d5ea146d55a0c2db2ca8c4883a` |
-# [Previous version](#tab/previous)
-
Release note for `3.3.0-amd64-<locale>`:

**Features**
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
-Release notes for `v2.3.0`:
+Release notes for `v2.4.0`:
**Features**
* Security upgrade.
Release notes for `v2.3.0`:
| Image Tags | Notes |
||:|
| `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
-| `2.3.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `2.3.0-amd64-en-us-arianeural`. |
+| `2.4.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `2.4.0-amd64-en-us-arianeural`. |
-| v2.3.0 Locales and voices | Notes |
+
+| v2.4.0 Locales and voices | Notes |
|-|:|
| `am-et-amehaneural`| Container image with the `am-ET` locale and `am-ET-amehaneural` voice.|
| `am-et-mekdesneural`| Container image with the `am-ET` locale and `am-ET-mekdesneural` voice.|
Release notes for `v2.3.0`:
# [Previous version](#tab/previous)
+Release notes for `v2.3.0`:
+
+**Features**
+* Security upgrade.
+
+| Image Tags | Notes |
+||:|
+| `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
+| `2.3.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `2.3.0-amd64-en-us-arianeural`. |
+
+| v2.3.0 Locales and voices | Notes |
+|-|:|
+| `am-et-amehaneural`| Container image with the `am-ET` locale and `am-ET-amehaneural` voice.|
+| `am-et-mekdesneural`| Container image with the `am-ET` locale and `am-ET-mekdesneural` voice.|
+| `ar-bh-lailaneural`| Container image with the `ar-BH` locale and `ar-BH-lailaneural` voice.|
+| `ar-eg-salmaneural`| Container image with the `ar-EG` locale and `ar-EG-salmaneural` voice.|
+| `ar-eg-shakirneural`| Container image with the `ar-EG` locale and `ar-EG-shakirneural` voice.|
+| `ar-sa-hamedneural`| Container image with the `ar-SA` locale and `ar-SA-hamedneural` voice.|
+| `ar-sa-zariyahneural`| Container image with the `ar-SA` locale and `ar-SA-zariyahneural` voice.|
+| `cs-cz-antoninneural`| Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice.|
+| `cs-cz-vlastaneural`| Container image with the `cs-CZ` locale and `cs-CZ-vlastaneural` voice.|
+| `de-ch-janneural`| Container image with the `de-CH` locale and `de-CH-janneural` voice.|
+| `de-ch-lenineural`| Container image with the `de-CH` locale and `de-CH-lenineural` voice.|
+| `de-de-conradneural`| Container image with the `de-DE` locale and `de-DE-conradneural` voice.|
+| `de-de-katjaneural`| Container image with the `de-DE` locale and `de-DE-katjaneural` voice.|
+| `en-au-natashaneural`| Container image with the `en-AU` locale and `en-AU-natashaneural` voice.|
+| `en-au-williamneural`| Container image with the `en-AU` locale and `en-AU-williamneural` voice.|
+| `en-ca-claraneural`| Container image with the `en-CA` locale and `en-CA-claraneural` voice.|
+| `en-ca-liamneural`| Container image with the `en-CA` locale and `en-CA-liamneural` voice.|
+| `en-gb-libbyneural`| Container image with the `en-GB` locale and `en-GB-libbyneural` voice.|
+| `en-gb-ryanneural`| Container image with the `en-GB` locale and `en-GB-ryanneural` voice.|
+| `en-gb-sonianeural`| Container image with the `en-GB` locale and `en-GB-sonianeural` voice.|
+| `en-us-arianeural`| Container image with the `en-US` locale and `en-US-arianeural` voice.|
+| `en-us-guyneural`| Container image with the `en-US` locale and `en-US-guyneural` voice.|
+| `en-us-jennyneural`| Container image with the `en-US` locale and `en-US-jennyneural` voice.|
+| `es-es-alvaroneural`| Container image with the `es-ES` locale and `es-ES-alvaroneural` voice.|
+| `es-es-elviraneural`| Container image with the `es-ES` locale and `es-ES-elviraneural` voice.|
+| `es-mx-dalianeural`| Container image with the `es-MX` locale and `es-MX-dalianeural` voice.|
+| `es-mx-jorgeneural`| Container image with the `es-MX` locale and `es-MX-jorgeneural` voice.|
+| `fr-ca-antoineneural`| Container image with the `fr-CA` locale and `fr-CA-antoineneural` voice.|
+| `fr-ca-jeanneural`| Container image with the `fr-CA` locale and `fr-CA-jeanneural` voice.|
+| `fr-ca-sylvieneural`| Container image with the `fr-CA` locale and `fr-CA-sylvieneural` voice.|
+| `fr-fr-deniseneural`| Container image with the `fr-FR` locale and `fr-FR-deniseneural` voice.|
+| `fr-fr-henrineural`| Container image with the `fr-FR` locale and `fr-FR-henrineural` voice.|
+| `hi-in-madhurneural`| Container image with the `hi-IN` locale and `hi-IN-madhurneural` voice.|
+| `hi-in-swaraneural`| Container image with the `hi-IN` locale and `hi-IN-swaraneural` voice.|
+| `it-it-diegoneural`| Container image with the `it-IT` locale and `it-IT-diegoneural` voice.|
+| `it-it-elsaneural`| Container image with the `it-IT` locale and `it-IT-elsaneural` voice.|
+| `it-it-isabellaneural`| Container image with the `it-IT` locale and `it-IT-isabellaneural` voice.|
+| `ja-jp-keitaneural`| Container image with the `ja-JP` locale and `ja-JP-keitaneural` voice.|
+| `ja-jp-nanamineural`| Container image with the `ja-JP` locale and `ja-JP-nanamineural` voice.|
+| `ko-kr-injoonneural`| Container image with the `ko-KR` locale and `ko-KR-injoonneural` voice.|
+| `ko-kr-sunhineural`| Container image with the `ko-KR` locale and `ko-KR-sunhineural` voice.|
+| `pt-br-antonioneural`| Container image with the `pt-BR` locale and `pt-BR-antonioneural` voice.|
+| `pt-br-franciscaneural`| Container image with the `pt-BR` locale and `pt-BR-franciscaneural` voice.|
+| `so-so-muuseneural`| Container image with the `so-SO` locale and `so-SO-muuseneural` voice.|
+| `so-so-ubaxneural`| Container image with the `so-SO` locale and `so-SO-ubaxneural` voice.|
+| `sv-se-hillevineural`| Container image with the `sv-SE` locale and `sv-SE-hillevineural` voice.|
+| `sv-se-mattiasneural`| Container image with the `sv-SE` locale and `sv-SE-mattiasneural` voice.|
+| `sv-se-sofieneural`| Container image with the `sv-SE` locale and `sv-SE-sofieneural` voice.|
+| `tr-tr-ahmetneural`| Container image with the `tr-TR` locale and `tr-TR-ahmetneural` voice.|
+| `tr-tr-emelneural`| Container image with the `tr-TR` locale and `tr-TR-emelneural` voice.|
+| `zh-cn-xiaochenneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaochenneural` voice.|
+| `zh-cn-xiaohanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaohanneural` voice.|
+| `zh-cn-xiaomoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaomoneural` voice.|
+| `zh-cn-xiaoqiuneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoqiuneural` voice.|
+| `zh-cn-xiaoruineural`| Container image with the `zh-CN` locale and `zh-CN-xiaoruineural` voice.|
+| `zh-cn-xiaoshuangneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoshuangneural` voice.|
+| `zh-cn-xiaoxiaoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxiaoneural` voice.|
+| `zh-cn-xiaoxuanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxuanneural` voice.|
+| `zh-cn-xiaoyanneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoyanneural` voice.|
+| `zh-cn-xiaoyouneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoyouneural` voice.|
+| `zh-cn-yunxineural`| Container image with the `zh-CN` locale and `zh-CN-yunxineural` voice.|
+| `zh-cn-yunyangneural`| Container image with the `zh-CN` locale and `zh-CN-yunyangneural` voice.|
+| `zh-cn-yunyeneural`| Container image with the `zh-CN` locale and `zh-CN-yunyeneural` voice.|
Release notes for `v2.2.0`:
cognitive-services Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/developer-guide.md
Previously updated : 06/30/2022 Last updated : 08/24/2022
As you use these features in your application, use the following documentation a
| [C# documentation](/dotnet/api/overview/azure/ai.language.questionanswering-readme-pre) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering) |
| [Python documentation](/python/api/overview/azure/ai-language-questionanswering-readme) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-questionanswering) |
+## Version support
+
+The namespaces mentioned here have the following framework/language version support:
+
+|Framework/Language | Minimum supported version |
+|||
+|.NET | .NET Framework 4.6.1 or newer, or .NET (formerly .NET Core) 2.0 or newer. |
+|Java | v8 or later |
+|JavaScript | v14 LTS or later |
+|Python| v3.7 or later |
# [REST API](#tab/rest-api)
As you use this API in your application, see the following reference documentati
* [Custom authoring API](/rest/api/cognitiveservices/questionanswering/question-answering-projects) - Create a knowledge base to answer questions.
* [Custom runtime API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) - Query a knowledge base to generate an answer.

## See also
cognitive-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/use-asynchronously.md
Previously updated : 08/08/2022 Last updated : 08/25/2022
Afterwards, use the client object to send asynchronous calls to the API. The met
When you use this feature asynchronously, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+## Automatic language detection
+
+Starting in version `2022-07-01-preview` of the REST API, you can request automatic [language detection](../language-detection/overview.md) on your documents. By setting the `language` parameter to `auto`, the detected language code of the text will be returned as a language value in the response. This language detection will not incur extra charges to your Language resource.
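As a sketch, a document entry in the request's `analysisInput` might be submitted like this (document ID and text are illustrative):

```json
{
  "analysisInput": {
    "documents": [
      {
        "id": "1",
        "language": "auto",
        "text": "Este documento está escrito en español."
      }
    ]
  }
}
```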
## Data limits

> [!NOTE]
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 07/28/2022 Last updated : 08/25/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## August 2022
+
+* [Role-based access control](./concepts/role-based-access-control.md) for the Language service.
## July 2022

* New AI models for [sentiment analysis](./sentiment-opinion-mining/overview.md) and [key phrase extraction](./key-phrase-extraction/overview.md) based on [z-code models](https://www.microsoft.com/research/project/project-zcode/), providing:
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* Conversational PII is now available in all Azure regions supported by the Language service.
-* A new version of the Language API (`2022-07-15-preview`) has been released. It provides:
- * Automatic language detection for asynchronous tasks.
+* A new version of the Language API (`2022-07-01-preview`) has been released. It provides:
+ * [Automatic language detection](./concepts/use-asynchronously.md#automatic-language-detection) for asynchronous tasks.
* For Text Analytics for health, confidence scores are now returned in relations. To use this version in your REST API calls, use the following URL:

```http
- <your-language-resource-endpoint>/language/:analyze-text?api-version=2022-07-15-preview`
+ <your-language-resource-endpoint>/language/:analyze-text?api-version=2022-07-01-preview
```

## June 2022
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-support.md
These Cognitive Services are language agnostic and don't have limitations based
## Speech
-* [Speech Service: Speech-to-Text](./speech-service/language-support.md#speech-to-text)
-* [Speech Service:Text-to-Speech](./speech-service/language-support.md#text-to-speech)
-* [Speech Service: Speech Translation](./speech-service/language-support.md#speech-translation)
+* [Speech Service: Speech-to-Text](./speech-service/language-support.md?tabs=stt-tts)
+* [Speech Service: Text-to-Speech](./speech-service/language-support.md?tabs=stt-tts)
+* [Speech Service: Speech Translation](./speech-service/language-support.md?tabs=speech-translation)
## Decision
cognitive-services Manage Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/manage-resources.md
To recover a deleted cognitive service resource, use the following commands. Whe
* `{resourceName}` with your resource name
* `{location}` with the location of your resource
-### Using the REST API
+
+# [Azure portal](#tab/azure-portal)
+
+If you need to recover a deleted resource, navigate to the hub of the Cognitive Services API type and select **Manage deleted resources** from the menu. For example, to recover an "Anomaly detector" resource, search for "Anomaly detector" in the search bar and select the service to get to the "Anomaly detector" hub, which lists deleted resources.
++
+Select the subscription in the dropdown list to locate the deleted resource you would like to recover. Select one or more of the deleted resources and click **Recover**.
++
+> [!NOTE]
+> It can take a couple of minutes for your deleted resource(s) to recover and show up in the list of the resources. Click on the **Refresh** button in the menu to update the list of resources.
+
+# [REST API](#tab/rest-api)
Use the following `PUT` command:
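One plausible shape for that request, assuming the account is re-created at its original path (verify the exact path and API version against the REST reference):

```http
PUT https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName}?Api-Version=2021-04-30
```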
In the request body, use the following JSON format; the `restore` property matches the Azure CLI example later in this section, and `{location}` is a placeholder for your resource's location:

```json
{
  "location": "{location}",
  "properties": {
    "restore": true
  }
}
```
-### Using PowerShell
+# [PowerShell](#tab/powershell)
Use the following command to restore the resource:
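As a sketch, mirroring the REST and Azure CLI examples in this section, the restore call might look like this (placeholders follow the conventions above; verify the parameters against the Az.Resources reference):

```powershell
# Sketch: re-create the soft-deleted account with the restore flag set.
# Placeholders ({location}, {subscriptionID}, ...) are as defined earlier in this article.
New-AzResource -Location {location} -Properties @{restore = $true} -ResourceId /subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName} -ApiVersion 2021-04-30
```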
If you need to find the name of your deleted resources, you can get a list of de
Get-AzResource -ResourceId /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/deletedAccounts -ApiVersion 2021-04-30 ```
-### Using the Azure CLI
+# [Azure CLI](#tab/azure-cli)
```azurecli-interactive
az resource create --subscription {subscriptionID} -g {resourceGroup} -n {resourceName} --location {location} --namespace Microsoft.CognitiveServices --resource-type accounts --properties "{\"restore\": true}"
```

## Purge a deleted resource

Once you delete a resource, you won't be able to create another one with the same name for 48 hours. To create a resource with the same name, you will need to purge the deleted resource.
To purge a deleted cognitive service resource, use the following commands. Where
> [!NOTE]
> Once a resource is purged, it is permanently deleted and cannot be restored. You will lose all data and keys associated with the resource.
-### Using the REST API
+
+# [Azure portal](#tab/azure-portal)
+
+If you need to purge a deleted resource, the steps are similar to recovering a deleted resource.
+
+Navigate to the hub of the Cognitive Services API type of your deleted resource. For example, if you would like to purge an "Anomaly detector" resource, search for "Anomaly detector" in the search bar. Select the service to get to the "Anomaly detector" hub, which lists deleted resources.
+
+Select **Manage deleted resources** from the menu.
++
+Select the subscription in the dropdown list to locate the deleted resource you would like to purge.
+Select one or more deleted resources and click **Purge**.
+Purging will permanently delete a Cognitive Services resource.
+++
+# [REST API](#tab/rest-api)
Use the following `DELETE` command:
Use the following `DELETE` command:
```http
https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}?Api-Version=2021-04-30
```
-### Using PowerShell
+# [PowerShell](#tab/powershell)
```powershell
Remove-AzResource -ResourceId /subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName} -ApiVersion 2021-04-30
```
-### Using the Azure CLI
+# [Azure CLI](#tab/azure-cli)
```azurecli-interactive
az resource delete --ids /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}
```

## See also

* [Create a new resource using the Azure portal](cognitive-services-apis-create-account.md)
* [Create a new resource using the Azure CLI](cognitive-services-apis-create-account-cli.md)
* [Create a new resource using the client library](cognitive-services-apis-create-account-client-library.md)
-* [Create a new resource using an ARM template](create-account-resource-manager-template.md)
+* [Create a new resource using an ARM template](create-account-resource-manager-template.md)
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
Personalizer does not prescribe, limit, or fix what features you can send for ac
* There must be at least one feature for the context. Personalizer does not support an empty context. If you only send a fixed context every time, Personalizer will choose the action for rankings based only on the features in the actions.
* For categorical features, you don't need to define the possible values, and you don't need to pre-define ranges for numerical values.
+Features are sent as part of the JSON payload in a [Rank API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) call. Each Rank call is associated with a personalization _event_. By default, Personalizer will automatically assign an event ID and return it in the Rank response. This default behavior is recommended for most users; however, if you need to create your own unique event ID (for example, using a GUID), you can provide it in the Rank call as an argument.
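A sketch of such a payload, with illustrative feature names and values (the `eventId` shown is a sample GUID; omit it to let Personalizer assign one):

```json
{
  "contextFeatures": [
    { "timeOfDay": "morning", "device": "mobile" }
  ],
  "actions": [
    {
      "id": "pasta",
      "features": [ { "cuisine": "italian", "price": 12 } ]
    },
    {
      "id": "sushi",
      "features": [ { "cuisine": "japanese", "price": 18 } ]
    }
  ],
  "eventId": "75269ad0-bfee-4598-8196-c57383d38e10"
}
```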
## Supported feature types

Personalizer supports features of string, numeric, and boolean types. It is very likely that your application will mostly use string features, with a few exceptions.
You can use several other [Azure Cognitive Services](https://www.microsoft.com/c
Each action:
-* Has an _event_ ID. If you already have an event ID, you should submit that. If you do not have an event ID, do not send one, Personalizer creates one for you and returns it in the response of the Rank request. The ID is associated with the Rank event, not the user. If you create an ID, a GUID works best.
* Has a list of features.
* The list of features can be large (hundreds) but we recommend evaluating feature effectiveness to remove features that aren't contributing to getting rewards.
* The features in the **actions** may or may not have any correlation with features in the **context** used by Personalizer.
cosmos-db Manage Data Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-python.md
Now go back to the Azure portal to get your connection string information and co
## Use the X509 certificate
-1. Copy the Baltimore CyberTrust Root certificate details from [https://baltimore-cybertrust-root.chain-demos.digicert.com/info/https://docsupdatetracker.net/index.html](https://baltimore-cybertrust-root.chain-demos.digicert.com/info/https://docsupdatetracker.net/index.html) into a text file. Save the file using the file extension *.cer*.
+1. Copy the Baltimore CyberTrust Root certificate details from [https://www.digicert.com/kb/digicert-root-certificates.htm](https://www.digicert.com/kb/digicert-root-certificates.htm) into a text file. Save the file using the file extension *.cer*.
The certificate has serial number `02:00:00:b9` and SHA1 fingerprint `d4:de:20:d0:5e:66:fc:53:fe:1a:50:88:2c:78:db:28:52:ca:e4:74`.
Now go back to the Azure portal to get your connection string information and co
In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Python app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.

> [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-processor.md
The change feed processor can be hosted in any platform that supports long runni
* A continuously running [Azure WebJob](/learn/modules/run-web-app-background-task-with-webjobs/).
* A process in an [Azure Virtual Machine](/azure/architecture/best-practices/background-jobs#azure-virtual-machines).
* A background job in [Azure Kubernetes Service](/azure/architecture/best-practices/background-jobs#azure-kubernetes-service).
+* A serverless function in [Azure Functions](/azure/architecture/best-practices/background-jobs#azure-functions).
* An [ASP.NET hosted service](/aspnet/core/fundamentals/host/hosted-services).

While the change feed processor can run in short-lived environments (because the lease container maintains the state), the startup cycle of these environments will add delay to receiving the notifications, due to the overhead of starting the processor every time the environment is started.
You can now proceed to learn more about change feed processor in the following a
* [Change feed pull model](change-feed-pull-model.md)
* [How to migrate from the change feed processor library](how-to-migrate-from-change-feed-library.md)
* [Using the change feed estimator](how-to-use-change-feed-estimator.md)
-* [Change feed processor start time](#starting-time)
+* [Change feed processor start time](#starting-time)
cosmos-db Create Sql Api Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-python.md
ms.devlang: python Previously updated : 08/26/2021 Last updated : 08/25/2022
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
* Without an active Azure subscription:
  * [Try Azure Cosmos DB for free](../try-free.md), a test environment that lasts for 30 days.
  * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
- [Python 2.7 or 3.6+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
+- [Python 3.7+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
- [Visual Studio Code](https://code.visualstudio.com/).
- The [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python#overview).
- [Git](https://www.git-scm.com/downloads).
The following snippets are all taken from the [cosmos_get_started.py](https://gi
5. Run the following command to install the azure-cosmos package.

    ```python
- pip install --pre azure-cosmos
+ pip install azure-cosmos aiohttp
    ```

    If you get an error about access being denied when attempting to install azure-cosmos, you'll need to [run VS Code as an administrator](https://stackoverflow.com/questions/37700536/visual-studio-code-terminal-how-to-run-a-command-with-administrator-rights).
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use i
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)

> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
+> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
cosmos-db Sql Api Sdk Java Spring V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spring-v3.md
The [Spring Framework](https://spring.io/projects/spring-framework) is a program
You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/).
-## Spring Boot support policy
+## Version Support Policy
-Azure Spring Data Cosmos supports multiple [Spring Boot Versions](https://aka.ms/spring/versions). For complete list of currently supported versions, please visit our [Spring Version Mapping](https://aka.ms/spring/versions).
+### Spring Boot Version Support
-Spring Boot releases are marked as "End of Life" when they are no longer supported or released in any form. If you are running an EOL version, you should upgrade as soon as possible.
+This project supports multiple Spring Boot Versions. Visit [spring boot support policy](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-boot-support-policy) for more information. Maven users can inherit from the `spring-boot-starter-parent` project to obtain a dependency management section to let Spring manage the versions for dependencies. Visit [spring boot version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-boot-version-support) for more information.
-Please note that a version can be out of support before it is marked as "End of Life." During this time you should only expect releases for critical bugs or security issues.
+### Spring Data Version Support
-For more information on Spring Boot supported versions, please visit [Spring Boot Supported Versions](https://github.com/spring-projects/spring-boot/wiki/Supported-Versions).
+This project supports different spring-data-commons versions. Visit [spring data version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-data-version-support) for more information.
+
+### Which Version of Azure Spring Data Cosmos Should I Use
+
+Azure Spring Data Cosmos library supports multiple versions of Spring Boot / Spring Cloud. Refer to [azure spring data cosmos version mapping](https://github.com/Azure/azure-sdk-for-jav#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Data Cosmos to use with Spring Boot / Spring Cloud version.
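As a sketch of the `spring-boot-starter-parent` inheritance mentioned above (both version numbers below are illustrative; take the real ones from the version-mapping links):

```xml
<!-- Versions are illustrative; consult the Spring version mapping for supported combinations. -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.7.2</version>
</parent>

<dependencies>
    <dependency>
        <groupId>com.azure</groupId>
        <artifactId>azure-spring-data-cosmos</artifactId>
        <version>3.23.0</version>
    </dependency>
</dependencies>
```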
> [!IMPORTANT]
> These release notes are for version 3 of Spring Data Azure Cosmos DB.
For more information on Spring Boot supported versions, please visit [Spring Boo
| Content | Link |
|||
| **Release notes** | [Release notes for Spring Data Cosmos SDK v3](https://github.com/Azure/azure-sdk-for-jav) |
+| **SDK Documentation** | [Azure Spring Data Cosmos SDK v3 documentation](https://github.com/Azure/azure-sdk-for-jav) |
| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos) |
| **API documentation** | [Java API reference documentation](/java/api/overview/azure/spring-data-cosmos-readme?view=azure-java-stable&preserve-view=true) |
| **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-spring-data-cosmos) |
For more information on Spring Boot supported versions, please visit [Spring Boo
| **Troubleshooting** | [Troubleshoot Java SDK v4 (applicable to Spring Data)](troubleshoot-java-sdk-v4-sql.md) |
| **Azure Cosmos DB workshops and labs** | [Cosmos DB workshops home page](https://aka.ms/cosmosworkshop) |
-> [!IMPORTANT]
-> * The 3.5.0 release supports Spring Boot 2.4.3 and above.
## Release history

Release history is maintained in the azure-sdk-for-java repo. For a detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-jav).
It's strongly recommended to use version 3.22.0 and above.
## Additional notes

* Spring Data Azure Cosmos DB supports Java JDK 8 and Java JDK 11.
-* Spring Data 2.3 is currently supported, Spring Data 2.4 is not supported currently.
## FAQ
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-global-parameters.md
$dataFactory.GlobalParameters = $newGlobalParameters
Write-Host "Updating" $newGlobalParameters.Count "global parameters."
-Set-AzDataFactoryV2 -InputObject $dataFactory -Force
+Set-AzDataFactoryV2 -InputObject $dataFactory -Force -PublicNetworkAccess $dataFactory.PublicNetworkAccess
```

## Next steps
data-factory Connector Appfigures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-appfigures.md
+
+ Title: Transform data in AppFigures (Preview)
+
+description: Learn how to transform data in AppFigures (Preview) by using Data Factory or Azure Synapse Analytics.
+Last updated : 08/16/2022
+# Transform data in AppFigures (Preview) using Azure Data Factory or Synapse Analytics
++
+This article outlines how to use Data Flow to transform data in AppFigures (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+
+## Supported capabilities
+
+This AppFigures connector is supported for the following capabilities:
+
+| Supported capabilities|IR |
+|| --|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
+
+## Create an AppFigures linked service using UI
+
+Use the following steps to create an AppFigures linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory U I.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse U I.":::
+
+2. Search for AppFigures (Preview) and select the AppFigures (Preview) connector.
+
+ :::image type="content" source="media/connector-appfigures/appfigures-connector.png" alt-text="Screenshot showing selecting AppFigures connector.":::
+
+3. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-appfigures/configure-appfigures-linked-service.png" alt-text="Screenshot of configuration for AppFigures linked service.":::
+
+## Connector configuration details
+
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to AppFigures.
+
+## Linked service properties
+
+The following properties are supported for the AppFigures linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **AppFigures**. |Yes |
+| userName | Specify a user name for AppFigures. |Yes |
+| password | Specify a password for AppFigures. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+| clientKey | Specify a client key for AppFigures. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "AppFiguresLinkedService",
+ "properties": {
+ "type": "AppFigures",
+ "typeProperties": {
+ "userName": "<username>",
+ "password": "<password>",
+ "clientKey": "<client key>"
+ }
+ }
+}
+```
+
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read tables from AppFigures. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
+
+### Source transformation
+
+The table below lists the properties supported by an AppFigures source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Entity type | The type of the entity in AppFigures. | Yes | `products`<br>`ads`<br>`sales` | *(for inline dataset only)*<br>entityType |
++
+#### AppFigures source script examples
+
+When you use AppFigures as source type, the associated data flow script is:
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'appfigures',
+ format: 'rest',
+ entityType: 'products') ~> AppFiguresSource
+```
+
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 06/23/2022 Last updated : 08/23/2022
data-factory Data Flow Expressions Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expressions-usage.md
___
### <code>currentDate</code>
<code><b>currentDate([<i>&lt;value1&gt;</i> : string]) => date</b></code><br/><br/>
-Gets the current date when this job starts to run. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. [https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html).
+Gets the current date when this job starts to run. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone of the data factory's data center/region is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. [https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html).
* ``currentDate() == toDate('2250-12-31') -> false``
* ``currentDate('PST') == toDate('2250-12-31') -> false``
* ``currentDate('America/New_York') == toDate('2250-12-31') -> false``
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Previously updated : 08/04/2022 Last updated : 08/23/2022

# Source transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| Connector | Format | Dataset/inline |
| --- | --- | --- |
|[Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓ |
+|[Appfigures (Preview)](connector-appfigures.md#mapping-data-flow-properties) | | -/✓ |
|[Asana (Preview)](connector-asana.md#mapping-data-flow-properties) | | -/✓ |
|[Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓ |
| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md#mapping-data-flow-properties) | | ✓/- |
data-factory Tutorial Copy Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal.md
In this tutorial, you start with creating the pipeline. Then you create linked s
1. On the home page, select **Orchestrate**.
- :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
+ :::image type="content" source="./media/tutorial-data-flow/orchestrate.png" alt-text="Screenshot that shows the ADF home page.":::
1. In the General panel under **Properties**, specify **CopyPipeline** for **Name**. Then collapse the panel by clicking the Properties icon in the top-right corner.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
You can create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device by using the Azure portal, templates, and Azure PowerShell cmdlets, and via the Azure CLI or Python scripts. This article describes how to create and manage a VM on your Azure Stack Edge Pro GPU device by using the Azure portal.

> [!IMPORTANT]
-> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication click [here](/azure/active-directory/authentication/tutorial-enable-azure-mfa.md)
+> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication, click [here](/articles/active-directory/authentication/howto-mfa-userdevicesettings.md).
## VM deployment workflow
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
When you move a machine from one group to another, the application control polic
To manage your adaptive application controls programmatically, use our REST API.
-The relevant API documentation is available in [the Adaptive application Controls section of Defender for Cloud's API docs](/rest/api/securitycenter/adaptiveapplicationcontrols).
+The relevant API documentation is available in [the Adaptive application Controls section of Defender for Cloud's API docs](/rest/api/defenderforcloud/adaptiveapplicationcontrols).
Some of the functions that are available from the REST API:
defender-for-cloud Alerts Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-schemas.md
You can view these security alerts in Microsoft Defender for Cloud's pages - [ov
- [Microsoft Sentinel](../sentinel/index.yml) - Microsoft's cloud-native SIEM. The Sentinel Connector gets alerts from Microsoft Defender for Cloud and sends them to the [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) for Microsoft Sentinel.
- Third-party SIEMs - Send data to [Azure Event Hubs](../event-hubs/index.yml). Then integrate your Event Hub data with a third-party SIEM. Learn more in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
-- [The REST API](/rest/api/securitycenter/) - If you're using the REST API to access alerts, see the [online Alerts API documentation](/rest/api/securitycenter/alerts).
+- [The REST API](/rest/api/defenderforcloud/) - If you're using the REST API to access alerts, see the [online Alerts API documentation](/rest/api/defenderforcloud/alerts).
If you're using any programmatic methods to consume the alerts, you'll need the correct schema to find the fields that are relevant to you. Also, if you're exporting to an Event Hub or trying to trigger Workflow Automation with generic HTTP connectors, use the schemas to properly parse the JSON objects.
For the alerts schema when using workflow automation, see the [connectors docume
Defender for Cloud's continuous export feature passes alert data to:

-- Azure Event Hub using the same schema as [the alerts API](/rest/api/securitycenter/alerts).
+- Azure Event Hub using the same schema as [the alerts API](/rest/api/defenderforcloud/alerts).
- Log Analytics workspaces according to the [SecurityAlert schema](/azure/azure-monitor/reference/tables/SecurityAlert) in the Azure Monitor data reference documentation.

### [MS Graph API](#tab/schema-graphapi)
defender-for-cloud Alerts Suppression Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md
The relevant HTTP methods for suppression rules in the REST API are:
- **DELETE**: Deletes an existing rule (but doesn't change the status of alerts already dismissed by it).
-For full details and usage examples, see the [API documentation](/rest/api/securitycenter/).
+For full details and usage examples, see the [API documentation](/rest/api/defenderforcloud/).
## Next steps
defender-for-cloud Configure Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-email-notifications.md
You can send email notifications to individuals or to all users with specific Az
1. To apply the security contact information to your subscription, select **Save**.

## Customize the alerts email notifications through the API
-You can also manage your email notifications through the supplied REST API. For full details see the [SecurityContacts API documentation](/rest/api/securitycenter/securitycontacts).
+You can also manage your email notifications through the supplied REST API. For full details see the [SecurityContacts API documentation](/rest/api/defenderforcloud/securitycontacts).
This is an example request body for the PUT request when creating a security contact configuration:
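A sketch of such a body (field names and values are illustrative assumptions based on the `securityContacts` schema; check the linked API documentation for the exact contract):

```json
{
  "properties": {
    "emails": "admin@contoso.com;ops@contoso.com",
    "notificationsByRole": {
      "state": "On",
      "roles": [ "Owner" ]
    },
    "alertNotifications": {
      "state": "On",
      "minimalSeverity": "High"
    }
  }
}
```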
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
The steps below are necessary whether you're setting up a continuous export to L
### Configure continuous export using the REST API
-Continuous export can be configured and managed via the Microsoft Defender for Cloud [automations API](/rest/api/securitycenter/automations). Use this API to create or update rules for exporting to any of the following possible destinations:
+Continuous export can be configured and managed via the Microsoft Defender for Cloud [automations API](/rest/api/defenderforcloud/automations). Use this API to create or update rules for exporting to any of the following possible destinations:
- Azure Event Hub
- Log Analytics workspace
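For instance, a minimal automation resource that streams alerts to an event hub might look roughly like the following sketch (the field names are assumptions based on the `Microsoft.Security/automations` resource type; verify against the automations API reference linked above):

```json
{
  "location": "{location}",
  "properties": {
    "isEnabled": true,
    "scopes": [
      { "scopePath": "/subscriptions/{subscriptionId}" }
    ],
    "sources": [
      { "eventSource": "Alerts" }
    ],
    "actions": [
      {
        "actionType": "EventHub",
        "eventHubResourceId": "{eventHubResourceId}",
        "connectionString": "{eventHubConnectionString}"
      }
    ]
  }
}
```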
Here are some examples of options that you can only use in the API:
> [!TIP]
> These API-only options are not shown in the Azure portal. If you use them, there'll be a banner informing you that other configurations exist.
-Learn more about the automations API in the [REST API documentation](/rest/api/securitycenter/automations).
+Learn more about the automations API in the [REST API documentation](/rest/api/defenderforcloud/automations).
### [**Deploy at scale with Azure Policy**](#tab/azure-policy)
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
Below is an example of a custom policy including the metadata/securityCenter pro
} ```
-For another example of using the securityCenter property, see [this section of the REST API documentation](/rest/api/securitycenter/assessmentsmetadata/createinsubscription#examples).
+For another example of using the securityCenter property, see [this section of the REST API documentation](/rest/api/defenderforcloud/assessmentsmetadata/createinsubscription#examples).
## Next steps
defender-for-cloud Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security.md
Customers can access Defender for Cloud related data from the following data str
| [Azure Activity log](../azure-monitor/essentials/activity-log.md) | All security alerts, approved Defender for Cloud [just-in-time](just-in-time-access-usage.md) access requests, and all alerts generated by [adaptive application controls](adaptive-application-controls.md). |
| [Azure Monitor logs](../azure-monitor/data-platform.md) | All security alerts. |
| [Azure Resource Graph](../governance/resource-graph/overview.md) | Security alerts, security recommendations, vulnerability assessment results, secure score information, status of compliance checks, and more. |
-| [Microsoft Defender for Cloud REST API](/rest/api/securitycenter/) | Security alerts, security recommendations, and more. |
+| [Microsoft Defender for Cloud REST API](/rest/api/defenderforcloud/) | Security alerts, security recommendations, and more. |
## Next steps
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
Defender for Cloud pulls the image from the registry and runs it in an isolated
Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.

### Can I get the scan results via REST API?
-Yes. The results are under [Sub-Assessments REST API](/rest/api/securitycenter/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
+Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
### What registry types are scanned? What types are billed?

For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](#availability).
defender-for-cloud Defender For Containers Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-cicd.md
This page explains how to scan your Azure Container Registry-based container images with the integrated vulnerability scanner when they're built as part of your GitHub workflows.
-To set up the scanner, you'll need to enable **Microsoft Defender for container registries** and the CI/CD integration. When your CI/CD workflows push images to your registries, you can view registry scan results and a summary of CI/CD scan results.
+To set up the scanner, you'll need to enable **Microsoft Defender for Containers** and the CI/CD integration. When your CI/CD workflows push images to your registries, you can view registry scan results and a summary of CI/CD scan results.
The findings of the CI/CD scans are an enrichment to the existing registry scan findings by Qualys. Defender for Cloud's CI/CD scanning is powered by [Aqua Trivy](https://github.com/aquasecurity/trivy).
defender-for-cloud Defender For Containers Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-usage.md
Defender for Cloud filters and classifies findings from the scanner. When an ima
### Can I get the scan results via REST API?
-Yes. The results are under [Sub-Assessments REST API](/rest/api/securitycenter/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
+Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
### What registry types are scanned? What types are billed?
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
You can use any of the following ways to enable enhanced security for your subsc
| Method | Instructions |
|-|-|
| Defender for Cloud pages of the Azure portal | [Enable enhanced protections](enable-enhanced-security.md) |
-| REST API | [Pricings API](/rest/api/securitycenter/pricings) |
+| REST API | [Pricings API](/rest/api/defenderforcloud/pricings) |
| Azure CLI | [az security pricing](/cli/azure/security/pricing) |
| PowerShell | [Set-AzSecurityPricing](/powershell/module/az.security/set-azsecuritypricing) |
| Azure Policy | [Bundle Pricings](https://github.com/Azure/Azure-Security-Center/blob/master/Pricing%20%26%20Settings/ARM%20Templates/Set-ASC-Bundle-Pricing.json) |
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
If you've never enabled the integration for Windows, the **Allow Microsoft Defen
### Enable the MDE unified solution at scale
-You can also enable the MDE unified solution at scale through the supplied REST API version 2022-05-01. For full details see the [API documentation](/rest/api/securitycenter/settings/update?tabs=HTTP).
+You can also enable the MDE unified solution at scale through the supplied REST API version 2022-05-01. For full details see the [API documentation](/rest/api/defenderforcloud/settings/update?tabs=HTTP).
This is an example request body for the PUT request to enable the MDE unified solution:
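A sketch of such a body for the `WDATP_UNIFIED_SOLUTION` setting (the `kind` and property names are assumptions drawn from the settings API; verify against the linked reference):

```json
{
  "kind": "DataExportSettings",
  "properties": {
    "enabled": true
  }
}
```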
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
The following PowerShell commands create this JIT configuration:
The just-in-time VM access feature can be used via the Microsoft Defender for Cloud API. Use this API to get information about configured VMs, add new ones, request access to a VM, and more.
-Learn more at [JIT network access policies](/rest/api/securitycenter/jitnetworkaccesspolicies).
+Learn more at [JIT network access policies](/rest/api/defenderforcloud/jitnetworkaccesspolicies).
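For example, requesting access to a VM goes through the policy's `initiate` operation; a sketch follows (resource paths, port, and end time are placeholders, and the exact schema should be checked against the linked reference):

```http
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Security/locations/{location}/jitNetworkAccessPolicies/{policyName}/initiate?api-version=2020-01-01

{
  "virtualMachines": [
    {
      "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/virtualMachines/{vmName}",
      "ports": [
        {
          "number": 22,
          "allowedSourceAddressPrefix": "*",
          "endTimeUtc": "2022-08-26T18:00:00Z"
        }
      ]
    }
  ]
}
```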
Learn more in the [PowerShell cmdlet documentation](/powershell/scripting/develo
The just-in-time VM access feature can be used via the Microsoft Defender for Cloud API. Use this API to get information about configured VMs, add new ones, request access to a VM, and more.
-Learn more at [JIT network access policies](/rest/api/securitycenter/jitnetworkaccesspolicies).
+Learn more at [JIT network access policies](/rest/api/defenderforcloud/jitnetworkaccesspolicies).
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
To see which accounts don't have MFA enabled, use the following Azure Resource G
> The accounts are shown as object IDs rather than account names to protect the privacy of the account holders.

> [!TIP]
-> Alternatively, you can use the Defender for Cloud REST API method [Assessments - Get](/rest/api/securitycenter/assessments/get).
+> Alternatively, you can use the Defender for Cloud REST API method [Assessments - Get](/rest/api/defenderforcloud/assessments/get).
## FAQ - MFA in Defender for Cloud
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
You can learn more by watching this video from the Defender for Cloud in the Fie
|Aspect|Details| |-|:-| |Release state:|General Availability (GA)|
-|Pricing:|The **CSPM plan** is free.<br>The **[Defender for SQL](defender-for-sql-introduction.md)** plan is billed at the same price as Azure resources.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure with [Azure Arc-enabled servers](../azure-arc/servers/overview.md), the **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If an AWS EC2 doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
+|Pricing:|The **CSPM plan** is free.<br>The **[Defender for SQL](defender-for-sql-introduction.md)** plan is billed at the same price as Azure resources.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure, the **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines.<br>Learn more about [Defender plan pricing and billing](enhanced-security-features-overview.md#faqpricing-and-billing)|
|Required roles and permissions:|**Contributor** permission for the relevant Azure subscription. <br> **Administrator** on the AWS account.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
Connecting your AWS account is part of the multicloud experience available in Mi
- [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
- [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
-- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)
+- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To view all the active recommendations for your resources by resource type, use
## FAQ - Connecting GCP projects to Microsoft Defender for Cloud

### Is there an API for connecting my GCP resources to Defender for Cloud?
-Yes. To create, edit, or delete Defender for Cloud cloud connectors with a REST API, see the details of the [Connectors API](/rest/api/securitycenter/security-connectors).
+Yes. To create, edit, or delete Defender for Cloud cloud connectors with a REST API, see the details of the [Connectors API](/rest/api/defenderforcloud/security-connectors).
## Next steps
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Learn how Security Center can protect your containerized environments in [Contai
### Assessments API expanded with two new fields
-We've added the following two fields to the [Assessments REST API](/rest/api/securitycenter/assessments):
+We've added the following two fields to the [Assessments REST API](/rest/api/defenderforcloud/assessments):
- **FirstEvaluationDate** – The time that the recommendation was created and first evaluated. Returned as UTC time in ISO 8601 format.
- **StatusChangeDate** – The time that the status of the recommendation last changed. Returned as UTC time in ISO 8601 format.
To access this information, you can use any of the methods in the table below.
| Continuous export | The two dedicated fields will be available in the Log Analytics workspace data |
| [CSV export](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations) | The two fields are included in the CSV files |
-Learn more about the [Assessments REST API](/rest/api/securitycenter/assessments).
+Learn more about the [Assessments REST API](/rest/api/defenderforcloud/assessments).
### Asset inventory gets a cloud environment filter
Learn more about [secure score and security controls in Azure Security Center](s
### Secure score API is released for general availability (GA)
-You can now access your score via the [secure score API](/rest/api/securitycenter/securescores/). The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example:
+You can now access your score via the [secure score API](/rest/api/defenderforcloud/securescores/). The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example:
- use the **Secure Scores** API to get the score for a specific subscription
- use the **Secure Score Controls** API to list the security controls and the current score of your subscriptions
Updates in June include:
### Secure score API (preview)
-You can now access your score via the [secure score API](/rest/api/securitycenter/securescores/) (currently in preview). The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the **Secure Scores** API to get the score for a specific subscription. In addition, you can use the **Secure Score Controls** API to list the security controls and the current score of your subscriptions.
+You can now access your score via the [secure score API](/rest/api/defenderforcloud/securescores/) (currently in preview). The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the **Secure Scores** API to get the score for a specific subscription. In addition, you can use the **Secure Score Controls** API to list the security controls and the current score of your subscriptions.
For examples of external tools made possible with the secure score API, see [the secure score area of our GitHub community](https://github.com/Azure/Azure-Security-Center/tree/master/Secure%20Score).
defender-for-cloud Secure Score Access And Track https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-access-and-track.md
To recap, your secure score is shown in the following locations in Defender for
## Get your secure score from the REST API
-You can access your score via the secure score API. The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the [Secure Scores API](/rest/api/securitycenter/securescores) to get the score for a specific subscription. In addition, you can use the [Secure Score Controls API](/rest/api/securitycenter/securescorecontrols) to list the security controls and the current score of your subscriptions.
+You can access your score via the secure score API. The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the [Secure Scores API](/rest/api/defenderforcloud/securescores) to get the score for a specific subscription. In addition, you can use the [Secure Score Controls API](/rest/api/defenderforcloud/securescorecontrols) to list the security controls and the current score of your subscriptions.
![Retrieving a single secure score via the API.](media/secure-score-security-controls/single-secure-score-via-api.png)
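As a sketch, retrieving the built-in secure score for a subscription might look like the following (`ascScore` is the name of the built-in initiative's score; verify the `api-version` against the API reference):

```rest
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/secureScores/ascScore?api-version=2020-01-01
```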
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
The following APIs are set to be deprecated:
- Security Statuses - Security Summaries
-These three APIs exposed old formats of assessments and will be replaced by the [Assessments APIs](/rest/api/securitycenter/assessments) and [SubAssessments APIs](/rest/api/securitycenter/sub-assessments). All data that is exposed by these legacy APIs will also be available in the new APIs.
+These three APIs exposed old formats of assessments and will be replaced by the [Assessments APIs](/rest/api/defenderforcloud/assessments) and [SubAssessments APIs](/rest/api/defenderforcloud/sub-assessments). All data that is exposed by these legacy APIs will also be available in the new APIs.
## Next steps
event-grid Blob Event Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/blob-event-quickstart-portal.md
Before subscribing to the events for the Blob storage, let's create the endpoint
![Navigate to web site.](./media/blob-event-quickstart-portal/web-site.png)
-6. Confirm that you see the site but no events have been posted to it yet.
+6. Confirm that you see the site but no events have been posted to it yet.
![View new site.](./media/blob-event-quickstart-portal/view-site.png)
+ > [!IMPORTANT]
+ > Keep the Azure Event Grid Viewer window open so that you can see events as they are posted.
+ [!INCLUDE [event-grid-register-provider-portal.md](../../includes/event-grid-register-provider-portal.md)] ## Subscribe to the Blob storage
Now, let's trigger an event to see how Event Grid distributes the message to you
You trigger an event for the Blob storage by uploading a file. The file doesn't need any specific content. This article assumes you have a file named testfile.txt, but you can use any file. 1. In the Azure portal, navigate to your Blob storage account, and select **Containers** on the left menu.
-1. Select **+ Container**. Give you container a name, and use any access level, and select **Create**.
+1. Select **+ Container**. Give your container a name, choose any access level, and then select **Create**.
![Add container.](./media/blob-event-quickstart-portal/add-container.png) 1. Select your new container.
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
Title: Overview of features - Azure Event Hubs | Microsoft Docs
description: This article provides details about features and terminology of Azure Event Hubs. Previously updated : 05/11/2022 Last updated : 08/25/2022 # Features and terminology in Azure Event Hubs
Published events are removed from an event hub based on a configurable, timed-ba
- The **default** value and **shortest** possible retention period is **1 day (24 hours)**. - For Event Hubs **Standard**, the maximum retention period is **7 days**. - For Event Hubs **Premium** and **Dedicated**, the maximum retention period is **90 days**.-- If you change the retention period, it applies to all messages including messages that are already in the event hub.
+- If you change the retention period, it applies to all events including events that are already in the event hub.
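As a sketch, changing the retention period of an existing event hub with Azure PowerShell might look like the following (resource names are placeholders, and the parameter names should be verified against the Az.EventHub module documentation):

```azurepowershell-interactive
# Set the retention period of an existing event hub to 7 days
Set-AzEventHub -ResourceGroupName "MyResourceGroup" -Namespace "MyNamespace" `
    -Name "MyEventHub" -MessageRetentionInDays 7
```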
Event Hubs retains events for a configured retention time that applies across all partitions. Events are automatically removed when the retention period has
The publish/subscribe mechanism of Event Hubs is enabled through *consumer group
In a stream processing architecture, each downstream application equates to a consumer group. If you want to write event data to long-term storage, then that storage writer application is a consumer group. Complex event processing can then be performed by another, separate consumer group. You can only access partitions through a consumer group. There's always a default consumer group in an event hub, and you can create up to the [maximum number of consumer groups](event-hubs-quotas.md) for the corresponding pricing tier.
-There can be at most 5 concurrent readers on a partition per consumer group; however **it's recommended that there's only one active receiver on a partition per consumer group**. Within a single partition, each reader receives all of the messages. If you have multiple readers on the same partition, then you process duplicate messages. You need to handle this in your code, which may not be trivial. However, it's a valid approach in some scenarios.
+There can be at most 5 concurrent readers on a partition per consumer group; however **it's recommended that there's only one active receiver on a partition per consumer group**. Within a single partition, each reader receives all events. If you have multiple readers on the same partition, then you process duplicate events. You need to handle this in your code, which may not be trivial. However, it's a valid approach in some scenarios.
Some clients offered by the Azure SDKs are intelligent consumer agents that automatically manage the details of ensuring that each partition has a single reader and that all partitions for an event hub are being read from. This allows your code to focus on processing the events being read from the event hub so it can ignore many of the details of the partitions. For more information, see [Connect to a partition](#connect-to-a-partition).
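For example, a sketch of creating a dedicated consumer group for a downstream application might look like this (names are placeholders; verify the cmdlet parameters against the Az.EventHub module documentation):

```azurepowershell-interactive
# Create a consumer group for a long-term storage writer application
New-AzEventHubConsumerGroup -ResourceGroupName "MyResourceGroup" -Namespace "MyNamespace" `
    -EventHub "MyEventHub" -Name "storage-writer"
```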
expressroute Expressroute Global Reach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-global-reach.md
ExpressRoute Global Reach is supported in the following places.
> [!NOTE] > * To enable ExpressRoute Global Reach between [different geopolitical regions](expressroute-locations-providers.md#locations), your circuits must be **Premium SKU**.
-> * IPv6 support for ExpressRoute Global Reach is now in Public Preview. See [Enable Global Reach](expressroute-howto-set-global-reach.md) to learn more.
* Australia * Canada
expressroute Expressroute Howto Set Global Reach Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-cli.md
# Configure ExpressRoute Global Reach by using the Azure CLI This article helps you configure Azure ExpressRoute Global Reach by using the Azure CLI. For more information, see [ExpressRoute Global Reach](expressroute-global-reach.md).-
-> [!NOTE]
-> IPv6 support for ExpressRoute Global Reach is now in Public Preview. See [Enable Global Reach](expressroute-howto-set-global-reach.md) for steps to configure this feature using PowerShell.
Before you start configuration, complete the following requirements:
expressroute Expressroute Howto Set Global Reach Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-portal.md
This article helps you configure ExpressRoute Global Reach using the Azure portal. For more information, see [ExpressRoute Global Reach](expressroute-global-reach.md).
-> [!NOTE]
-> IPv6 support for ExpressRoute Global Reach is now in Public Preview.
- ## Before you begin Before you start configuration, confirm the following criteria:
Enable connectivity between your on-premises networks. There are separate sets o
1. On the *Add Global Reach* configuration page, give a name to this configuration. Select the *ExpressRoute circuit* you want to connect this circuit to and enter a **/29 IPv4** for the *Global Reach IPv4 subnet*. We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. Don't use the addresses in this subnet in your Azure virtual networks, private peering subnet, or on-premises network. Select **Add** to add the circuit to the private peering configuration. > [!NOTE]
- > IPv6 support for ExpressRoute Global Reach is now in Public Preview. If you want to enable this feature for test workloads, select "Both" for the *Subnets* field and include a **/125 IPv6** subnet for the *Global Reach IPv6 subnet*.
+ > If you wish to enable IPv6 support for ExpressRoute Global Reach, select "Both" for the *Subnets* field and include a **/125 IPv6** subnet for the *Global Reach IPv6 subnet*.
:::image type="content" source="./media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration.png" alt-text="Screenshot of adding Global Reach in Overview tab.":::
If the two circuits aren't in the same Azure subscription, you'll need authoriza
1. On the *Add Global Reach* configuration page, give a name to this configuration. Check the **Redeem authorization** box. Enter the **Authorization Key** and the **ExpressRoute circuit ID** generated and obtained in Step 1. Then provide a **/29 IPv4** for the *Global Reach IPv4 subnet*. We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. Don't use the addresses in this subnet in your Azure virtual networks, or in your on-premises network. Select **Add** to add the circuit to the private peering configuration. > [!NOTE]
- > IPv6 support for ExpressRoute Global Reach is now in Public Preview. If you want to enable this feature for test workloads, select "Both" for the *Subnets* field and include a **/125 IPv6** subnet for the *Global Reach IPv6 subnet*.
+ > If you wish to enable IPv6 support for ExpressRoute Global Reach, select "Both" for the *Subnets* field and include a **/125 IPv6** subnet for the *Global Reach IPv6 subnet*.
:::image type="content" source="./media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration-with-authorization.png" alt-text="Screenshot of Add Global Reach with authorization key.":::
expressroute Expressroute Howto Set Global Reach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach.md
Before you start configuration, confirm the following:
* Azure private peering is configured on your ExpressRoute circuits. * If you want to run PowerShell locally, verify that the latest version of Azure PowerShell is installed on your computer.
-> [!NOTE]
-> Global Reach does **not** support configuration updates at this time. This means that if you create a Global Reach connection using the following instructions, you must delete and recreate the connection with any configuration updates. Attempting to update an existing Global Reach connection will put your ExpressRoute circuit in a failed state.
->
- ### Working with Azure PowerShell [!INCLUDE [updated-for-az](../../includes/hybrid-az-ps.md)]
Enable connectivity between your on-premises networks. There are separate sets o
``` > [!NOTE]
- > IPv6 support for ExpressRoute Global Reach is now in Public Preview. To add an IPv6 Global Reach connection, you must specify a /125 IPv6 subnet for *-AddressPrefix* and an *-AddressPrefixType* of *IPv6*.
+ > If you wish to enable IPv6 support for ExpressRoute Global Reach, you must specify a /125 IPv6 subnet for *-AddressPrefix* and an *-AddressPrefixType* of *IPv6*.
```azurepowershell-interactive Add-AzExpressRouteCircuitConnectionConfig -Name 'Your_connection_name' -ExpressRouteCircuit $ckt_1 -PeerExpressRouteCircuitPeering $ckt_2.Peerings[0].Id -AddressPrefix '__.__.__.__/125' -AddressPrefixType IPv6
If the two circuits are not in the same Azure subscription, you need authorizati
Add-AzExpressRouteCircuitConnectionConfig -Name 'Your_connection_name' -ExpressRouteCircuit $ckt_1 -PeerExpressRouteCircuitPeering "circuit_2_private_peering_id" -AddressPrefix '__.__.__.__/29' -AuthorizationKey '########-####-####-####-############' ```
- > [!NOTE]
- > IPv6 support for ExpressRoute Global Reach is now in Public Preview. To add an IPv6 Global Reach connection, you must specify a /125 IPv6 subnet for *-AddressPrefix* and an *-AddressPrefixType* of *IPv6*.
+ > [!NOTE]
+ > If you wish to enable IPv6 support for ExpressRoute Global Reach, you must specify a /125 IPv6 subnet for *-AddressPrefix* and an *-AddressPrefixType* of *IPv6*.
```azurepowershell-interactive Add-AzExpressRouteCircuitConnectionConfig -Name 'Your_connection_name' -ExpressRouteCircuit $ckt_1 -PeerExpressRouteCircuitPeering $ckt_2.Peerings[0].Id -AddressPrefix '__.__.__.__/125' -AddressPrefixType IPv6 -AuthorizationKey '########-####-####-####-############'
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt_1
``` > [!NOTE]
-> IPv6 support for ExpressRoute Global Reach is now in Public Preview. To delete an IPv6 Global Reach connection, you must specify an *-AddressPrefixType* of *IPv6* like in the following command.
+> To delete an IPv6 Global Reach connection, you must specify an *-AddressPrefixType* of *IPv6* like in the following command.
```azurepowershell-interactive $ckt_1 = Get-AzExpressRouteCircuit -Name "Your_circuit_1_name" -ResourceGroupName "Your_resource_group"
expressroute Expressroute Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-routing.md
A Private AS Number is allowed with Microsoft Peering, but will also require man
> ### Public peering (deprecated - not available for new circuits)
-The Azure public peering path enables you to connect to all services hosted in Azure over their public IP addresses. These include services listed in the [ExpessRoute FAQ](expressroute-faqs.md) and any services hosted by ISVs on Microsoft Azure. Connectivity to Microsoft Azure services on public peering is always initiated from your network into the Microsoft network. You must use Public IP addresses for the traffic destined to Microsoft network.
+The Azure public peering path enables you to connect to all services hosted in Azure over their public IP addresses. These include services listed in the [ExpressRoute FAQ](expressroute-faqs.md) and any services hosted by ISVs on Microsoft Azure. Connectivity to Microsoft Azure services on public peering is always initiated from your network into the Microsoft network. You must use public IP addresses for the traffic destined to the Microsoft network.
> [!IMPORTANT] > All Azure PaaS services are accessible through Microsoft peering.
frontdoor Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-geo-filtering.md
- Title: Geo-filtering on a domain for Azure Front Door | Microsoft Docs
-description: In this article, you learn about geo-filtering policy for Azure Front Door
----- Previously updated : 08/31/2021----
-# Geo-filtering on a domain for Azure Front Door
-
-By default, Azure Front Door will respond to all user requests regardless of the location where the request is coming from. In some scenarios, you may want to restrict the access to your web application by countries/regions. The Web application firewall (WAF) service in Front Door enables you to define a policy using custom access rules for a specific path on your endpoint to either allow or block access from specified countries/regions.
-
-A WAF policy contains a set of custom rules. The rule consists of match conditions, an action, and a priority. In a match condition, you define a match variable, operator, and match value. For a geo filtering rule, a match variable is REMOTE_ADDR, the operator is GeoMatch, and the value is a two letter country/region code of interest. "ZZ" country code or "Unknown" country captures IP addresses that are not yet mapped to a country in our dataset. You may add ZZ to your match condition to avoid false positives. You can combine a GeoMatch condition and a REQUEST_URI string match condition to create a path-based geo-filtering rule.
-
-You can configure a geo-filtering policy for your Front Door by using [Azure PowerShell](front-door-tutorial-geo-filtering.md) or by using a [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering).
-
-> [!IMPORTANT]
-> Include the country code **ZZ** whenever you use geo-filtering. The **ZZ** country code (or *Unknown* country) captures IP addresses that are not yet mapped to a country in our dataset. This avoids false positives.
-
-## Country/Region code reference
-
-|Country/Region code | Country/Region name |
-| -- | -- |
-| AD | Andorra |
-| AE | United Arab Emirates|
-| AF | Afghanistan|
-| AG | Antigua and Barbuda|
-| AL | Albania|
-| AM | Armenia|
-| AO | Angola|
-| AR | Argentina|
-| AS | American Samoa|
-| AT | Austria|
-| AU | Australia|
-| AZ | Azerbaijan|
-| BA | Bosnia and Herzegovina|
-| BB | Barbados|
-| BD | Bangladesh|
-| BE | Belgium|
-| BF | Burkina Faso|
-| BG | Bulgaria|
-| BH | Bahrain|
-| BI | Burundi|
-| BJ | Benin|
-| BL | Saint Barthélemy|
-| BN | Brunei Darussalam|
-| BO | Bolivia|
-| BR | Brazil|
-| BS | Bahamas|
-| BT | Bhutan|
-| BW | Botswana|
-| BY | Belarus|
-| BZ | Belize|
-| CA | Canada|
-| CD | Democratic Republic of the Congo|
-| CG | Republic of the Congo |
-| CF | Central African Republic|
-| CH | Switzerland|
-| CI | Cote d'Ivoire|
-| CL | Chile|
-| CM | Cameroon|
-| CN | China|
-| CO | Colombia|
-| CR | Costa Rica|
-| CU | Cuba|
-| CV | Cabo Verde|
-| CY | Cyprus|
-| CZ | Czech Republic|
-| DE | Germany|
-| DK | Denmark|
-| DO | Dominican Republic|
-| DZ | Algeria|
-| EC | Ecuador|
-| EE | Estonia|
-| EG | Egypt|
-| ES | Spain|
-| ET | Ethiopia|
-| FI | Finland|
-| FJ | Fiji|
-| FM | Micronesia, Federated States of|
-| FR | France|
-| GB | United Kingdom|
-| GE | Georgia|
-| GF | French Guiana|
-| GH | Ghana|
-| GN | Guinea|
-| GP | Guadeloupe|
-| GR | Greece|
-| GT | Guatemala|
-| GY | Guyana|
-| HK | Hong Kong SAR|
-| HN | Honduras|
-| HR | Croatia|
-| HT | Haiti|
-| HU | Hungary|
-| ID | Indonesia|
-| IE | Ireland|
-| IL | Israel|
-| IN | India|
-| IQ | Iraq|
-| IR | Iran, Islamic Republic of|
-| IS | Iceland|
-| IT | Italy|
-| JM | Jamaica|
-| JO | Jordan|
-| JP | Japan|
-| KE | Kenya|
-| KG | Kyrgyzstan|
-| KH | Cambodia|
-| KI | Kiribati|
-| KN | Saint Kitts and Nevis|
-| KP | Korea, Democratic People's Republic of|
-| KR | Korea, Republic of|
-| KW | Kuwait|
-| KY | Cayman Islands|
-| KZ | Kazakhstan|
-| LA | Lao People's Democratic Republic|
-| LB | Lebanon|
-| LI | Liechtenstein|
-| LK | Sri Lanka|
-| LR | Liberia|
-| LS | Lesotho|
-| LT | Lithuania|
-| LU | Luxembourg|
-| LV | Latvia|
-| LY | Libya |
-| MA | Morocco|
-| MD | Moldova, Republic of|
-| MG | Madagascar|
-| MK | North Macedonia|
-| ML | Mali|
-| MM | Myanmar|
-| MN | Mongolia|
-| MO | Macao SAR|
-| MQ | Martinique|
-| MR | Mauritania|
-| MT | Malta|
-| MV | Maldives|
-| MW | Malawi|
-| MX | Mexico|
-| MY | Malaysia|
-| MZ | Mozambique|
-| NA | Namibia|
-| NE | Niger|
-| NG | Nigeria|
-| NI | Nicaragua|
-| NL | Netherlands|
-| NO | Norway|
-| NP | Nepal|
-| NR | Nauru|
-| NZ | New Zealand|
-| OM | Oman|
-| PA | Panama|
-| PE | Peru|
-| PH | Philippines|
-| PK | Pakistan|
-| PL | Poland|
-| PR | Puerto Rico|
-| PT | Portugal|
-| PW | Palau|
-| PY | Paraguay|
-| QA | Qatar|
-| RE | Reunion|
-| RO | Romania|
-| RS | Serbia|
-| RU | Russian Federation|
-| RW | Rwanda|
-| SA | Saudi Arabia|
-| SD | Sudan|
-| SE | Sweden|
-| SG | Singapore|
-| SI | Slovenia|
-| SK | Slovakia|
-| SN | Senegal|
-| SO | Somalia|
-| SR | Suriname|
-| SS | South Sudan|
-| SV | El Salvador|
-| SY | Syrian Arab Republic|
-| SZ | Swaziland|
-| TC | Turks and Caicos Islands|
-| TG | Togo|
-| TH | Thailand|
-| TN | Tunisia|
-| TR | Turkey|
-| TT | Trinidad and Tobago|
-| TW | Taiwan|
-| TZ | Tanzania, United Republic of|
-| UA | Ukraine|
-| UG | Uganda|
-| US | United States|
-| UY | Uruguay|
-| UZ | Uzbekistan|
-| VC | Saint Vincent and the Grenadines|
-| VE | Venezuela|
-| VG | Virgin Islands, British|
-| VI | Virgin Islands, U.S.|
-| VN | Vietnam|
-| ZA | South Africa|
-| ZM | Zambia|
-| ZW | Zimbabwe|
-
-## Next steps
--- Learn how to [create a Front Door](quickstart-create-front-door.md).-- Learn how to [set up a geo-filtering WAF policy](front-door-tutorial-geo-filtering.md).
hdinsight Hdinsight 50 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-50-component-versioning.md
+
+ Title: Open-source components and versions - Azure HDInsight 5.0
+description: Learn about the open-source components and versions in Azure HDInsight 5.0.
++ Last updated : 08/25/2022++
+# HDInsight 5.0 component versions
+
+In this article, you learn about the open-source components and their versions in Azure HDInsight 5.0.
+
+Starting June 1, 2022, we began rolling out HDInsight 5.0. This version is backward compatible with HDInsight 4.0. All new open-source releases will be added as incremental releases on HDInsight 5.0.
+
+## Open-source components available with HDInsight version 5.0
+
+The open-source component versions associated with HDInsight 5.0 are listed in the following table.
+
+| Component | HDInsight 5.0 | HDInsight 4.0 |
+||| --|
+|Apache Spark | 3.1.2 | 2.4.4, 3.1 |
+|Apache Hive | 3.1.2 | 3.1.2 |
+|Apache Kafka | 2.4.1 | 2.1.1, 2.4.1(Preview) |
+|Apache Hadoop |3.1.1 | 3.1.1 |
+|Apache Tez | 0.9.1 | 0.9.1 |
+|Apache Pig | 0.16.0 | 0.16.1 |
+|Apache Ranger | 1.1.0 | 1.1.0 |
+|Apache Sqoop | 1.5.0 | 1.5.0 |
+|Apache Oozie | 4.3.1 | 4.3.1 |
+|Apache Zookeeper | 3.4.6 | 3.4.6 |
+|Apache Livy | 0.5 | 0.5 |
+|Apache Ambari | 2.7.0 | 2.7.0 |
+|Apache Zeppelin | 0.8.0 | 0.8.0 |
+
+This table lists certain HDInsight 4.0 cluster types that have been retired or will be retired soon.
+
+| Cluster Type | Framework version | Support expiration date | Retirement date |
+||-||--|
+| HDInsight 4.0 Kafka | 2.1.0 | Sep 30, 2022 | Oct 1, 2022 |
+
+## Spark
++
+> [!NOTE]
+> * If you use the Azure portal to create a Spark cluster for HDInsight, the dropdown list shows an additional version, Spark 3.1 (HDI 5.0), along with the older versions. This version is a renamed version of Spark 3.1 (HDI 4.0), and it is backward compatible.
+> * This is only a UI-level change, which doesn't affect existing users or users who already use an ARM template to build their clusters.
+> * For backward compatibility, ARM supports creating Spark 3.1 with both the HDI 4.0 and HDI 5.0 versions, which map to the same Spark 3.1 (HDI 5.0) version.
+> * The Spark 3.1 (HDI 5.0) cluster comes with HWC 2.0, which works well together with the Interactive Query (HDI 5.0) cluster.
+
+## Interactive Query
++
+> [!NOTE]
+> * If you create an Interactive Query cluster, the dropdown list shows another version, Interactive Query 3.1 (HDI 5.0).
+> * If you're going to use Spark 3.1 along with Hive, which requires ACID support via Hive Warehouse Connector (HWC), you need to select Interactive Query 3.1 (HDI 5.0).
+
+## Kafka
+
+**Known issue** – The current ARM template supports only version 4.0 even though the portal shows a 5.0 image. Cluster creation may fail with the following error message if you select version 5.0 in the UI.
+
+`HDI Version'5.0" is not supported for clusterType ''Kafka" and component Version '2.4'.,Cluster component version is not applicable for HDI version: 5.0 cluster type: KAFKA (Code: BadRequest)`
+
+We're working on this issue, and a fix will be rolled out shortly.
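For reference, a trimmed sketch of where the HDI version and Kafka component version appear in an ARM template is shown below (property names follow the `Microsoft.HDInsight/clusters` schema; the values are illustrative):

```json
{
  "type": "Microsoft.HDInsight/clusters",
  "apiVersion": "2021-06-01",
  "name": "[parameters('clusterName')]",
  "properties": {
    "clusterVersion": "4.0",
    "clusterDefinition": {
      "kind": "KAFKA",
      "componentVersion": {
        "Kafka": "2.4"
      }
    }
  }
}
```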
+
+### Upcoming version upgrades
+The HDInsight team is working on upgrading other open-source components:
+1. Spark 3.2.0
+1. Kafka 3.2.1
+1. HBase 2.4.9
+
+## Next steps
+
+- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)
+- [Enterprise Security Package](./enterprise-security-package.md)
+- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md
Title: Apache Hadoop components and versions - Azure HDInsight
-description: Learn about the Apache Hadoop components and versions in Azure HDInsight.
+ Title: Open-source components and versions - Azure HDInsight
+description: Learn about the open-source components and versions in Azure HDInsight.
Previously updated : 08/05/2022 Last updated : 08/25/2022 # Azure HDInsight versions
-HDInsight bundles Apache Hadoop environment components and HDInsight platform into a package that is deployed on a cluster. For more information, see [how HDInsight versioning works](hdinsight-overview-versioning.md).
+HDInsight bundles open-source components and HDInsight platform into a package that is deployed on a cluster. For more information, see [how HDInsight versioning works](hdinsight-overview-versioning.md).
## Supported HDInsight versions
This table lists the versions of HDInsight that are available in the Azure porta
| HDInsight version | VM OS | Release date| Support type | Support expiration date | Retirement date | High availability | | | | | | | | |
+| [HDInsight 5.0](hdinsight-50-component-versioning.md) |Ubuntu 18.0.4 LTS |July 01, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | See [HDInsight 5.0](hdinsight-50-component-versioning.md) for date details. | See [HDInsight 5.0](hdinsight-50-component-versioning.md) for date details. |Yes |
| [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 18.0.4 LTS |September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | See [HDInsight 4.0](hdinsight-40-component-versioning.md) for date details. | See [HDInsight 4.0](hdinsight-40-component-versioning.md) for date details. |Yes | | [HDInsight 3.6](hdinsight-36-component-versioning.md) |Ubuntu 16.0.4 LTS |April 4, 2017 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Standard support expired on June 30, 2021 for all cluster types.<br> Basic support expires on September 30, 2022. See [HDInsight 3.6 component versions](hdinsight-36-component-versioning.md) for cluster type details. |October 1, 2022 |Yes |
-**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. And it may no longer available through the Azure portal for cluster creation.
+**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You may not be able to create clusters from the Azure portal.
-**Retirement** means that existing clusters of an HDInsight version continue to run as is. New clusters of this version can't be created through any means, which includes the CLI and SDKs. Other control plane features, such as manual scaling and autoscaling, are not guaranteed to work after retirement date. Support isn't available for retired versions.
+**Retirement** means that existing clusters of an HDInsight version continue to run as is. New clusters of this version can't be created through any means, which includes the CLI and SDKs. Other control plane features, such as manual scaling and autoscaling, aren't guaranteed to work after retirement date. Support isn't available for retired versions.
## Support options for HDInsight versions
Support is defined as a time period that an HDInsight version is supported by Mi
Standard support provides updates and support on HDInsight clusters. Microsoft recommends building solutions using the most recent fully supported version.
-Standard support includes the following:
+Standard support includes:
- Ability to create support requests on HDInsight 4.0 clusters. - Support for troubleshooting solutions built on 4.0 clusters. - Requests to restart services or nodes. - Root cause analysis investigations on support requests. - Root cause analysis or fixes to improve job or query performance.-- Root cause analysis or fixes to improve customer-initiated changes, e.g., changing service configurations or issues due to custom script actions.
+- Root cause analysis or fixes to improve customer-initiated changes, for example, changing service configurations or issues due to custom script actions.
- Product updates for critical security fixes until version retirement. - Scoped product updates to the HDInsight Resource provider. - Selective fixes or changes to HDInsight 4.0 images or open-source software (OSS) component versions. ### Basic support
-Basic support provides limited servicing to the HDInsight Resource provider. HDInsight images and open-source software (OSS) components will not be serviced. Only critical security fixes will be patched on HDInsight clusters.
+Basic support provides limited servicing to the HDInsight Resource provider. HDInsight images and open-source software (OSS) components won't be serviced. Only critical security fixes will be patched on HDInsight clusters.
-Basic support includes the following:
+Basic support includes:
- Continued use of existing HDInsight 3.6 clusters. - Ability for existing HDInsight 3.6 customers to create new 3.6 clusters. - Ability to scale HDInsight 3.6 clusters up and down via autoscale or manual scale.
Basic support includes the following:
- Ability to create support requests on HDInsight 3.6 clusters. - Requests to restart services or nodes.
-Basic support does not include the following:
+Basic support doesn't include:
- Fixes or changes to HDInsight 3.6 images or open-source software (OSS) component versions. - Support for troubleshooting solutions built on 3.6 clusters. - Adding new features or functionality. - Support for advice or ad-hoc queries. - Root cause analysis investigations on support requests. - Root cause analysis or fixes to improve job or query performance.-- Root cause analysis or fixes to improve customer-initiated changes, e.g., changing service configurations or issues due to custom script actions.
+- Root cause analysis or fixes to improve customer-initiated changes, for example, changing service configurations or issues due to custom script actions.
-Microsoft does not encourage creating analytics pipelines or solutions on clusters in basic support. We recommend migrating existing clusters to the most recent fully supported version.
+Microsoft doesn't encourage creating analytics pipelines or solutions on clusters in basic support. We recommend migrating existing clusters to the most recent fully supported version.
## HDInsight 3.6 to 4.0 Migration Guides - [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](spark/migrate-versions.md).
Microsoft does not encourage creating analytics pipelines or solutions on cluste
## Release notes
-For additional release notes on the latest versions of HDInsight, see [HDInsight release notes](hdinsight-release-notes.md).
+For release notes on the latest versions of HDInsight, see [HDInsight release notes](hdinsight-release-notes.md).
## Versioning considerations-- Once a cluster is deployed with an image, that cluster is not automatically upgraded to newer image version. When creating new clusters, most recent image version will be deployed.
+- Once a cluster is deployed with an image, that cluster isn't automatically upgraded to a newer image version. When you create new clusters, the most recent image version is deployed.
- Customers should test and validate that applications run properly when using new HDInsight version. - HDInsight reserves the right to change the default version without prior notice. If you have a version dependency, specify the HDInsight version when you create your clusters. - HDInsight may retire an OSS component version before retiring the HDInsight version.
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-do-custom-search.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 # Defining custom search parameters
-The FHIR specification defines a set of search parameters for all resources and search parameters that are specific to a resource(s). However, there are scenarios where you might want to search against an element in a resource that isn't defined by the FHIR specification as a standard search parameter. This article describes how you can define your own [search parameters](https://www.hl7.org/fhir/searchparameter.html) to be used in the FHIR service in Azure Health Data Services (hereby called FHIR service).
+The FHIR specification defines a set of search parameters that apply to all resources. Additionally, FHIR defines many search parameters that are specific to certain resources. There are scenarios, however, where you might want to search against an element in a resource that isn't defined by the FHIR specification as a standard search parameter. This article describes how you can define your own custom [search parameters](https://www.hl7.org/fhir/searchparameter.html) for use in the FHIR service in Azure Health Data Services.
> [!NOTE]
-> Each time you create, update, or delete a search parameter you'll need to run a [reindex job](how-to-run-a-reindex.md) to enable the search parameter to be used in production. Below we will outline how you can test search parameters before reindexing the entire FHIR service.
+> Each time you create, update, or delete a search parameter, you'll need to run a [reindex job](how-to-run-a-reindex.md) to enable the search parameter for live production. Below we will outline how you can test search parameters before reindexing the entire FHIR service database.
## Create new search parameter
-To create a new search parameter, you `POST` the `SearchParameter` resource to the database. The code example below shows how to add the [US Core Race SearchParameter](http://hl7.org/fhir/us/core/STU3.1.1/SearchParameter-us-core-race.html) to the `Patient` resource.
+To create a new search parameter, you need to `POST` a `SearchParameter` resource to the FHIR service database. The code example below shows how to add the [US Core Race search parameter](http://hl7.org/fhir/us/core/STU3.1.1/SearchParameter-us-core-race.html) to the `Patient` resource type in your FHIR service database.
```rest POST {{FHIR_URL}}/SearchParameter
POST {{FHIR_URL}}/SearchParameter
``` > [!NOTE]
-> The new search parameter will appear in the capability statement of the FHIR service after you POST the search parameter to the database **and** reindex your database. Viewing the `SearchParameter` in the capability statement is the only way tell if a search parameter is supported in your FHIR service. If you can find the search parameter by searching for the search parameter but cannot see it in the capability statement, you still need to index the search parameter. You can POST multiple search parameters before triggering a reindex operation.
+> The new search parameter will appear in the capability statement of the FHIR service after you `POST` the search parameter to the database **and** reindex your database. Viewing the `SearchParameter` in the capability statement is the only way to tell if a search parameter is supported in your FHIR service. If you cannot find the `SearchParameter` in the capability statement, then you still need to reindex your database to activate the search parameter. You can `POST` multiple search parameters before triggering a reindex operation.
-Important elements of a `SearchParameter`:
+Important elements of a `SearchParameter` resource, with a worked example after this list:
-* **url**: A unique key to describe the search parameter. Many organizations, such as HL7, use a standard URL format for the search parameters that they define, as shown above in the US Core race search parameter.
+* `url`: A unique key to describe the search parameter. Organizations such as HL7 use a standard URL format for the search parameters that they define, as shown above in the US Core Race search parameter.
-* **code**: The value stored in **code** is what you'll use when searching. For the example above, you would search with `GET {FHIR_URL}/Patient?race=<code>` to get all patients of a specific race. The code must be unique for the resource(s) the search parameter applies to.
+* `code`: The value stored in the **code** element is the name used for the search parameter when it is included in an API call. For the example above, you would search with `GET {{FHIR_URL}}/Patient?race=<code>` where `<code>` is in the value set from the specified coding system. This call would retrieve all patients of a certain race.
-* **base**: Describes which resource(s) the search parameter applies to. If the search parameter applies to all resources, you can use `Resource`; otherwise, you can list all the relevant resources.
+* `base`: Describes which resource type(s) the search parameter applies to. If the search parameter applies to all resources, you can use `Resource`; otherwise, you can list all the relevant resource types.
-* **type**: Describes the data type for the search parameter. Type is limited by the support for the FHIR service. This means that you can't define a search parameter of type Special or define a [composite search parameter](overview-of-search.md) unless it's a supported combination.
+* `type`: Describes the data type for the search parameter. Type is limited by the support for data types in the FHIR service. This means that you can't define a search parameter of type Special or define a [composite search parameter](overview-of-search.md) unless it's a supported combination.
-* **expression**: Describes how to calculate the value for the search. When describing a search parameter, you must include the expression, even though it isn't required by the specification. This is because you need either the expression or the xpath syntax and the FHIR service ignores the xpath syntax.
+* `expression`: Describes how to calculate the value for the search. When describing a search parameter, you must include the expression, even though it isn't required by the specification. This is because you need either the expression or the xpath syntax and the FHIR service ignores the xpath syntax.
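To tie these elements together, below is a trimmed sketch of a `SearchParameter` body based on the published US Core Race definition (the `status` and `description` values are abbreviated for illustration):

```json
{
  "resourceType": "SearchParameter",
  "id": "us-core-race",
  "url": "http://hl7.org/fhir/us/core/SearchParameter/us-core-race",
  "name": "USCoreRace",
  "status": "active",
  "description": "Returns patients with a race extension matching the specified code",
  "code": "race",
  "base": [ "Patient" ],
  "type": "token",
  "expression": "Patient.extension.where(url = 'http://hl7.org/fhir/us/core/StructureDefinition/us-core-race').extension.value.code"
}
```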
-## Test search parameters
+## Test new search parameters
-While you can't use the search parameters in production until you run a reindex job, there are a few ways to test your search parameters before reindexing the entire database.
+While you can't use the new search parameters in production until you run a reindex job, there are a few ways to test your custom search parameters before reindexing the entire database.
-First, you can test your new search parameter to see what values will be returned. By running the command below against a specific resource instance (by inputting their ID), you'll get back a list of value pairs with the search parameter name and the value stored. This will include all of the search parameters for the resource and you can scroll through to find the search parameter you created. Running this command won't change any behavior in your FHIR service.
+First, you can test a new search parameter to see what values will be returned. By running the command below against a specific resource instance (by supplying the resource ID), you'll get back a list of value pairs with the search parameter name and the value stored in the corresponding element. This will include all of the search parameters for the resource. You can scroll through to find the search parameter you created. Running this command won't change any behavior in your FHIR service.
```rest GET {{FHIR_URL}}/{{RESOURCE}}/{{RESOURCE_ID}}/$reindex
The result will look like this:
{ "name": "race", "valueString": "2028-9"
- },
-...
+ }
+ ]
+ ...}
```
-Once you see that your search parameter is displaying as expected, you can reindex a single resource to test searching with the element. First you'll reindex a single resource:
+
+Once you see that your search parameter is displaying as expected, you can reindex a single resource to test searching with your new search parameter. To reindex a single resource:
```rest POST {{FHIR_URL}}/{{RESOURCE}}/{{RESOURCE_ID}}/$reindex ```
-Running this, sets the indices for any search parameters for the specific resource that you defined for that resource type. This does make an update to the FHIR service. Now you can search and set the use partial indices header to true, which means that it will return results where any of the resources has the search parameter indexed, even if not all resources of that type have it indexed.
+Running this `POST` call sets the indices for any search parameters defined for the resource instance specified in the request. This call does make a change to the FHIR service database. Now you can search and set the `x-ms-use-partial-indices` header to `true`, which causes the FHIR service to return results for any of the resources that have the search parameter indexed, even if not all resource instances of that type have it indexed.
Continuing with our example above, you could index one patient to enable the US Core Race `SearchParameter`: ```rest
-POST https://{{FHIR_URL}/Patient/{{PATIENT_ID}}/$reindex
+POST {{FHIR_URL}}/Patient/{{PATIENT_ID}}/$reindex
```
-And then search for patients that have a specific race:
+And then do a test search for the patient by race:
```rest
-GET https://{{FHIR_URL}}/Patient?race=2028-9
+GET {{FHIR_URL}}/Patient?race=2028-9
x-ms-use-partial-indices: true ```
-After you have tested and are satisfied that your search parameter is working as expected, run or schedule your reindex job so the search parameters can be used in the FHIR service for production use cases.
+After you have tested your new search parameter and confirmed that it is working as expected, run or schedule your reindex job so the new search parameter(s) can be used in live production.
+
+See [Running a reindex job](../fhir/how-to-run-a-reindex.md) for information on how to re-index your FHIR service database.
## Update a search parameter
-To update a search parameter, use `PUT` to create a new version of the search parameter. You must include the `SearchParameter ID` in the `id` element of the body of the `PUT` request and in the `PUT` call.
+To update a search parameter, use `PUT` to create a new version of the search parameter. You must include the search parameter ID in the `id` field in the body of the `PUT` request as well as the `PUT` request string.
> [!NOTE]
-> If you don't know the ID for your search parameter, you can search for it. Using `GET {{FHIR_URL}}/SearchParameter` will return all custom search parameters, and you can scroll through the search parameter to find the search parameter you need. You could also limit the search by name. With the example below, you could search for name using `USCoreRace: GET {{FHIR_URL}}/SearchParameter?name=USCoreRace`.
+> If you don't know the ID for your search parameter, you can search for it using `GET {{FHIR_URL}}/SearchParameter`. This will return all custom as well as standard search parameters, and you can scroll through the list to find the search parameter you need. You could also limit the search by name. As shown in the example request below, the name of the custom `SearchParameter` resource instance is `USCoreRace`. You could search for this `SearchParameter` resource by name using `GET {{FHIR_URL}}/SearchParameter?name=USCoreRace`.
```rest
-PUT {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
+PUT {{FHIR_URL}}/SearchParameter/{{SearchParameter_ID}}
{ "resourceType" : "SearchParameter",
- "id" : "SearchParameter ID",
+ "id" : "{{SearchParameter_ID}}",
"url" : "http://hl7.org/fhir/us/core/SearchParameter/us-core-race", "version" : "3.1.1", "name" : "USCoreRace",
PUT {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
```
-The result will be an updated `SearchParameter` and the version will increment.
+The result of the above request will be an updated `SearchParameter` resource.
> [!Warning]
-> Be careful when updating SearchParameters that have already been indexed in your database. Changing an existing SearchParameter's behavior could have impacts on the expected behavior. We recommend running a reindex job immediately.
+> Be careful when updating search parameters. Changing an existing search parameter could impact the expected search behavior. We recommend running a reindex job immediately.
## Delete a search parameter If you need to delete a search parameter, use the following: ```rest
-Delete {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
+DELETE {{FHIR_URL}}/SearchParameter/{{SearchParameter_ID}}
``` > [!Warning]
-> Be careful when deleting SearchParameters that have already been indexed in your database. Changing an existing SearchParameter's behavior could have impacts on the expected behavior. We recommend running a reindex job immediately.
+> Be careful when deleting search parameters. Deleting an existing search parameter could impact the expected search behavior. We recommend running a reindex job immediately.
## Next steps
-In this article, you've learned how to create a search parameter. Next you can learn how to reindex your FHIR service. For more information, see
+In this article, you've learned how to create a custom search parameter. Next you can learn how to reindex your FHIR service database. For more information, see
>[!div class="nextstepaction"] >[How to run a reindex job](how-to-run-a-reindex.md)
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-run-a-reindex.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 # Running a reindex job
-There are scenarios where you may have search or sort parameters in the FHIR service in Azure Health Data Services (hereby called FHIR service) that haven't yet been indexed. This scenario is relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers an overview of how to run a reindex job to index any search or sort parameters that haven't yet been indexed in your database.
+There are scenarios where you may have search parameters in the FHIR service in Azure Health Data Services that haven't yet been indexed. This is the case when you define your own custom search parameters. Until the search parameter is indexed, it can't be used in live production. This article covers how to run a reindex job to index any custom search parameters that haven't yet been indexed in your FHIR service database.
> [!Warning]
-> It's important that you read this entire article before getting started. A reindex job can be very performance intensive. This article includes options for how to throttle and control the reindex job.
+> It's important that you read this entire article before getting started. A reindex job can be very performance intensive. This article discusses options for how to throttle and control a reindex job.
## How to run a reindex job
-To start a reindex job, use the following code example:
+To reindex the entire FHIR service database and make your custom search parameter operational, use the following `POST` call with the JSON formatted `Parameters` resource in the request body:
```json
-POST {{FHIR URL}}/$reindex
+POST {{FHIR_URL}}/$reindex
{
POST {{FHIR URL}}/$reindex
} ```
-If the request is successful, a status of **201 Created** gets returned. The result of this message will look like:
+Leave the `"parameter": []` field blank (as shown) if you don't need to tweak the compute resources allocated to the reindex job. If the request is successful, you will receive a **201 Created** status code in addition to a `Parameters` resource in response:
```json HTTP/1.1 201 Created
Content-Location: https://{{FHIR URL}}/_operations/reindex/560c7c61-2c70-4c54-b8
``` > [!NOTE]
-> To check the status of or to cancel a reindex job, you'll need the reindex ID. This is the ID of the resulting Parameters resource. In the example above, the ID for the reindex job would be `560c7c61-2c70-4c54-b86d-c53a9d29495e`.
+> To check the status of a reindex job or to cancel the job, you'll need the reindex ID. This is the `"id"` carried in the `"parameter"` value returned in the response. In the example above, the ID for the reindex job would be `560c7c61-2c70-4c54-b86d-c53a9d29495e`.
## How to check the status of a reindex job Once you've started a reindex job, you can check the status of the job using the following call:
-`GET {{FHIR URL}}/_operations/reindex/{{reindexJobId}`
+`GET {{FHIR_URL}}/_operations/reindex/{{reindexJobId}}`
-The status of the reindex job result is shown below:
+An example response is shown below:
```json {
The status of the reindex job result is shown below:
"name": "resources", "valueString":
- "{LIST OF IMPACTED RESOURCES}"
- },
- {
+ "{{LIST_OF_IMPACTED_RESOURCES}}"
+ }
+ ]
+}
```
-The following information is shown in the reindex job result:
+The following information is shown in the above response:
-* **totalResourcesToReindex**: Includes the total number of resources that are being reindexed as part of the job.
+* `totalResourcesToReindex`: Includes the total number of resources that are being reindexed in this job.
-* **resourcesSuccessfullyReindexed**: The total that have already been successfully reindexed.
+* `resourcesSuccessfullyReindexed`: The total number of resources that have already been reindexed in this job.
-* **progress**: Reindex job percent complete. Equals resourcesSuccessfullyReindexed/totalResourcesToReindex x 100.
+* `progress`: Reindex job percent complete. Equals `resourcesSuccessfullyReindexed`/`totalResourcesToReindex` x 100.
-* **status**: States if the reindex job is queued, running, complete, failed, or canceled.
+* `status`: States if the reindex job is queued, running, complete, failed, or canceled.
-* **resources**: Lists all the resource types impacted by the reindex job.
+* `resources`: Lists all the resource types impacted by the reindex job.
## Delete a reindex job
-If you need to cancel a reindex job, use a delete call and specify the reindex job ID:
+If you need to cancel a reindex job, use a `DELETE` call and specify the reindex job ID:
-`Delete {{FHIR URL}}/_operations/reindex/{{reindexJobId}`
+`DELETE {{FHIR_URL}}/_operations/reindex/{{reindexJobId}}`
## Performance considerations
-A reindex job can be quite performance intensive. We've implemented some throttling controls to help you manage how a reindex job will run on your database.
+A reindex job can be quite performance intensive. The FHIR service offers some throttling controls to help you manage how a reindex job will run on your database.
> [!NOTE] > It is not uncommon on large datasets for a reindex job to run for days.
-Below is a table outlining the available parameters, defaults, and recommended ranges. You can use these parameters to either speedup the process (use more compute) or slow down the process (use less compute).
+Below is a table outlining the available parameters, defaults, and recommended ranges for controlling reindex job compute resources. You can use these parameters to either speed up the process (use more compute) or slow down the process (use less compute).
| **Parameter** | **Description** | **Default** | **Available Range** | | | - | | - |
-| QueryDelayIntervalInMilliseconds | The delay between each batch of resources being kicked off during the reindex job. A smaller number will speed up the job while a higher number will slow it down. | 500 MS (.5 seconds) | 50 to 500000 |
-| MaximumResourcesPerQuery | The maximum number of resources included in the batch of resources to be reindexed. | 100 | 1-5000 |
-| MaximumConcurrency | The number of batches done at a time. | 1 | 1-10 |
+| `QueryDelayIntervalInMilliseconds` | The delay between each batch of resources being kicked off during the reindex job. A smaller number will speed up the job while a larger number will slow it down. | 500 MS (.5 seconds) | 50 to 500000 |
+| `MaximumResourcesPerQuery` | The maximum number of resources included in the batch of resources to be reindexed. | 100 | 1-5000 |
+| `MaximumConcurrency` | The number of batches done at a time. | 1 | 1-10 |
-If you want to use any of the parameters above, you can pass them into the Parameters resource when you start the reindex job.
+If you want to use any of the parameters above, you can pass them into the `Parameters` resource when you send the initial `POST` request to start a reindex job.
```json+
+POST {{FHIR_URL}}/$reindex
+ { "resourceType": "Parameters", "parameter": [
If you want to use any of the parameters above, you can pass them into the Param
## Next steps
-In this article, you've learned how to start a reindex job. To learn how to define new search parameters that require the reindex job, see
+In this article, you've learned how to perform a reindex job in your FHIR service. To learn how to define custom search parameters, see
>[!div class="nextstepaction"] >[Defining custom search parameters](how-to-do-custom-search.md)
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md
Previously updated : 06/06/2022- Last updated : 08/18/2022+ # Overview of FHIR search
-The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<WORKSPACE NAME>-<ACCOUNT-NAME>.fhir.azurehealthcareapis.com`. In the examples, we'll use the placeholder `{{FHIR_URL}}` for this URL.
+The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines an API for querying resources in a FHIR server database. This article will guide you through some key aspects of querying data in FHIR. For complete details about the FHIR search API, refer to the HL7 [FHIR Search](https://www.hl7.org/fhir/search.html) documentation.
-FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all patients in the database, you could use the following request:
+Throughout this article, we'll demonstrate FHIR search syntax in example API calls with the placeholder `{{FHIR_URL}}` to represent the FHIR server URL. In the case of the FHIR service in Azure Health Data Services, this URL would be `https://<WORKSPACE-NAME>-<FHIR-SERVICE-NAME>.fhir.azurehealthcareapis.com`.
+
+FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources in the FHIR server database. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all `Patient` resources in the database, you could use the following request:
```rest
GET {{FHIR_URL}}/Patient
```
-You can also search using `POST`, which is useful if the query string is too long. To search using `POST`, the search parameters can be submitted as a form body. This allows for longer, more complex series of query parameters that might be difficult to see and understand in a query string.
+You can also search using `POST`. To search using `POST`, the search parameters are delivered in the body of the request. This makes it easier to send queries with longer, more complex series of parameters.
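For example, a query for patients named John could be sent with the parameter in the request body (the same `_search` pattern is shown in more detail in the samples page):

```rest
POST {{FHIR_URL}}/Patient/_search
content-type: application/x-www-form-urlencoded

name=John
```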
-If the search request is successful, you'll receive a FHIR bundle response with the type `searchset`. If the search fails, you'll find the error details in the `OperationOutcome` to help you understand why the search failed.
+With either `POST` or `GET`, if the search request is successful, you'll receive a FHIR `searchset` bundle containing the resource instance(s) returned from the search. If the search fails, you'll find the error details in an `OperationOutcome` response.
-In the following sections, we'll cover the various aspects involved in searching. Once you've reviewed these details, refer to our [samples page](search-samples.md) that has examples of searches that you can make in the FHIR service in the Azure Health Data Services.
+In the following sections, we'll cover the various aspects of querying resources in FHIR. Once you've reviewed these topics, refer to the [FHIR search samples page](search-samples.md), which features examples of different FHIR search methods.
## Search parameters
-When you do a search, you'll search based on various attributes of the resource. These attributes are called search parameters. Each resource has a set of defined search parameters. The search parameter must be defined and indexed in the database for you to successfully search against it.
+When you do a search in FHIR, you are searching the database for resources that match certain search criteria. The FHIR API specifies a rich set of search parameters for fine-tuning search criteria. Each resource in FHIR carries information as a set of elements, and search parameters query against the information in these elements. If a positive match is found between the request's search parameters and the element values stored in a resource instance, the FHIR server returns a bundle containing the resource instance(s) whose elements satisfied the search criteria.
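For example, the following request uses the `name` search parameter to match against the name elements stored in `Patient` resources:

```rest
GET {{FHIR_URL}}/Patient?name=Jane
```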
-Each search parameter has a defined [data types](https://www.hl7.org/fhir/search.html#ptypes). The support for the various data types is outlined below:
+For each search parameter, the FHIR specification defines the [data type(s)](https://www.hl7.org/fhir/search.html#ptypes) that can be used. Support in the FHIR service for the various data types is outlined below.
-| **Search parameter type** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
+| **Search parameter type** | **FHIR service in Azure Health Data Services** | **Azure API for FHIR** | **Comment**|
| - | -- | - | - |
| number | Yes | Yes |
| date | Yes | Yes |
| string | Yes | Yes |
| token | Yes | Yes |
| reference | Yes | Yes |
-| composite | Partial | Partial | The list of supported composite types is described later in this article |
+| composite | Partial | Partial | The list of supported composite types is given later in this article. |
| quantity | Yes | Yes |
| uri | Yes | Yes |
| special | No | No |

### Common search parameters
-There are [common search parameters](https://www.hl7.org/fhir/search.html#all) that apply to all resources. These are listed below, along with their support:
+There are [common search parameters](https://www.hl7.org/fhir/search.html#all) that apply to all resources in FHIR. These are listed below, along with their support in the FHIR service:
-| **Common search parameter** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
+| **Common search parameter** | **FHIR service in Azure Health Data Services** | **Azure API for FHIR** | **Comment**|
| - | -- | - | - |
-| _id | Yes | Yes
-| _lastUpdated | Yes | Yes |
-| _tag | Yes | Yes |
-| _type | Yes | Yes |
-| _security | Yes | Yes |
-| _profile | Yes | Yes |
-| _has | Yes. | Yes | |
-| _query | No | No |
-| _filter | No | No |
-| _list | No | No |
-| _text | No | No |
-| _content | No | No |
+| `_id` | Yes | Yes |
+| `_lastUpdated` | Yes | Yes |
+| `_tag` | Yes | Yes |
+| `_type` | Yes | Yes |
+| `_security` | Yes | Yes |
+| `_profile` | Yes | Yes |
+| `_has` | Yes | Yes |
+| `_query` | No | No |
+| `_filter` | No | No |
+| `_list` | No | No |
+| `_text` | No | No |
+| `_content` | No | No |
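For example, you can use the `_id` common parameter to retrieve a resource by its logical id:

```rest
GET {{FHIR_URL}}/Patient?_id=45
```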
### Resource-specific parameters
-With FHIR service in Azure Health Data Services, we support almost all [resource-specific search parameters](https://www.hl7.org/fhir/searchparameter-registry.html) defined by the FHIR specification. The only search parameters we don't support are available in the links below:
+The FHIR service in Azure Health Data Services supports almost all [resource-specific search parameters](https://www.hl7.org/fhir/searchparameter-registry.html) defined in the FHIR specification. Search parameters that are not supported are listed in the links below:
* [STU3 Unsupported Search Parameters](https://github.com/microsoft/fhir-server/blob/main/src/Microsoft.Health.Fhir.Core/Data/Stu3/unsupported-search-parameters.json)
You can also see the current support for search parameters in the [FHIR Capabili
```rest
GET {{FHIR_URL}}/metadata
```
-To see the search parameters in the capability statement, navigate to `CapabilityStatement.rest.resource.searchParam` to see the search parameters for each resource and `CapabilityStatement.rest.searchParam` to find the search parameters for all resources.
+To view the supported search parameters in the capability statement, navigate to `CapabilityStatement.rest.resource.searchParam` for the resource-specific search parameters and `CapabilityStatement.rest.searchParam` for search parameters that apply to all resources.
> [!NOTE]
-> FHIR service in Azure Health Data Services does not automatically create or index any search parameters that are not defined by the FHIR specification. However, we do provide support for you to to define your own [search parameters](how-to-do-custom-search.md).
+> The FHIR service in Azure Health Data Services does not automatically index search parameters that are not defined in the base FHIR specification. However, the FHIR service does support [custom search parameters](how-to-do-custom-search.md).
### Composite search parameters
-Composite search allows you to search against value pairs. For example, if you were searching for a height observation where the person was 60 inches, you would want to make sure that a single component of the observation contained the code of height **and** the value of 60. You wouldn't want to get an observation where a weight of 60 and height of 48 was stored, even though the observation would have entries that qualified for value of 60 and code of height, just in different component sections.
+Composite searches in FHIR allow you to search against element pairs as logically connected units. For example, if you were searching for observations where the height of the patient was over 60 inches, you would want to make sure that a single property of the observation contained the height code *and* a value greater than 60 inches (the value should pertain only to height). You wouldn't want a positive match on an observation with the height code *and* an arm-to-arm length over 60 inches, for example. Composite search parameters prevent this problem by searching against pre-specified pairs of elements whose values must both meet the search criteria for a positive match to occur.
-With the FHIR service for the Azure Health Data Services, we support the following search parameter type pairings:
+The FHIR service in Azure Health Data Services supports the following search parameter type pairings for composite searches:
* Reference, Token
* Token, Date
* Token, String
* Token, Token
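For example, a Token, Quantity pairing lets you treat an observation's code and value as one connected unit. The following is a minimal sketch, assuming the LOINC code `8302-2` for body height and a height value recorded in inches:

```rest
GET {{FHIR_URL}}/Observation?code-value-quantity=8302-2$gt60
```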
-For more information, see the HL7 [Composite Search Parameters](https://www.hl7.org/fhir/search.html#composite).
+For more information, see the HL7 [Composite Search Parameters](https://www.hl7.org/fhir/search.html#composite) documentation.
> [!NOTE]
-> Composite search parameters do not support modifiers per the FHIR specification.
+> Composite search parameters do not support modifiers, as per the FHIR specification.
### Modifiers & prefixes
-[Modifiers](https://www.hl7.org/fhir/search.html#modifiers) allow you to modify the search parameter. Below is an overview of all the FHIR modifiers and the support:
+[Modifiers](https://www.hl7.org/fhir/search.html#modifiers) allow you to qualify search parameters with additional conditions. Below is a list of FHIR modifiers and their support in the FHIR service:
-| **Modifiers** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
+| **Modifiers** | **FHIR service in Azure Health Data Services** | **Azure API for FHIR** | **Comment**|
| - | -- | - | - |
-| :missing | Yes | Yes |
-| :exact | Yes | Yes |
-| :contains | Yes | Yes |
-| :text | Yes | Yes |
-| :type (reference) | Yes | Yes |
-| :not | Yes | Yes |
-| :below (uri) | Yes | Yes |
-| :above (uri) | Yes | Yes |
-| :in (token) | No | No |
-| :below (token) | No | No |
-| :above (token) | No | No |
-| :not-in (token) | No | No |
-
-For search parameters that have a specific order (numbers, dates, and quantities), you can use a [prefix](https://www.hl7.org/fhir/search.html#prefix) on the parameter to help with finding matches. The FHIR service in the Azure Health Data Services supports all prefixes.
+| `:missing` | Yes | Yes |
+| `:exact` | Yes | Yes |
+| `:contains` | Yes | Yes |
+| `:text` | Yes | Yes |
+| `:type` (reference) | Yes | Yes |
+| `:not` | Yes | Yes |
+| `:below` (uri) | Yes | Yes |
+| `:above` (uri) | Yes | Yes |
+| `:in` (token) | No | No |
+| `:below` (token) | No | No |
+| `:above` (token) | No | No |
+| `:not-in` (token) | No | No |
+
+For search parameters that have a specific order (numbers, dates, and quantities), you can use a [prefix](https://www.hl7.org/fhir/search.html#prefix) before the parameter value to refine the search criteria (e.g. `Patient?_lastUpdated=gt2022-08-01` where the prefix `gt` means "greater than"). The FHIR service in Azure Health Data Services supports all prefixes defined in the FHIR standard.
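For instance, to find patients born in a given year, you can pair the `ge` ("greater than or equal") and `le` ("less than or equal") prefixes:

```rest
GET {{FHIR_URL}}/Patient?birthdate=ge1987-01-01&birthdate=le1987-12-31
```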
### Search result parameters
-To help manage the returned resources, there are search result parameters that you can use in your search. For details on how to use each of the search result parameters, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website.
+FHIR specifies a set of search result parameters to help manage the information returned from a search. For detailed information on how to use search result parameters in FHIR, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website. Below is a list of FHIR search result parameters and their support in the FHIR service.
-| **Search result parameters** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
+| **Search result parameters** | **FHIR service in Azure Health Data Services** | **Azure API for FHIR** | **Comment**|
| - | -- | - | - |
-| _elements | Yes | Yes |
-| _count | Yes | Yes | _count is limited to 1000 resources. If it's set higher than 1000, only 1000 will be returned and a warning will be returned in the bundle. |
-| _include | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
-| _revinclude | Yes | Yes |Included items are limited to 100. _revinclude on PaaS and OSS on Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
-| _summary | Yes | Yes |
-| _total | Partial | Partial | _total=none and _total=accurate |
-| _sort | Partial | Partial | sort=_lastUpdated is supported on Azure API for FHIR and the FHIR service. For the FHIR service and the OSS SQL DB FHIR servers, sorting by strings and dateTime fields are supported. For Azure API for FHIR and OSS Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |
-| _contained | No | No |
-| _containedType | No | No |
-| _score | No | No |
+| `_elements` | Yes | Yes |
+| `_count` | Yes | Yes | `_count` is limited to 1000 resources. If it's set higher than 1000, only 1000 will be returned and a warning will be included in the bundle. |
+| `_include` | Yes | Yes | Items retrieved with `_include` are limited to 100. `_include` on PaaS and OSS on Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
+| `_revinclude` | Yes | Yes |Items retrieved with `_revinclude` are limited to 100. `_revinclude` on PaaS and OSS on Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319). |
+| `_summary` | Yes | Yes |
+| `_total` | Partial | Partial | `_total=none` and `_total=accurate` |
+| `_sort` | Partial | Partial | `_sort=_lastUpdated` is supported on the FHIR service. For the FHIR service and the OSS SQL DB FHIR servers, sorting by strings and dateTime fields is supported. For Azure API for FHIR and OSS Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |
+| `_contained` | No | No |
+| `_containedType` | No | No |
+| `_score` | No | No |
> [!NOTE]
-> By default `_sort` sorts the record in ascending order. You can use the prefix `'-'` to sort in descending order. In addition, the FHIR service and the Azure API for FHIR only allow you to sort on a single field at a time.
+> By default, `_sort` arranges records in ascending order. You can also use the prefix `-` to sort in descending order. The FHIR service only allows you to sort on a single field at a time.
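For example, to list patients with the most recently updated records first:

```rest
GET {{FHIR_URL}}/Patient?_sort=-_lastUpdated
```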
-By default, the FHIR service in the Azure Health Data Services is set to lenient handling. This means that the server will ignore any unknown or unsupported parameters. If you want to use strict handling, you can use the **Prefer** header and set `handling=strict`.
+By default, the FHIR service in Azure Health Data Services is set to lenient handling. This means that the server will ignore any unknown or unsupported parameters. If you want to use strict handling, you can include the `Prefer` header and set `handling=strict`.
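For example, with strict handling, the following request would return an error instead of silently ignoring the unrecognized parameter (`favoriteColor` here is a made-up parameter for illustration):

```rest
GET {{FHIR_URL}}/Patient?favoriteColor=blue
Prefer: handling=strict
```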
## Chained & reverse chained searching
-A [chained search](https://www.hl7.org/fhir/search.html#chaining) allows you to search using a search parameter on a resource referenced by another resource. For example, if you want to find encounters where the patient's name is Jane, use:
+A [chained search](https://www.hl7.org/fhir/search.html#chaining) allows you to perform fine-targeted queries for resources that have a reference to another resource. For example, if you want to find encounters where the patient's name is Jane, use:
`GET {{FHIR_URL}}/Encounter?subject:Patient.name=Jane`
-Similarly, you can do a reverse chained search. This allows you to get resources where you specify criteria on other resources that refer to them. For more examples of chained and reverse chained search, refer to the [FHIR search examples](search-samples.md) page.
+The `.` in the above request steers the path of the chained search to the target parameter (`name` in this case).
+
+Similarly, you can do a reverse chained search with the `_has` parameter. This allows you to retrieve resource instances by specifying criteria on other resources that reference the resources of interest. For examples of chained and reverse chained search, refer to the [FHIR search examples](search-samples.md) page.
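For instance, the following `_has` request returns patients that are referenced by an `Observation` with a specific code:

```rest
GET {{FHIR_URL}}/Patient?_has:Observation:patient:code=527
```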
## Pagination
-As mentioned above, the results from a search will be a paged bundle. By default, the search will return 10 results per page, but this can be increased (or decreased) by specifying `_count`. Within the bundle, there will be a self link that contains the current result of the search. If there are more matches, the bundle will contain a next link. You can continue to use the next link to get the subsequent pages of results. `_count` is limited to 1000 items or less.
+As mentioned above, the results from a FHIR search are returned in paginated form as a `searchset` bundle. By default, the FHIR service displays 10 search results per page, but this can be increased (or decreased) by setting the `_count` parameter. If there are more matches than fit on one page, the bundle will include a `next` link. Repeatedly fetching the `next` link yields the subsequent pages of results. Note that the `_count` parameter value cannot exceed 1000.
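For example, to request up to 100 results per page:

```rest
GET {{FHIR_URL}}/Patient?_count=100
```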
-Currently, FHIR service in Azure Health Data Services only supports the next link in bundles, and it doesn't support first, last, or previous links.
+Currently, the FHIR service in Azure Health Data Services only supports the `next` link and doesn't support `first`, `last`, or `previous` links in bundles returned from a search.
## Next steps
-Now that you've learned about the basics of search, see the search samples page for details about how to search using different search parameters, modifiers, and other FHIR search scenarios. To read about FHIR search examples, see
+Now that you've learned about the basics of FHIR search, see the search samples page for details about how to search using search parameters, modifiers, and other FHIR search methods.
>[!div class="nextstepaction"] >[FHIR search examples](search-samples.md)
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/search-samples.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 # FHIR search examples
-Below are some examples of using Fast Healthcare Interoperability Resources (FHIR&#174;) search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a `POST` request. For more information about search, see [Overview of FHIR Search](overview-of-search.md).
+Below are some examples of Fast Healthcare Interoperability Resources (FHIR&#174;) search API calls featuring various search parameters, modifiers, chained and reverse chained searches, composite searches, `POST` search requests, and more. For a general introduction to FHIR search concepts, see [Overview of FHIR Search](overview-of-search.md).
## Search result parameters
-### _include
+### `_include`
-`_include` searches for resources that include the specified parameter of the resource. For example, you can search across `MedicationRequest` resources to find only the ones that include information about the prescriptions for a specific patient, which is the `reference` parameter `patient`. In the example below, this will pull all the `MedicationRequests` and all patients that are referenced from the `MedicationRequests`:
+`_include` lets you search for resource instances and include in the results other resources referenced by the target resource instances. For example, you can use `_include` to query for `MedicationRequest` resources and limit the search to prescriptions for a specific patient. The FHIR service would then return the `MedicationRequest` resources as well as the referenced `Patient` resource. In the example below, the request will pull all `MedicationRequest` resource instances in the database and all patients that are referenced by the `MedicationRequest` instances:
```rest
- GET [your-fhir-server]/MedicationRequest?_include=MedicationRequest:patient
+ GET {{FHIR_URL}}/MedicationRequest?_include=MedicationRequest:patient
``` > [!NOTE]
-> **_include** and **_revinclude** are limited to 100 items.
+> The FHIR service in Azure Health Data Services limits searches with `_include` and `_revinclude` to return a maximum of 100 items.
-### _revinclude
+### `_revinclude`
-`_revinclude` allows you to search the opposite direction as `_include`. For example, you can search for patients and then reverse include all encounters that reference the patients:
+`_revinclude` allows you to search for resource instances and include in the results other resources that reference the target resource instances. For example, you can search for patients and then reverse include all encounters that reference the patients:
```rest
-GET [your-fhir-server]/Patient?_revinclude=Encounter:subject
+GET {{FHIR_URL}}/Patient?_revinclude=Encounter:subject
```
-### _elements
+### `_elements`
-`_elements` narrows down the search result to a subset of fields to reduce the response size by omitting unnecessary data. The parameter accepts a comma-separated list of base elements:
+`_elements` narrows the information in the search results to a subset of the elements defined for a resource type. The `_elements` parameter accepts a comma-separated list of base elements:
```rest
-GET [your-fhir-server]/Patient?_elements=identifier,active
+GET {{FHIR_URL}}/Patient?_elements=identifier,active
```
-In this request, you'll get back a bundle of patients, but each resource will only include the identifier(s) and the patient's active status. Resources in this returned response will contain a `meta.tag` value of `SUBSETTED` to indicate that they're an incomplete set of results.
+In the above request, you'll receive a bundle of patients, but each entry will only include the identifier(s) and the patient's active status. The entries in the response will contain a `meta.tag` value of `SUBSETTED` to indicate that not all elements defined for the resource are included.
## Search modifiers
-### :not
+### `:not`
-`:not` allows you to find resources where an attribute isn't true. For example, you could search for patients where the gender isn't female:
+`:not` allows you to find resources with an element that does not have a given value. For example, you could search for patients who are not female:
```rest
-GET [your-fhir-server]/Patient?gender:not=female
-
+GET {{FHIR_URL}}/Patient?gender:not=female
```
-As a return value, you would get all patient entries where the gender isn't female, including empty values (entries specified without gender). This is different than searching for Patients where gender is male, since that wouldn't include the entries without a specific gender.
+In return, you would get all `Patient` resources whose `gender` element value is not `female`, including any patients with no gender value specified. This is different from searching for `Patient` resources with the `male` gender value since that would ignore patients with no specified gender.
-### :missing
+### `:missing`
-`:missing` returns all resources that don't have a value for the specified element when the value is `true`, and returns all the resources that contain the specified element when the value is `false`. For simple data type elements, `:missing=true` will match on all resources where the element is present with extensions but has an empty value. For example, if you want to find all `Patient` resources that are missing information on birth date, you can do:
+`:missing` returns all resources that don't have a value for the specified element when `:missing=true`. Additionally, `:missing` returns all resources that contain the specified element when `:missing=false`. For simple data type elements, `:missing=true` will match on all resources where an element is present but has an empty value. For example, if you want to find all `Patient` resources that are missing information on `birthdate`, you can call:
```rest
-GET [your-fhir-server]/Patient?birthdate:missing=true
+GET {{FHIR_URL}}/Patient?birthdate:missing=true
```
-### :exact
-`:exact` is used for `string` parameters, and returns results that match the parameter precisely, such as in casing and character concatenating.
+### `:exact`
+`:exact` is used to search for elements with `string` data types and returns a match only if the parameter value precisely matches the case and full character sequence of the element value.
```rest
-GET [your-fhir-server]/Patient?name:exact=Jon
+GET {{FHIR_URL}}/Patient?name:exact=Jon
```
-This request returns `Patient` resources that have the name exactly the same as `Jon`. If the resource had patients with names such as `Jonathan` or `joN`, the search would ignore and skip the resource as it doesn't exactly match the specified value.
+This request returns `Patient` resources that have the `given` or `family` name of `Jon`. If there were patients with names such as `Jonathan` or `JON`, the search would ignore those resources as their names do not match the specified value exactly.
-### :contains
-`:contains` is used for `string` parameters and searches for resources with partial matches of the specified value anywhere in the string within the field being searched. `contains` is case insensitive and allows character concatenating. For example:
+### `:contains`
+`:contains` is used to query for `string` type elements and allows for matches with the specified value anywhere within the field. `:contains` is case insensitive and matches even when the specified value is embedded within a longer string. For example:
```rest
-GET [your-fhir-server]/Patient?address:contains=Meadow
+GET {{FHIR_URL}}/Patient?address:contains=Meadow
```
-This request would return you all `Patient` resources with `address` fields that have values that contain the string "Meadow". This means you could have addresses that include values such as "Meadowers" or "59 Meadow ST" returned as search results.
+This request would return all `Patient` resources with `address` element fields that contain the string "Meadow" (case insensitive). This means you could have addresses with values such as "Meadows Lane", "Pinemeadow Place", or "Meadowlark St" that return positive matches.
## Chained search
-To perform a series of search operations that cover multiple reference parameters, you can "chain" the series of reference parameters by appending them to the server request one by one using a period `.`. For example, if you want to view all `DiagnosticReport` resources with a `subject` reference to a `Patient` resource that includes a particular `name`:
+To perform search operations that cover elements contained within a referenced resource, you can "chain" a series of parameters together with `.`. For example, if you want to view all `DiagnosticReport` resources with a `subject` reference to a patient specified by `name`:
```rest
- GET [your-fhir-server]/DiagnosticReport?subject:Patient.name=Sarah
+ GET {{FHIR_URL}}/DiagnosticReport?subject:Patient.name=Sarah
```
-This request would return all the `DiagnosticReport` resources with a patient subject named "Sarah". The period `.` after the field `Patient` performs the chained search on the reference parameter of the `subject` parameter.
+This request would return all `DiagnosticReport` resources with a patient subject named "Sarah". The `.` points the chained search to the `name` element within the referenced `Patient` resource.
-Another common use of a regular search (not a chained search) is finding all encounters for a specific patient. `Patient`s will often have one or more `Encounter`s with a subject. To search for all `Encounter` resources for a `Patient` with the provided `id`:
+Another common use of FHIR search is finding all encounters for a specific patient. To do a regular (non-chained) search for `Encounter` resources that reference a `Patient` with a given `id`:
```rest
-GET [your-fhir-server]/Encounter?subject=Patient/78a14cbe-8968-49fd-a231-d43e6619399f
+GET {{FHIR_URL}}/Encounter?subject=Patient/78a14cbe-8968-49fd-a231-d43e6619399f
```
-Using chained search, you can find all the `Encounter` resources that match a particular piece of `Patient` information, such as the `birthdate`:
+Using chained search, you can find all `Encounter` resources that reference patients whose details match a search parameter. The example below demonstrates how to search for encounters referencing patients narrowed by `birthdate`:
```rest
-GET [your-fhir-server]/Encounter?subject:Patient.birthdate=1987-02-20
+GET {{FHIR_URL}}/Encounter?subject:Patient.birthdate=1987-02-20
```
-This would allow not just searching `Encounter` resources for a single patient, but across all patients that have the specified birth date value.
+This would return all `Encounter` instances that reference patients with the specified `birthdate` value.
-In addition, chained search can be done more than once in one request by using the symbol `&`, which allows you to search for multiple conditions in one request. In such cases, chained search "independently" searches for each parameter, instead of searching for conditions that only satisfy all the conditions at once:
+In addition, you can combine multiple chained searches in one request by using the `&` operator. In such cases, each chained parameter is evaluated "independently" rather than as a group:
```rest
-GET [your-fhir-server]/Patient?general-practitioner:Practitioner.name=Sarah&general-practitioner:Practitioner.address-state=WA
+GET {{FHIR_URL}}/Patient?general-practitioner:Practitioner.name=Sarah&general-practitioner:Practitioner.address-state=WA
```
-This would return all `Patient` resources that have "Sarah" as the `generalPractitioner` and have a `generalPractitioner` that has the address with the state WA. In other words, if a patient had Sarah from the state NY and Bill from the state WA both referenced as the patient's `generalPractitioner`, the would be returned.
+This would return all `Patient` resources that have a reference to "Sarah" as a `generalPractitioner` plus a reference to a `generalPractitioner` that has an address in the state of Washington. In other words, if a patient had a `generalPractitioner` named Sarah from New York state and another `generalPractitioner` named Bill from Washington state, this would meet the conditions for a positive match when doing this search.
-For scenarios in which the search has to be an AND operation that covers all conditions as a group, refer to the **composite search** example below.
+For scenarios in which the search criteria must apply a logical AND that strictly checks for paired element values, refer to the **composite search** examples below.
-## Reverse chain search
+## Reverse chained search
-Chain search lets you search for resources based on the properties of resources they refer to. Using reverse chain search, allows you do it the other way around. You can search for resources based on the properties of resources that refer to them, using `_has` parameter. For example, `Observation` resource has a search parameter `patient` referring to a Patient resource. To find all Patient resources that are referenced by `Observation` with a specific `code`:
+Using reverse chained search in FHIR allows you to search for target resource instances referenced by other resources. In other words, you can search for resources based on the properties of resources that refer to them. This is accomplished with the `_has` parameter. For example, the `Observation` resource has a search parameter `patient` that checks for a reference to a `Patient` resource. To find all `Patient` resources that are referenced by an `Observation` with a specific `code`:
```rest
-GET [base]/Patient?_has:Observation:patient:code=527
+GET {{FHIR_URL}}/Patient?_has:Observation:patient:code=527
```
-This request returns Patient resources that are referred by `Observation` with the code `527`.
+This request returns `Patient` resources that are referenced by `Observation` resources with the code `527`.
-In addition, reverse chain search can have a recursive structure. For example, if you want to search for all patients that have `Observation` where the observation has an audit event from a specific user `janedoe`, you could do:
+In addition, reverse chained search can have a recursive structure. For example, if you want to search for all patients referenced by an `Observation` where the observation is referenced by an `AuditEvent` from a specific practitioner named `janedoe`:
```rest
-GET [base]/Patient?_has:Observation:patient:_has:AuditEvent:entity:agent:Practitioner.name=janedoe
+GET {{FHIR_URL}}/Patient?_has:Observation:patient:_has:AuditEvent:entity:agent:Practitioner.name=janedoe
``` ## Composite search
-To search for resources that meet multiple conditions at once, use composite search that joins a sequence of single parameter values with a symbol `$`. The returned result would be the intersection of the resources that match all of the conditions specified by the joined search parameters. Such search parameters are called composite search parameters, and they define a new parameter that combines the multiple parameters in a nested structure. For example, if you want to find all `DiagnosticReport` resources that contain `Observation` with a potassium value less than or equal to 9.2:
+To search for resources that contain elements grouped together as logically connected pairs, FHIR defines composite search, which joins single parameter values together with the `$` operator to make a connected pair of parameters. In a composite search, a positive match occurs when the intersection of element values satisfies all of the conditions set in the paired search parameters. For example, if you want to find all `DiagnosticReport` resources that contain a potassium value less than `9.2`:
```rest
-GET [your-fhir-server]/DiagnosticReport?result.code-value-quantity=2823-3$lt9.2
+GET {{FHIR_URL}}/DiagnosticReport?result.code-value-quantity=2823-3$lt9.2
```
-This request specifies the component containing a code of `2823-3`, which in this case would be potassium. Following the `$` symbol, it specifies the range of the value for the component using `lt` for "less than or equal to" and `9.2` for the potassium value range.
+The paired elements in this case would be the `code` element (from an `Observation` resource referenced as the `result`) and the `value` element connected with the `code`. Following the code with the `$` operator sets the `value` condition as `lt` (for "less than") `9.2` (for the potassium mmol/L value).
-Composite search parameters can also be used to filter multiple component code value quantities with an OR. For example, to express the query to find diastolic blood pressure greater than 90 OR systolic blood pressure greater than 140:
+Composite search parameters can also be used to filter multiple component code value quantities with a logical OR. For example, to query for observations with diastolic blood pressure greater than 90 OR systolic blood pressure greater than 140:
```rest
-GET [your-fhir-server]/Observation?component-code-value-quantity=http://loinc.org|8462-4$gt90,http://loinc.org|8480-6$gt140
+GET {{FHIR_URL}}/Observation?component-code-value-quantity=http://loinc.org|8462-4$gt90,http://loinc.org|8480-6$gt140
```
-## Search the next entry set
+Note how `,` functions as the logical OR operator between the two conditions.
+
+## View the next entry set
-The maximum number of entries that can be returned per a single search query is 1000. However, you might have more than 1000 entries that match the search query, and you might want to see the next set of entries after the first 1000 entries that were returned. In such case, you would use the continuation token `url` value in `searchset` as in the `Bundle` result below:
+The maximum number of resources that can be returned at once from a search query is 1000. However, you might have more than 1000 resource instances that match the search query and you want to retrieve the next set of results after the first 1000 entries. In such a case, you would use the continuation (i.e. `"next"`) token `url` value in the `searchset` bundle returned from the search:
```json "resourceType": "Bundle",
The maximum number of entries that can be returned per a single search query is
"link": [ { "relation": "next",
- "url": "[your-fhir-server]/Patient?_sort=_lastUpdated&ct=WzUxMDAxNzc1NzgzODc5MjAwODBd"
+ "url": "{{FHIR_URL}}/Patient?_sort=_lastUpdated&ct=WzUxMDAxNzc1NzgzODc5MjAwODBd"
}, { "relation": "self",
- "url": "[your-fhir-server]/Patient?_sort=_lastUpdated"
+ "url": "{{FHIR_URL}}/Patient?_sort=_lastUpdated"
} ], ```
-And you would do a GET request for the provided URL under the field `relation: next`:
+You would make a `GET` request for the provided URL:
```rest
-GET [your-fhir-server]/Patient?_sort=_lastUpdated&ct=WzUxMDAxNzc1NzgzODc5MjAwODBd
+GET {{FHIR_URL}}/Patient?_sort=_lastUpdated&ct=WzUxMDAxNzc1NzgzODc5MjAwODBd
```
-This will return the next set of entries for your search result. The `searchset` is the complete set of search result entries, and the continuation token `url` is the link provided by the server for you to retrieve the entries that don't show up on the first set because the restriction on the maximum number of entries returned for a search query.
+This would return the next set of entries for your search results. The `searchset` bundle is the complete set of search result entries, and the continuation token `url` is the link provided by the FHIR service to retrieve the entries that don't fit in the first subset (due to the restriction on the maximum number of entries returned for one page).
-## Search using POST
+## Search using `POST`
-All of the search examples mentioned above have used `GET` requests. You can also do search operations using `POST` requests using `_search`:
+All of the search examples mentioned above use `GET` requests. However, you can also make FHIR search API calls using `POST` with the `_search` parameter:
```rest
-POST [your-fhir-server]/Patient/_search?_id=45
+POST {{FHIR_URL}}/Patient/_search?_id=45
```
-This request would return all `Patient` resources with the `id` value of 45. Just as in GET requests, the server determines which of the set of resources meets the condition(s), and returns a bundle resource in the HTTP response.
+This request would return the `Patient` resource instance with the given `id` value. Just as with `GET` requests, the server determines which resource instances satisfy the condition(s) and returns a bundle in the HTTP response.
-Another example of searching using POST where the query parameters are submitted as a form body is:
+Another feature of searching with `POST` is that it lets you submit the query parameters as a form body:
```rest
-POST [your-fhir-server]/Patient/_search
+POST {{FHIR_URL}}/Patient/_search
content-type: application/x-www-form-urlencoded

name=John
```

## Next steps
-In this article, you learned about how to search using different search parameters, modifiers, and other search tools for FHIR. For more information about FHIR search, see
+In this article, you learned about searching in FHIR using search parameters, modifiers, and other methods. For more information about FHIR search, see
>[!div class="nextstepaction"] >[Overview of FHIR Search](overview-of-search.md)
healthcare-apis Iot Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-overview.md
Title: What is the MedTech service? - Azure Health Data Services description: In this article, you'll learn about the MedTech service, its features, functions, integrations, and next steps. -+ Previously updated : 07/19/2022- Last updated : 08/25/2022+
-# What is the MedTech service?
+# What is MedTech service?
## Overview
-The MedTech service is an optional service of the Azure Health Data Services designed to ingest health data from multiple and disparate Internet of Medical Things (IoMT) devices and persisting the health data in a Fast Healthcare Interoperability Resources (FHIR&#174;) service.
+MedTech service in Azure Health Data Services is a platform as a service (PaaS) that enables you to gather data from diverse medical devices and convert it into a Fast Healthcare Interoperability Resources (FHIR&#174;) service format. MedTech service's device data translation capabilities make it possible to convert a wide variety of data into a unified FHIR format that provides secure health data management in a cloud environment.
-The MedTech service is important because health data collected from patients and health care consumers can be fragmented from access across multiple systems, device types, and formats. Managing healthcare data can be difficult, however, trying to gain insight from the data can be one of the biggest barriers to population and personal wellness understanding and sustaining health.
+MedTech service is important because healthcare data can be difficult to access or lost when it comes from diverse or incompatible devices, systems, or formats. If medical information isn't easy to access, it can hinder clinical insights and negatively affect a patient's health and wellness. The ability to translate many types of medical device data into a unified FHIR format enables MedTech service to successfully link devices, health data, labs, and remote and in-person care to support the clinician, care team, patient, and family. As a result, this capability can facilitate the discovery of important clinical insights and trend capture. It can also help make connections to new device applications and enable advanced research projects.
+## How MedTech service works
-The MedTech service transforms device data into FHIR-based Observation resources and then persists the transformed messages into the Azure Health Data Services FHIR service. Allowing for a unified approach to health data access, standardization, and trend capture enabling the discovery of operational and clinical insights, connecting new device applications, and enabling new research projects.
+The following diagram outlines the basic elements of how MedTech service transforms medical device data into a standardized FHIR resource in the cloud.
-Below is an overview of what the MedTech service does after IoMT device data is received. Each step will be further explained in the [The MedTech service data flows](./iot-data-flow.md) article.
-> [!NOTE]
-> Learn more about [Azure Event Hubs](../../event-hubs/index.yml) use cases, features and architectures.
+These elements are:
+### Deployment
-## Scalable
+In order to implement MedTech service, you need to have an Azure subscription and set up a workspace and namespace to deploy three Azure services: Event Hubs service, MedTech service, and FHIR service.
-The MedTech service is designed out-of-the-box to support growth and adaptation to the changes and pace of healthcare by using autoscaling features. The service enables developers to modify and extend the capabilities to support more device mapping template types and FHIR resources.
+### Devices
-## Configurable
+When the PaaS deployment is completed, high-velocity and low-velocity patient medical data can be collected from a wide range of JSON-compatible IoMT devices, systems, and formats.
-The MedTech service is configured by using [Device](./how-to-use-device-mappings.md) and [FHIR destination](./how-to-use-fhir-mappings.md) mappings. The mappings instruct the filtering and transformation of your IoMT device messages into the FHIR format.
+### Event Hubs service
-The different points for extension are:
-* Normalization: Health data from disparate devices can be aligned and standardized into a common format to make sense of the data from a unified lens and capture trends.
-* FHIR conversion: Health data is normalized and grouped by mapping commonalities to FHIR. Observations can be created or updated according to chosen or configured templates. Devices and health care consumers can be linked for enhanced insights and trend capture.
+ IoMT data is then sent from a device over the Internet to Event Hubs service, which holds it temporarily in the cloud. The event hub can asynchronously process millions of data points per second, eliminating data traffic jams and making it possible to handle huge amounts of information in real time.
-> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
+### MedTech service
-## Extensible
+When device data has been loaded into Event Hubs service, MedTech service is able to pick it up and convert it into a unified FHIR format in five stages.
-The MedTech service may also be used with our [open-source projects](./iot-git-projects.md) for ingesting IoMT device data from the following wearables:
-* Fitbit&#174;
-* Apple&#174;
-* Google&#174;
+These stages are:
-The MedTech service may also be used with the following Microsoft solutions to provide more functionalities and insights:
- * [Azure Machine Learning Service](./iot-connector-machine-learning.md)
- * [Microsoft Power BI](./iot-connector-power-bi.md)
- * [Microsoft Teams](./iot-connector-teams.md)
-
-## Secure
-The MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for granular security and access control of your MedTech service assets.
+1. **Ingest** - MedTech service asynchronously loads the device data from the event hub at very high speed.
+
+2. **Normalize** - After the data has been ingested, MedTech service uses device mapping to streamline and process it into a normalized schema format.
+
+3. **Group** - The normalized data is then grouped by parameters to prepare it for the next stage of processing. The parameters are: device identity, measurement type, time period, and (optionally) correlation ID.
+
+4. **Transform** - When the normalized data is grouped, it is transformed through FHIR destination mapping templates and is ready to become FHIR Observation resources.
+
+5. **Persist** - After the transformation is done, the new data is sent to FHIR service and persisted as an Observation resource.
+
+### FHIR service
+
+MedTech service data processing is complete when the new FHIR Observation resource is successfully persisted and saved into the FHIR service. Now it's ready for use by the care team, clinician, laboratory, or research facility.
+
+## Key features of MedTech service
+
+MedTech service has many features that make it very secure, configurable, scalable, and extensible.
+
+### Secure
+
+MedTech service delivers your data to FHIR service in Azure Health Data Services, ensuring that your protected health information (PHI) has unparalleled security and advanced threat protection. The FHIR service isolates your data in a unique database per API instance and protects it with multi-region failover. In addition, MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and control of your MedTech service assets.
+
+### Configurable
+
+Your MedTech service can be customized and configured by using [Device](./how-to-use-device-mappings.md) and [FHIR destination](./how-to-use-fhir-mappings.md) mappings to define the filtering and transformation of your data into FHIR Observation resources.
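At a high level, a device mapping is a JSON template that tells MedTech service how to extract values from incoming device messages. The following is a minimal sketch rather than a production mapping, and it assumes device messages that carry `heartRate`, `deviceId`, and `endDate` fields:

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "JsonPathContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartRate)]",
        "deviceIdExpression": "$.deviceId",
        "timestampExpression": "$.endDate",
        "values": [
          {
            "valueName": "hr",
            "valueExpression": "$.heartRate",
            "required": "true"
          }
        ]
      }
    }
  ]
}
```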
+
+Useful options could include:
+
+- Linking Devices and health care consumers together for enhanced insights, trend capture, interoperability between systems, and proactive and remote monitoring.
+
+- Observations that can be created or updated according to existing or new templates.
+
+- Being able to choose health data terms that work best for your organization and provide consistency in device data ingestion. For example, you could use "hr", "heart rate", or "Heart Rate" to define heart rate information.
+
+- Facilitating customization, editing, testing, and troubleshooting of MedTech service Device and FHIR destination mappings with the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) open-source tool.
+
+### Scalable
+
+The MedTech service has autoscaling features to support growth, and it enables developers to modify and extend its capabilities to support new device mapping template types and FHIR resources.
+
+### Extensible
+
+The MedTech service may also be integrated into our [open-source projects](./iot-git-projects.md) for ingesting IoMT device data from these wearables:
+
+- Fitbit&#174;
+
+- Apple&#174;
+
+- Google&#174;
+
+The following Microsoft solutions can leverage MedTech service for additional functionality:
+
+- [**Microsoft Azure IoT Hub**](../../iot-hub/iot-concepts-and-iot-hub.md) - enhances workflow and ease of use.
+
+- [**Azure Machine Learning Service**](./iot-connector-machine-learning.md) - helps build, deploy, and manage models, integrate tools, and increase open-source operability.
+
+- [**Microsoft Power BI**](./iot-connector-power-bi.md) - enables data visualization features.
+
+- [**Microsoft Teams**](./iot-connector-teams.md) - facilitates virtual visits.
## Next steps
-In this article, you learned about the MedTech service. To learn about the MedTech service data flows and how to deploy the MedTech service in the Azure portal, see
+In this article, you learned about the MedTech service. To learn more about the MedTech service data flow and how to deploy the MedTech service in the Azure portal, see
>[!div class="nextstepaction"] >[The MedTech service data flows](./iot-data-flow.md)
In this article, you learned about the MedTech service. To learn about the MedTe
>[!div class="nextstepaction"] >[Deploy the MedTech service using the Azure portal](./deploy-iot-connector-in-azure.md)
+>[!div class="nextstepaction"]
+>[Frequently asked questions about the MedTech service](./iot-connector-faqs.md)
+ FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
Make a note of the location of these files. You need it later.
1. Open the enrollment group you created and select **Manage Primary**.
-1. Select file option to upload the root certificate file called _mytestrootcert_cert.pem_ that you generated previously:
-
- ![Certificate Upload](./media/how-to-connect-devices-x509/certificate-upload.png)
+1. Select the file option to upload the root certificate file called _mytestrootcert_cert.pem_ that you generated previously.
1. To complete the verification, generate the verification code, copy it, and then use it to create an X.509 verification certificate at the command prompt:
These commands produce the following device certificates:
1. Open the device you created and select **Connect**.
-1. Select **Individual Enrollments** as the **Connect Method** and **Certificates (X.509)** as the mechanism:
-
- ![Individual enrollment](./media/how-to-connect-devices-x509/individual-device-connect.png)
+1. Select **Individual Enrollments** as the **Connect Method** and **Certificates (X.509)** as the mechanism.
1. Select the file option under primary and upload the certificate file called _mytestselfcertprimary_cert.pem_ that you generated previously.
iot-central Howto Connect Eflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-eflow.md
To enable an operator to view the telemetry from the device, define a view in th
1. Select **Save** to save the **View IoT Edge device telemetry** view.

### Publish the template

Before you can add a device that uses the **Environmental Sensor Edge Device** template, you must publish the template.
-Navigate to the **Environmental Sensor Edge Device** template and select **Publish**. On the **Publish this device template to the application** panel, select **Publish** to publish the template:
-
+Navigate to the **Environmental Sensor Edge Device** template and select **Publish**. On the **Publish this device template to the application** panel, select **Publish** to publish the template.
## Add an IoT Edge device
iot-central Howto Connect Rigado Cascade 500 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-rigado-cascade-500.md
To onboard a Cascade 500 gateway device into your Azure IoT Central application
To add a Cascade 500 device template:
-1. Navigate to the **Device Templates** tab in the left pane, select **+ New**:
-
- ![Create new device template](./media/howto-connect-rigado-cascade-500/device-template-new.png)
+1. Navigate to the **Device Templates** tab in the left pane, and select **+ New**.
1. The page gives you an option to **Create a custom template** or **Use a preconfigured device template**.
-1. Select the C500 device template from the list of preconfigured device templates as shown below:
-
- ![Select C500 device template](./media/howto-connect-rigado-cascade-500/device-template-preconfigured.png)
+1. Select the C500 device template from the list of preconfigured device templates.
1. Select **Next: Customize** to continue to the next step.
iot-central Howto Connect Ruuvi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-ruuvi.md
To onboard a RuuviTag sensor into your Azure IoT Central application instance, y
To add a RuuviTag device template:
-1. Navigate to the **Device Templates** tab in the left pane, select **+ New**:
+1. Navigate to the **Device Templates** tab in the left pane, select **+ New**. The page gives you an option to **Create a custom template** or **Use a preconfigured device template**.
- ![Create new device template](./media/howto-connect-ruuvi/device-template-new.png)
-
- The page gives you an option to **Create a custom template** or **Use a preconfigured device template**.
-
-1. Select the RuuviTag Multisensor device template from the list of preconfigured device templates:
-
- ![Select RuuviTag device template](./media/howto-connect-ruuvi/device-template-pre-configured.png)
+1. Select the RuuviTag Multisensor device template from the list of preconfigured device templates.
1. Select **Next: Customize** to continue to the next step.
iot-central Howto Create And Manage Applications Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-and-manage-applications-csp.md
You land on the Azure IoT Central Application Manager page. Azure IoT Central ke
![Create Manager for CSPs](media/howto-create-and-manage-applications-csp/image3.png)
-To create an Azure IoT Central application, select **Build** in the left menu. Choose one of the industry templates, or choose **Custom app** to create an application from scratch. This will load the Application Creation page. You must complete all the fields on this page and then choose **Create**. You find more information about each of the fields below.
-
-![Screenshot that shows the "Build your IoT application" page with the "Build" button selected.](media/howto-create-and-manage-applications-csp/image4.png)
-
-![Create Application Page for CSPs](media/howto-create-and-manage-applications-csp/image4-1.png)
-
-![Create Application Page for CSPs Billing Info](media/howto-create-and-manage-applications-csp/image4-2.png)
+To create an Azure IoT Central application, select **Build** in the left menu. Choose one of the industry templates, or choose **Custom app** to create an application from scratch. This will load the Application Creation page. You must complete all the fields on this page and then choose **Create**.
## Application name
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md
The easiest way to get started creating IoT Central applications is on the [Azur
The [Build](https://apps.azureiotcentral.com/build) page lets you select the application template you want to use:

- If you select **Create app**, you can provide the necessary information to create an application from the template:

:::image type="content" source="media/howto-create-iot-central-application/create-application.png" alt-text="Screenshot showing create application page for IoT Central.":::
To create an application template from an existing IoT Central application:
1. On the **Template Export** page, enter a name and description for your template. 1. Select the **Export** button to create the application template. You can now copy the **Shareable Link** that enables someone to create a new application from the template: :::image type="content" source="media/howto-create-iot-central-application/create-template-2.png" alt-text="Screenshot that shows export an application template.":::
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md
The following example shows you how to create and run a job to set the light thr
To configure a **Property** job, select a property and set its new value. A property job can set multiple properties. To configure a **Command** job, choose the command to run. To configure a **Change device template** job, select the device template to assign to the devices in the device group.
- :::image type="content" source="media/howto-manage-devices-in-bulk/configure-job.png" alt-text="Screenshot that shows selections for creating a property job called Set Light Threshold":::
- Select **Save and exit** to add the job to the list of saved jobs on the **Jobs** page. You can later return to a job from the list of saved jobs. 1. Select **Next** to move to the **Delivery Options** page. The **Delivery Options** page lets you set the delivery options for this job: **Batches** and **Cancellation threshold**.
The following example shows you how to create and run a job to set the light thr
The cancellation threshold lets you automatically cancel a job if the number of errors exceeds your set limit. The threshold can apply to all the devices in the job, or to individual batches.
- :::image type="content" source="media/howto-manage-devices-in-bulk/job-wizard-delivery-options.png" alt-text="Screenshot of job wizard delivery options page":::
-
-1. Select **Next** to move to the **Schedule** page. The **Schedule** page lets you enable a schedule to run the job in the future:
+1. Select **Next** to move to the **Schedule** page. The **Schedule** page lets you enable a schedule to run the job in the future.
Choose a recurrence option for the schedule. You can set up a job to run:
The following example shows you how to create and run a job to set the light thr
Scheduled jobs always run on the devices in a device group, even if the device group membership changes over time.
- :::image type="content" source="media/howto-manage-devices-in-bulk/job-wizard-schedule.png" alt-text="Screenshot of job wizard schedule options page":::
- 1. Select **Next** to move to the **Review** page. The **Review** page shows the job configuration details. Select **Schedule** to schedule the job: :::image type="content" source="media/howto-manage-devices-in-bulk/job-wizard-schedule-review.png" alt-text="Screenshot of scheduled job wizard review page":::
The following example shows you how to create and run a job to set the light thr
On this page, you can **Unschedule** the job or **Edit** the scheduled job. You can return to a scheduled job from the list of scheduled jobs.
- :::image type="content" source="media/howto-manage-devices-in-bulk/job-schedule-details.png" alt-text="Screenshot of scheduled job details page":::
- 1. In the job wizard, you can choose not to schedule a job and run it immediately. The following screenshot shows a job without a schedule that's ready to run immediately. Select **Run** to run the job:
- :::image type="content" source="media/howto-manage-devices-in-bulk/job-wizard-schedule-immediate.png" alt-text="Screenshot of job wizard review page":::
- 1. A job goes through *pending*, *running*, and *completed* phases. The job execution details contain result metrics, duration details, and a device list grid. When the job is complete, you can select **Results log** to download a CSV file of your job details, including the devices and their status values. This information can be useful for troubleshooting.
The following example shows you how to create and run a job to set the light thr
To stop a running job, open it and select **Stop**. The job status changes to reflect that the job is stopped. The **Summary** section shows which devices have completed, have failed, or are still pending. - When a job is in a stopped state, you can select **Continue** to resume running the job. The job status changes to reflect that the job is now running again. The **Summary** section continues to update with the latest progress. :::image type="content" source="media/howto-manage-devices-in-bulk/stopped-job.png" alt-text="Screenshot that shows a stopped job and the button for continuing a job":::
To bulk export devices from your application:
1. Select the devices that you want to export and then select the **Export** action.
- :::image type="content" source="media/howto-manage-devices-in-bulk/export-1.png" alt-text="Screenshot showing export action settings.":::
- 1. The export process starts. You can track the status using the **Device Operations** panel. 1. When the export completes, a success message is shown along with a link to download the generated file.
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md
The call to `onDeviceMethod` sets up the `commandHandler` method. This command h
1. Completes the long-running operation. 1. Uses a reported property with the same name as the command to tell IoT Central that the command completed.
-The following screenshot shows how the command response displays in the IoT Central UI when it receives the 202 response code:
-- The following screenshot shows the IoT Central UI when it receives the property update that indicates the command is complete: :::image type="content" source="media/howto-use-commands/long-running-finish.png" alt-text="Screenshot that shows long-running command finished":::
iot-central Howto Use Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md
For more information on device twins, see [Configure your devices from a back-en
When the operator sets a writable property in the Azure IoT Central application, the application uses a device twin desired property to send the value to the device. The device then responds by using a device twin reported property. When Azure IoT Central receives the reported property value, it updates the property view with a status of **Accepted**.
-The following view shows the writable properties. When you enter the value and select **Save**, the initial status is **Pending**. When the device accepts the change, the status changes to **Accepted**.
--
+When you enter the value and select **Save**, the initial status is **Pending**. When the device accepts the change, the status changes to **Accepted**.
## Use properties on unassigned devices
iot-dps How To Legacy Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-legacy-device-symm-key.md
If you can easily install a [hardware security module (HSM)](concepts-service.md
This tutorial also assumes that the device update takes place in a secure environment to prevent unauthorized access to the master group key or the derived device key.
-This tutorial is oriented toward a Windows-based workstation. However, you can perform the procedures on Linux. For a Linux example, see [How to provision for multitenancy](how-to-provision-multitenant.md).
+This tutorial is oriented toward a Windows-based workstation. However, you can perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geolatency](how-to-provision-multitenant.md).
> [!NOTE] > The sample used in this tutorial is written in C. There is also a [C# device provisioning symmetric key sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/device/SymmetricKeySample) available. To use this sample, download or clone the [azure-iot-samples-csharp](https://github.com/Azure-Samples/azure-iot-samples-csharp) repository and follow the in-line instructions in the sample code. You can follow the instructions in this tutorial to create a symmetric key enrollment group using the portal and to find the ID Scope and enrollment group primary and secondary keys needed to run the sample. You can also create individual enrollments using the sample.
iot-dps How To Provision Multitenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-provision-multitenant.md
Title: Tutorial - Provision devices for multitenancy in Azure IoT Hub Device Provisioning Service
-description: This tutorial shows how to provision devices for multitenancy with your Device Provisioning Service (DPS) instance
+ Title: Tutorial - Provision devices for geolatency in Azure IoT Hub Device Provisioning Service
+description: This tutorial shows how to provision devices for geolocation/geolatency with your Device Provisioning Service (DPS) instance
Previously updated : 08/19/2022 Last updated : 08/24/2022
-# Tutorial: Provision for multitenancy
+# Tutorial: Provision for geolatency
-This tutorial shows how to securely provision multiple simulated symmetric key devices to a group of IoT Hubs using an [allocation policy](concepts-service.md#allocation-policy). Allocation policies that are defined by the provisioning service support a variety of allocation scenarios. Two common scenarios are:
+This tutorial shows how to securely provision multiple simulated symmetric key devices to a group of IoT hubs using an [allocation policy](concepts-service.md#allocation-policy). IoT Hub Device Provisioning Service (DPS) supports a variety of allocation scenarios through its built-in allocation policies and its support for custom allocation policies.
-* **Geolocation / GeoLatency**: As a device moves between locations, network latency is improved by having the device provisioned to the IoT hub that's closest to each location. In this scenario, a group of IoT hubs, which span across regions, are selected for enrollments. The **Lowest latency** allocation policy is selected for these enrollments. This policy causes the Device Provisioning Service to evaluate device latency and determine the closet IoT hub out of the group of IoT hubs.
+Provisioning for **Geolocation/GeoLatency** is a common allocation scenario. As a device moves between locations, network latency is improved by provisioning the device to the IoT hub that's closest to each location. In this scenario, a group of IoT hubs that spans regions is selected for the enrollment. The built-in **Lowest latency** allocation policy is selected for these enrollments. This policy causes the Device Provisioning Service to evaluate device latency and determine the closest IoT hub out of the group of IoT hubs.
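For reference, the following Azure CLI sketch shows one way to create such an enrollment group with the lowest-latency policy. It assumes the `azure-iot` CLI extension is installed, and the DPS name, resource group, enrollment ID, and hub host names are placeholders for the resources this tutorial creates.

```bash
# Sketch: enrollment group using the built-in lowest-latency (geolatency)
# allocation policy across two regional hubs. All names are placeholders.
az extension add --name azure-iot

az iot dps enrollment-group create \
  --dps-name contoso-provisioning-service \
  --resource-group contoso-us-resource-group \
  --enrollment-id contoso-devices \
  --allocation-policy geolatency \
  --iot-hubs "contoso-hub-westus2.azure-devices.net contoso-hub-eastus.azure-devices.net"
```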
-* **Multi-tenancy**: Devices used within an IoT solution may need to be assigned to a specific IoT hub or group of IoT hubs. The solution may require all devices for a particular tenant to communicate with a specific group of IoT hubs. In some cases, a tenant may own IoT hubs and require devices to be assigned to their IoT hubs.
-
-It's common to combine these two scenarios. For example, a multitenant IoT solution commonly assigns tenant devices using a group of IoT hubs that are scattered across different regions. These tenant devices can be assigned to the IoT hub in the group that has the lowest latency based on geographic location.
-
-This tutorial uses a simulated device sample from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) to demonstrate how to provision devices in a multitenant scenario across regions. You will perform the following steps in this tutorial:
+This tutorial uses a simulated device sample from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) to demonstrate how to provision devices across regions. You'll perform the following steps in this tutorial:
> [!div class="checklist"] > * Use the Azure CLI to create two regional IoT hubs (**West US 2** and **East US**)
-> * Create a multitenant enrollment
+> * Create an enrollment that provisions devices based on geolocation (lowest latency)
> * Use the Azure CLI to create two regional Linux VMs to act as devices in the same regions (**West US 2** and **East US**) > * Set up the development environment for the Azure IoT C SDK on both Linux VMs
-> * Simulate the devices to see that they are provisioned for the same tenant in the closest region.
+> * Simulate the devices and verify that they're provisioned to the IoT hub in the closest region.
>[!IMPORTANT] > Some regions may, from time to time, enforce restrictions on the creation of Virtual Machines. At the time of writing this guide, the *westus2* and *eastus* regions permitted the creation of VMs. If you're unable to create in either one of those regions, you can try a different region. To learn more about choosing Azure geographical regions when creating VMs, see [Regions for virtual machines in Azure](../virtual-machines/regions.md)
This tutorial uses a simulated device sample from the [Azure IoT C SDK](https://
## Create two regional IoT hubs
-In this section, you'll create an Azure resource group, and two new regional IoT hub resources for a tenant. One IoT hub will be for the **West US 2** region and the other will be for the **East US** region.
+In this section, you'll create an Azure resource group, and two new regional IoT hub resources. One IoT hub will be for the **West US 2** region and the other will be for the **East US** region.
>[!IMPORTANT]
->It's recommended that you use the same resource group for all resources created in this tutorial. This will make clean up easier after you are finished.
+>It's recommended that you use the same resource group for all resources created in this tutorial. This will make cleanup easier after you're finished.
1. In the Azure Cloud Shell, create a resource group with the following [az group create](/cli/azure/group#az-group-create) command:
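A minimal sketch of that command, assuming the `contoso-us-resource-group` name and **East US** location used elsewhere in this tutorial:

```bash
# Sketch of the resource group creation step; the name and location are the
# ones this tutorial uses elsewhere, but any supported region works.
az group create --name contoso-us-resource-group --location eastus
```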
In this section, you'll create an Azure resource group, and two new regional IoT
This command may take a few minutes to complete.
-## Create the multitenant enrollment
+## Create an enrollment for geolatency
-In this section, you'll create a new enrollment group for the tenant devices.
+In this section, you'll create a new enrollment group for your devices.
For simplicity, this tutorial uses [Symmetric key attestation](concepts-symmetric-key-attestation.md) with the enrollment. For a more secure solution, consider using [X.509 certificate attestation](concepts-x509-attestation.md) with a chain of trust.
For simplicity, this tutorial uses [Symmetric key attestation](concepts-symmetri
5. Select **Link a new IoT Hub**
- :::image type="content" source="./media/how-to-provision-multitenant/create-multitenant-enrollment.png" alt-text="Add multitenant enrollment group for symmetric key attestation.":::
+ :::image type="content" source="./media/how-to-provision-multitenant/create-multitenant-enrollment.png" alt-text="Add enrollment group for symmetric key attestation and lowest latency.":::
6. On the **Add link to IoT hub** page, enter the following information:
For simplicity, this tutorial uses [Symmetric key attestation](concepts-symmetri
## Create regional Linux VMs
-In this section, you'll create two regional Linux virtual machines (VMs). These VMs will run a device simulation sample from each region to demonstrate device provisioning for tenant devices from both regions.
+In this section, you'll create two regional Linux virtual machines (VMs). These VMs will run a device simulation sample from each region to demonstrate device provisioning for devices from both regions.
To make clean-up easier, these VMs will be added to the same resource group that contains the IoT hubs that were created, *contoso-us-resource-group*. However, the VMs will run in separate regions (**West US 2** and **East US**).
To make clean-up easier, these VMs will be added to the same resource group that
## Prepare the Azure IoT C SDK development environment
-In this section, you'll clone the Azure IoT C SDK on each VM. The SDK contains a sample that simulates a tenant's device provisioning from each region.
+In this section, you'll clone the Azure IoT C SDK on each VM. The SDK contains a sample that simulates device provisioning from each region.
For each VM:
For **both** *eastus* and *westus 2* devices:
p3w2DQr9WqEGBLUSlFi1jPQ7UWQL4siAGy75HFTFbf8= ```
-3. Now each tenant device has their own derived device key and unique registration ID to perform symmetric key attestation with the enrollment group during the provisioning process.
+3. Now each device has its own derived device key and unique registration ID to perform symmetric key attestation with the enrollment group during the provisioning process.
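If you'd rather script the key derivation than compute it by hand, the following sketch shows the HMAC-SHA256 computation with `openssl`. The group key and registration ID are placeholders, and the group master key should be treated as a secret.

```bash
# Derive a per-device key: HMAC-SHA256 of the registration ID, keyed with the
# enrollment group primary key (both values below are placeholders).
GROUP_KEY="<enrollment-group-primary-key>"
REG_ID="contoso-simdevice-east"

keybytes=$(echo -n "$GROUP_KEY" | base64 --decode | xxd -p -u -c 1000)
echo -n "$REG_ID" | openssl sha256 -mac HMAC -macopt hexkey:"$keybytes" -binary | base64
```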
## Simulate the devices from each region
The sample code simulates a device boot sequence that sends the provisioning req
cmake --build . --target prov_dev_client_sample --config Debug ```
-8. Once the build succeeds, run **prov\_dev\_client\_sample.exe** on both VMs to simulate a tenant device from each region. Notice that each device is allocated to the tenant IoT hub closest to the simulated device's regions.
+8. Once the build succeeds, run **prov\_dev\_client\_sample** on both VMs to simulate a device from each region. Notice that each device is allocated to the IoT hub closest to the simulated device's region.
Run the simulation: ```bash
If you plan to continue working with resources created in this tutorial, you can
The steps here assume that you created all resources in this tutorial as instructed in the same resource group named **contoso-us-resource-group**. > [!IMPORTANT]
-> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the IoT Hub inside an existing resource group that contains resources you want to keep, only delete the IoT Hub resource itself instead of deleting the resource group.
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you don't accidentally delete the wrong resource group or resources. If you created the IoT Hub inside an existing resource group that contains resources you want to keep, only delete the IoT Hub resource itself instead of deleting the resource group.
> To delete the resource group by name:
iot-dps How To Use Custom Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-use-custom-allocation-policies.md
The simulated devices will use the derived device keys with each registration ID
In this section, you prepare the development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes the sample code for the simulated device. This simulated device will attempt provisioning during the device's boot sequence.
-This section is oriented toward a Windows-based workstation. For a Linux example, see the set-up of the VMs in [How to provision for multitenancy](how-to-provision-multitenant.md).
+This section is oriented toward a Windows-based workstation. For a Linux example, see the set-up of the VMs in [Tutorial: Provision for geolatency](how-to-provision-multitenant.md).
1. Download the [CMake build system](https://cmake.org/download/).
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
In this quickstart, you'll create a simulated device on your Windows machine. Th
If you're unfamiliar with the process of provisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview.
-This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [How to provision for multitenancy](how-to-provision-multitenant.md).
+This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geolatency](how-to-provision-multitenant.md).
## Prerequisites
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this quickstart, you'll create a simulated device on your Windows machine. Th
If you're unfamiliar with the process of provisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview. Also make sure you've completed the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md) before continuing.
-This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [How to provision for multitenancy](how-to-provision-multitenant.md).
+This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geolatency](how-to-provision-multitenant.md).
## Prerequisites
iot-dps Tutorial Custom Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-allocation-policies.md
contoso-heatpump-088 : 6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=
- ## Prepare an Azure IoT C SDK development environment Devices will request provisioning using provisioning sample code included in the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). In this section, you prepare the development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes the sample code for the simulated device. This simulated device will attempt provisioning during the device's boot sequence.
-This section is oriented toward a Windows-based workstation. For a Linux example, see the set-up of the VMs in [How to provision for multitenancy](how-to-provision-multitenant.md).
+This section is oriented toward a Windows-based workstation. For a Linux example, see the set-up of the VMs in [Tutorial: Provision for geolatency](how-to-provision-multitenant.md).
1. Download the [CMake build system](https://cmake.org/download/).
iot-hub Iot Hub Create Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md
To delete an IoT hub, find the IoT hub you want to delete, then choose **Delete*
## Next steps
-Follow these links to learn more about managing Azure IoT Hub:
+Learn more about managing Azure IoT Hub:
* [Message routing with IoT Hub](tutorial-routing.md) * [Monitor your IoT hub](monitor-iot-hub.md)
iot-hub Iot Hub Create Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-cli.md
This article shows you how to create an IoT hub using Azure CLI. - [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] ## Create an IoT Hub
Use the Azure CLI to create a resource group and then add an IoT hub.
[!INCLUDE [iot-hub-pii-note-naming-hub](../../includes/iot-hub-pii-note-naming-hub.md)]
-The previous command creates an IoT hub in the S1 pricing tier for which you are billed. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
+The previous command creates an IoT hub in the S1 pricing tier for which you're billed. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
## Remove an IoT Hub
-You can use Azure CLI to [delete an individual resource](/cli/azure/resource), such as an IoT hub, or delete a resource group and all its resources, including any IoT hubs.
+There are various commands to [delete an individual resource](/cli/azure/resource), such as an IoT hub, or delete a resource group and all its resources, including any IoT hubs.
To [delete an IoT hub](/cli/azure/iot/hub#az-iot-hub-delete), run the following command:
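A minimal sketch of that command, following the same placeholder style as the rest of this article:

```bash
# Sketch: delete a single IoT hub without touching the rest of the resource group.
az iot hub delete --name {your iot hub name} --resource-group {your resource group name}
```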
az group delete --name {your resource group name}
## Next steps
-To learn more about using an IoT hub, see the following articles:
+Learn more about using an IoT hub:
* [IoT Hub developer guide](iot-hub-devguide.md) * [Using the Azure portal to manage IoT Hub](iot-hub-create-through-portal.md)
iot-hub Iot Hub Create Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-powershell.md
You can use Azure PowerShell cmdlets to create and manage Azure IoT hubs. This tutorial shows you how to create an IoT hub with PowerShell.
-To complete this how-to, you need an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
To complete this how-to, you need an Azure subscription. If you don't have an Az
## Connect to your Azure subscription
-If you are using the Cloud Shell, you are already logged in to your subscription. If you are running PowerShell locally instead, enter the following command to sign in to your Azure subscription:
+If you're using the Cloud Shell, you're already logged in to your subscription. If you're running PowerShell locally instead, enter the following command to sign in to your Azure subscription:
```powershell # Log into Azure account.
The name of the IoT hub must be globally unique.
[!INCLUDE [iot-hub-pii-note-naming-hub](../../includes/iot-hub-pii-note-naming-hub.md)]
-You can list all the IoT hubs in your subscription using the [Get-AzIotHub](/powershell/module/az.IotHub/Get-azIotHub) command:
+To list all the IoT hubs in your subscription, use the [Get-AzIotHub](/powershell/module/az.IotHub/Get-azIotHub) command.
+
+This example shows the S1 Standard IoT Hub you created in the previous step.
```azurepowershell-interactive Get-AzIotHub ```
-This example shows the S1 Standard IoT Hub you created in the previous step.
-
-You can delete the IoT hub using the [Remove-AzIotHub](/powershell/module/az.iothub/remove-aziothub) command:
+To delete the IoT hub, use the [Remove-AzIotHub](/powershell/module/az.iothub/remove-aziothub) command.
```azurepowershell-interactive Remove-AzIotHub `
Remove-AzIotHub `
-Name MyTestIoTHub ```
-Alternatively, you can remove a resource group and all the resources it contains using the [Remove-AzResourceGroup](/powershell/module/az.Resources/Remove-azResourceGroup) command:
+Alternatively, to remove a resource group and all the resources it contains, use the [Remove-AzResourceGroup](/powershell/module/az.Resources/Remove-azResourceGroup) command:
```azurepowershell-interactive Remove-AzResourceGroup -Name MyIoTRG1
Remove-AzResourceGroup -Name MyIoTRG1
## Next steps
-Now you have deployed an IoT hub using a PowerShell cmdlet, if you want to explore further, check out the following articles:
+Now that you've deployed an IoT hub using a PowerShell cmdlet, explore more articles:
* [PowerShell cmdlets for working with your IoT hub](/powershell/module/az.iothub/). * [IoT Hub resource provider REST API](/rest/api/iothub/iothubresource).
-To learn more about developing for IoT Hub, see the following articles:
+Develop for IoT Hub:
* [Introduction to C SDK](iot-hub-device-sdk-c-intro.md) * [Azure IoT SDKs](iot-hub-devguide-sdks.md)
-To further explore the capabilities of IoT Hub, see:
+Explore the capabilities of IoT Hub:
* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
machine-learning Reference Yaml Job Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md
Previously updated : 03/31/2022 Last updated : 08/08/2022
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `environment` | string or object | **Required (if not using `component` field).** The environment to use for the job. This can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment use the `azureml:<environment_name>:<environment_version>` syntax or `azureml:<environment_name>@latest` (to reference the latest version of an environment). <br><br> To define an environment inline please follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties as they are not supported for inline environments. | | | | `environment_variables` | object | Dictionary of environment variable key-value pairs to set on the process where the command is executed. | | | | `distribution` | object | The distribution configuration for distributed training scenarios. One of [MpiConfiguration](#mpiconfiguration), [PyTorchConfiguration](#pytorchconfiguration), or [TensorFlowConfiguration](#tensorflowconfiguration). | | |
-| `compute` | string | Name of the compute target to execute the job on. This can be either a reference to an existing compute in the workspace (using the `azureml:<compute_name>` syntax) or `local` to designate local execution. | | `local` |
+| `compute` | string | Name of the compute target to execute the job on. This can be either a reference to an existing compute in the workspace (using the `azureml:<compute_name>` syntax) or `local` to designate local execution. **Note:** jobs in a pipeline don't support `local` as the `compute` value; see the sketch after this table. | | `local` |
| `resources.instance_count` | integer | The number of nodes to use for the job. | | `1` | | `resources.instance_type` | string | The instance type to use for the job. Applicable for jobs running on Azure Arc-enabled Kubernetes compute (where the compute target specified in the `compute` field is of `type: kubernentes`). If omitted, this will default to the default instance type for the Kubernetes cluster. For more information, see [Create and select Kubernetes instance types](how-to-attach-kubernetes-anywhere.md). | | | | `limits.timeout` | integer | The maximum time in seconds the job is allowed to run. Once this limit is reached the system will cancel the job. | | |
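To see the `compute` field in context, here's a minimal sketch that writes a command job YAML and submits it with the v2 CLI. The environment reference, cluster name, resource group, and workspace names are placeholders, not values from this article.

```bash
# Minimal command job; 'compute' references an existing cluster by name.
cat > job.yml <<'EOF'
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: echo "hello world"
environment: azureml:my-env@latest
compute: azureml:cpu-cluster
EOF

# Submit the job (resource group and workspace are placeholders).
az ml job create --file job.yml --resource-group my-rg --workspace-name my-ws
```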
machine-learning Reference Yaml Job Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md
- Previously updated : 03/31/2022 Last updated : 08/08/2022
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |
-| `type` | string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. | `uri_file`, `uri_folder` | `uri_folder` |
+| `type` | string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. [Learn more about data access.](concept-data.md)| `uri_file`, `uri_folder`, `mltable`, `mlflow_model` | `uri_folder` |
| `path` | string | The path to the data to use as input. This can be specified in a few ways: <br><br> - A local path to the data source file or folder, e.g. `path: ./iris.csv`. The data will get uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. <br><br> - An existing registered Azure ML data asset to use as the input. To reference a registered data asset use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), e.g. `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | | | `mode` | string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as a file. Azure ML will resolve the input to the mount path. <br><br> For `download` mode the data will be downloaded to the compute target. Azure ML will resolve the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifact(s) rather than mounting or downloading the data itself, you can use the `direct` mode. This will pass in the URL of the storage location as the job input. Note that in this case you are fully responsible for handling credentials to access the storage. | `ro_mount`, `download`, `direct` | `ro_mount` |
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |
-| `type` | string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. | `uri_folder` | `uri_folder` |
+| `type` | string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. | `uri_file`, `uri_folder`, `mltable`, `mlflow_model` | `uri_folder` |
| `mode` | string | Mode of how output file(s) will get delivered to the destination storage. For read-write mount mode (`rw_mount`) the output directory will be a mounted directory. For upload mode the file(s) written will get uploaded at the end of the job. | `rw_mount`, `upload` | `rw_mount` | ## Remarks
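As a remark on the input and output types above, the following sketch defines pipeline-level typed inputs and outputs and binds them into a step. The paths, asset names, environment, and cluster are placeholders; the `${{parent.*}}` syntax binds pipeline-level inputs and outputs into child jobs.

```bash
# Sketch of pipeline-level typed inputs/outputs; all names are placeholders.
cat > pipeline.yml <<'EOF'
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
inputs:
  raw_data:
    type: uri_file            # single file, uploaded at submission
    path: ./iris.csv
  training_table:
    type: mltable             # registered tabular data asset
    path: azureml:my-table@latest
outputs:
  trained_model:
    type: mlflow_model        # default would be uri_folder
jobs:
  train:
    type: command
    command: echo "train step placeholder"
    environment: azureml:my-env@latest
    compute: azureml:cpu-cluster
    inputs:
      data: ${{parent.inputs.raw_data}}
      table: ${{parent.inputs.training_table}}
    outputs:
      model: ${{parent.outputs.trained_model}}
EOF
```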
Examples are available in the [examples GitHub repository](https://github.com/Az
## Next steps - [Install and use the CLI (v2)](how-to-configure-cli.md)
+- [Create ML pipelines using components](how-to-create-component-pipelines-cli.md)
machine-learning Reference Yaml Job Sweep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-sweep.md
Previously updated : 03/31/2022 Last updated : 08/08/2022
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |
-| `type` | string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. | `uri_file`, `uri_folder` | `uri_folder` |
+| `type` | string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. [Learn more about data access.](concept-data.md)| `uri_file`, `uri_folder`, `mltable`, `mlflow_model` | `uri_folder` |
| `path` | string | The path to the data to use as input. This can be specified in a few ways: <br><br> - A local path to the data source file or folder, e.g. `path: ./iris.csv`. The data will get uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. <br><br> - An existing registered Azure ML data asset to use as the input. To reference a registered data asset use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), e.g. `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | | | `mode` | string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as a file. Azure ML will resolve the input to the mount path. <br><br> For `download` mode the data will be downloaded to the compute target. Azure ML wil resolve the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifact(s) rather than mounting or downloading the data itself, you can use the `direct` mode. This will pass in the URL of the storage location as the job input. Note that in this case you are fully responsible for handling credentials to access the storage. | `ro_mount`, `download`, `direct` | `ro_mount` |
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |
-| `type` | string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. | `uri_folder` | `uri_folder` |
+| `type` | string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. | `uri_file`, `uri_folder`, `mltable`, `mlflow_model` | `uri_folder` |
| `mode` | string | Mode of how output file(s) will get delivered to the destination storage. For read-write mount mode (`rw_mount`) the output directory will be a mounted directory. For upload mode the file(s) written will get uploaded at the end of the job. | `rw_mount`, `upload` | `rw_mount` | ## Remarks
marketplace Company Work Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/company-work-accounts.md
- Title: Company work accounts and Partner Center
-description: Find out how to link a work email account domain to Partner Center. Learn how to create a work account and use multiple accounts. See troubleshooting tips.
------ Previously updated : 06/30/2022---
-# Company work accounts and Partner Center
-
-Partner Center uses company work accounts, also known as Azure Active Directory (Azure AD) tenants, for many purposes:
--- To manage account access for multiple users-- To control permissions-- To host groups and applications-- To maintain profile data-
-If you link your company's work email account domain to your Partner Center account, your employees can sign in to Partner Center to manage marketplace offers by using their own work account usernames and passwords.
-
-## Check whether your company already has a work account
-
-If your company subscribes to a Microsoft cloud service such as Azure, Microsoft Intune, or Microsoft 365, you already have a work email account domain. You can use that work account with Partner Center.
-
-Follow these steps to check for a work account:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for and select **Azure Active Directory**, and then select **Custom domain names**.
-1. Search for your domain name in the list. If you already have a work account, the list will contain your domain name.
-
-If your company doesn't already have a work account, one will be created for you during the Partner Center enrollment process.
-
-## Set up multiple work accounts
-
-Before you decide to use an existing work account, consider how many users in the work account need to access Partner Center. If you have users in the work account who don't need to access Partner Center, you might want to consider creating multiple work accounts. That way, only users who need to access Partner Center are represented on a particular account.
-
-## Create a new work account
-
-To create a new work account for your company, take the following steps. You might need to request assistance from the person who has administrative permissions for your company's Microsoft Azure account.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Search for and select **Azure Active Directory**, and then select **Users**.
-
-1. Select **New user**, and then follow these steps to configure a new Azure work account:
-
- 1. Enter a name and work email address.
- 1. For **Directory role**, ensure the value meets the user requirement.
- 1. At the bottom, select **Show password**.
- 1. Make a note of the autogenerated password.
- 1. Complete the other required fields.
-
-1. Select **Create** to save the new user.
-
-The email address for the user account must be a verified domain name in your directory. To list all the verified domains in your directory, select **Azure Active Directory** > **Custom domain names**.
-
-To learn more about adding custom domains in Azure AD, see [Add or associate a domain in Azure AD](../active-directory/fundamentals/add-custom-domain.md).
-
-## Troubleshoot work email sign-in
-
-If you're having trouble signing in to your work account, find the scenario on the following diagram that best matches your situation, and take the recommended steps.
--
-## Next steps
--- [Manage your commercial marketplace account in Partner Center](./manage-account.md)
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Previously updated : 07/14/2022 Last updated : 08/25/2022 # Supported PostgreSQL major versions in Azure Database for PostgreSQL - Flexible Server
Azure Database for PostgreSQL - Flexible Server currently supports the following
## PostgreSQL version 14
-The current minor release is **14.3**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/14/static/release-14-3.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+The current minor release is **14.4**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/about/news/postgresql-144-released-2470/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+ >[!NOTE]
-> PostgreSQL community released an out of band version 14.4 to address a critical issue. See the [release notes](https://www.postgresql.org/docs/release/14.4/) and this [discussion thread](https://www.postgresql.org/message-id/165473835807.573551.1512237163040609764%40wrigleys.postgresql.org) for details and the workaround till your server is patched to 14.4.
+> The PostgreSQL community released version 14.4 out of band to address a critical issue. See the [release notes](https://www.postgresql.org/docs/release/14.4/) for details.
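To confirm the minor version your server is running, you can query it directly. A minimal sketch, with placeholder connection values:

```bash
# Check the running PostgreSQL version; host, user, and database are placeholders.
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require" \
  -c "SELECT version();"
```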
## PostgreSQL version 13
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 08/11/2022 Last updated : 08/25/2022 # Release notes - Azure Database for PostgreSQL - Flexible Server
This page provides latest news and updates regarding feature additions, engine v
## Release: August 2022
+* Support for [PostgreSQL minor version](./concepts-supported-versions.md) 14.4. <sup>$</sup>
* Support for [new regions](overview.md#azure-regions) Qatar Central, Switzerland West, France South.
+<sup>**$**</sup> New PostgreSQL 14 servers will be provisioned with version 14.4. Your existing PostgreSQL 14.3 servers will be upgraded to 14.4 during a future maintenance window for your server.
+ ## Release: July 2022 * Support for [Geo-redundant backup](./concepts-backup-restore.md#geo-redundant-backup-and-restore-preview) in [additional regions](./overview.md#azure-regions) - Australia East, Australia Southeast, Canada Central, Canada East, UK South, UK West, East US, West US, East Asia, Southeast Asia, North Central US, South Central US, and France Central.
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-limits.md
When connections exceed the limit, you may receive the following error:
> [!IMPORTANT] > For best experience, we recommend that you use a connection pooler like pgBouncer to efficiently manage connections.
-A PostgreSQL connection, even idle, can occupy about 10MB of memory. Also, creating new connections takes time. Most applications request many short-lived connections, which compounds this situation. The result is fewer resources available for your actual workload leading to decreased performance. A connection pooler that decreases idle connections and reuses existing connections will help avoid this. To learn more, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
+A PostgreSQL connection, even idle, can occupy up to 2MB of memory. Also, creating new connections takes time. Most applications request many short-lived connections, which compounds this situation. The result is fewer resources available for your actual workload leading to decreased performance. A connection pooler that decreases idle connections and reuses existing connections will help avoid this. To learn more, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
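As a hedged sketch of what such a pooler looks like in practice, the following writes a minimal pgBouncer configuration and connects through it. Every name, credential, and tuning value is a placeholder rather than a recommendation, and `userlist.txt` must already exist with the user's credentials.

```bash
# Minimal pgBouncer setup in transaction pooling mode; all values are placeholders.
cat > pgbouncer.ini <<'EOF'
[databases]
mydb = host=mydemoserver.postgres.database.azure.com port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; auth_file holds lines of: "username" "password"
auth_type = plain
auth_file = userlist.txt
; Azure enforces TLS on the server-side connection
server_tls_sslmode = require
; reuse a small set of server connections across many client transactions
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500
EOF

# Run the pooler in the background, then point the app (here psql) at it.
pgbouncer -d pgbouncer.ini
psql "host=127.0.0.1 port=6432 dbname=mydb user=myadmin@mydemoserver"
```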
## Functional limitations
postgresql Quickstart Create Server Database Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-portal.md
Go to the [Azure portal](https://portal.azure.com/) to create an Azure Database
>[!div class="mx-imgBorder"] > :::image type="content" source="./media/quickstart-create-database-portal/search-postgres.png" alt-text="Find Azure Database for PostgreSQL.":::
-1. Select **Add**.
+1. Select **Add**.
2. On the Create an Azure Database for PostgreSQL page, select **Single server**.
purview How To Resource Set Pattern Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-resource-set-pattern-rules.md
Below are the available types that can be used in static and dynamic replacers:
| - | | | string | A series of 1 or more Unicode characters including delimiters like spaces. | | int | A series of 1 or more 0-9 ASCII characters, it can be 0 prefixed (e.g. 0001). |
-| guid | A series of 32 or 8-4-4-4-12 string representation of an UUID as defineddefa in [RFC 4122](https://tools.ietf.org/html/rfc4122). |
+| guid | A series of 32 or 8-4-4-4-12 string representation of an UUID as defined in [RFC 4122](https://tools.ietf.org/html/rfc4122). |
| date | A series of 6 or 8 0-9 ASCII characters with optionally separators: yyyymmdd, yyyy-mm-dd, yymmdd, yy-mm-dd, specified in [RFC 3339](https://tools.ietf.org/html/rfc3339). | | time | A series of 4 or 6 0-9 ASCII characters with optionally separators: HHmm, HH:mm, HHmmss, HH:mm:ss specified in [RFC 3339](https://tools.ietf.org/html/rfc3339). | | timestamp | A series of 12 or 14 0-9 ASCII characters with optionally separators: yyyy-mm-ddTHH:mm, yyyymmddhhmm, yyyy-mm-ddTHH:mm:ss, yyyymmddHHmmss specified in [RFC 3339](https://tools.ietf.org/html/rfc3339). |
Asset 4
## Next steps
-Get started by [registering and scanning an Azure Data Lake Gen2 storage account](register-scan-adls-gen2.md).
+Get started by [registering and scanning an Azure Data Lake Gen2 storage account](register-scan-adls-gen2.md).
remote-rendering Install Remote Rendering Unity Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/unity/install-remote-rendering-unity-package.md
Azure Remote Rendering uses a Unity package to encapsulate the integration into
This package contains the entire C# API and all plugin binaries required to use Azure Remote Rendering with Unity. Following Unity's naming scheme for packages, the package is called **com.microsoft.azure.remote-rendering**.
-The package is not part of the [ARR samples repository](https://github.com/Azure/azure-remote-rendering), and it is not available from Unity's internal package registry.
+The package isn't part of the [ARR samples repository](https://github.com/Azure/azure-remote-rendering), and it isn't available from Unity's internal package registry.
You can choose one of the following options to install the Unity package. ## Install Remote Rendering package using the Mixed Reality Feature Tool
To update a local package, just repeat the respective download steps you used an
## Unity render pipelines
-Remote Rendering works with both the **:::no-loc text="Universal render pipeline":::** and the **:::no-loc text="Standard render pipeline":::**. For performance reasons, the Universal render pipeline is recommended.
+Remote Rendering works with both the **:::no-loc text="Standard render pipeline":::** ("built-in render pipeline") and the **:::no-loc text="Universal render pipeline":::** ("URP"). For performance reasons, it's recommended to use the built-in render pipeline, unless there are strong reasons that require URP.
To use the **:::no-loc text="Universal render pipeline":::**, its package has to be installed in Unity. The installation can either be done in Unity's **Package Manager** UI (package name **Universal RP**, version 7.3.1 or newer), or through the `Packages/manifest.json` file, as described in the [Unity project setup tutorial](../../tutorials/unity/view-remote-models/view-remote-models.md#include-the-azure-remote-rendering-and-openxr-packages).
remote-rendering System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/system-requirements.md
The following software must be installed:
* The latest version of **Visual Studio 2019** [(download)](https://visualstudio.microsoft.com/vs/older-downloads/) * [Visual Studio tools for Mixed Reality](/windows/mixed-reality/install-the-tools). Specifically, the following *Workload* installations are mandatory: * **Desktop development with C++**
- * **Universal Windows Platform (UWP) development**
+ * **Universal Windows Platform (UWP) development**
* **Windows SDK 10.0.18362.0** [(download)](https://developer.microsoft.com/windows/downloads/windows-10-sdk) * **GIT** [(download)](https://git-scm.com/downloads) * Optional: To view the video stream from the server on a desktop PC, you need the **HEVC Video Extensions** [(Microsoft Store link)](https://www.microsoft.com/p/hevc-video-extensions/9nmzlz57r3t7). Ensure that the latest version is installed by checking for updates in the store.
ARR for Unity 2019 supports both the legacy **built-in XR** integration for Wind
For Unity 2020, use the latest version of Unity 2020.3. > [!IMPORTANT]
-> When working with the OpenXR version of the plugin, it has to be verified that the *Universal Render Pipeline* (URP) has version 10.5.1 or higher. To check that, open the *Package Manager* from the Unity *Windows* menu and refer to the *Universal RP* section:
+> When working with the OpenXR version of the plugin and the *Universal Render Pipeline* (URP), verify that the *Universal Render Pipeline* package is version 10.5.1 or higher. To check, open the *Package Manager* from the Unity *Windows* menu and refer to the *Universal RP* section:
> ![Version of the Universal RP](./media/unity-universal-rp-version-10-5-1.png)
-> [!IMPORTANT]
-> The **WMR (Windows Mixed Reality) plugin for Unity 2020.3** currently has a performance degradation with ARR. For a better experience, we suggest to either stay on Unity 2019.X or switch to the OpenXR version.
+### Unity 2021
+
+For Unity 2021, use the latest version of Unity 2021.3.
## Next steps
search Cognitive Search Tutorial Blob Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob-python.md
ms.devlang: python Previously updated : 12/01/2021 Last updated : 08/24/2022
The skillset is attached to the indexer. It uses built-in skills from Microsoft
* [Azure Storage](https://azure.microsoft.com/services/storage/) * [Azure Cognitive Search](https://azure.microsoft.com/services/search/)
-> [!Note]
+> [!NOTE]
> You can use the free service for this tutorial. A free search service limits you to three indexes, three indexers, and three data sources. This tutorial creates one of each. Before starting, make sure you have room on your service to accept the new resources. ## Download files
To interact with your Azure Cognitive Search service you will need the service U
2. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
- Get the query key as well. It's a best practice to issue query requests with read-only access.
- ![Get the service name and admin and query keys](media/search-get-started-javascript/service-name-and-keys.png) All requests require an api-key in the header of every request sent to your service. A valid key establishes trust, on a per request basis, between the application sending the request and the service that handles it.
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
Previously updated : 12/29/2021 Last updated : 08/24/2022 # Quickstart: Create an Azure Cognitive Search index in the Azure portal
Typically, in a code-based exercise, index creation is completed prior to loadin
Fields have data types and attributes. The check boxes across the top are *index attributes* controlling how the field is used. + **Retrievable** means that it shows up in search results list. You can mark individual fields as off limits for search results by clearing this checkbox, for example for fields used only in filter expressions.
-+ **Key** is the unique document identifier. It's always a string, and it is required.
++ **Key** is the unique document identifier. It's always a string, and it's required. + **Filterable**, **Sortable**, and **Facetable** determine whether fields are used in a filter, sort, or faceted navigation structure. + **Searchable** means that a field is included in full text search. Strings are searchable. Numeric fields and Boolean fields are often marked as not searchable.
-Storage requirements do not vary as a result of your selection. For example, if you set the **Retrievable** attribute on multiple fields, storage requirements do not go up.
+Storage requirements don't vary as a result of your selection. For example, if you set the **Retrievable** attribute on multiple fields, storage requirements don't go up.
By default, the wizard scans the data source for unique identifiers as the basis for the key field. *Strings* are attributed as **Retrievable** and **Searchable**. *Integers* are attributed as **Retrievable**, **Filterable**, **Sortable**, and **Facetable**.
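These checkboxes map directly to boolean attributes on each field in the index definition. As a hedged illustration (the service name, key, and fields are placeholders, not the full hotels-sample schema), the same attributes look like this in a REST call:

```bash
# Sketch: the portal checkboxes correspond to these boolean field attributes.
curl -X PUT \
  -H "Content-Type: application/json" \
  -H "api-key: <your-admin-key>" \
  "https://<your-service-name>.search.windows.net/indexes/hotels-sample?api-version=2020-06-30" \
  -d '{
    "name": "hotels-sample",
    "fields": [
      { "name": "HotelId",   "type": "Edm.String", "key": true, "retrievable": true, "searchable": false, "filterable": true },
      { "name": "HotelName", "type": "Edm.String", "retrievable": true, "searchable": true, "sortable": true },
      { "name": "Rating",    "type": "Edm.Int32",  "retrievable": true, "filterable": true, "facetable": true, "sortable": true }
    ]
  }'
```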
Wait for the portal page to refresh. After a few minutes, you should see the ind
From this list, you can click on the *hotels-sample* index that you just created, view the index schema. and optionally add new fields.
-The **Fields** tab shows the index schema. If you are writing queries and need to check whether a field is filterable or sortable, this tab shows you the attributes.
+The **Fields** tab shows the index schema. If you're writing queries and need to check whether a field is filterable or sortable, this tab shows you the attributes.
-Scroll to the bottom of the list to enter a new field. While you can always create a new field, in most cases, you cannot change existing fields. Existing fields have a physical representation in your search service and are thus non-modifiable, not even in code. To fundamentally change an existing field, create a new index, dropping the original.
+Scroll to the bottom of the list to enter a new field. While you can always create a new field, in most cases, you can't change existing fields. Existing fields have a physical representation in your search service and are thus non-modifiable, not even in code. To fundamentally change an existing field, create a new index, dropping the original.
:::image type="content" source="media/search-get-started-portal/sample-index-def.png" alt-text="sample index definition"::: Other constructs, such as scoring profiles and CORS options, can be added at any time.
-To clearly understand what you can and cannot edit during index design, take a minute to view index definition options. Grayed-out options are an indicator that a value cannot be modified or deleted.
+To clearly understand what you can and can't edit during index design, take a minute to view index definition options. Grayed-out options are an indicator that a value can't be modified or deleted.
## <a name="query-index"></a> Query using Search explorer
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that the limit is three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
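If you prefer not to click through the portal, the same cleanup works against the REST API. A sketch, with the index name as a placeholder:

```http
DELETE https://[service name].search.windows.net/indexes/hotels-sample?api-version=2020-06-30
api-key: [admin key]
```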
## Next steps
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
Previously updated : 06/01/2022 Last updated : 08/25/2022 # Index data from SharePoint document libraries
Last updated 06/01/2022
> [!IMPORTANT] > SharePoint indexer support is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). [Request access](https://aka.ms/azure-cognitive-search/indexer-preview) to this feature, and after access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently limited portal support and no .NET SDK support.
-Configure a [search indexer](search-indexer-overview.md) to index documents stored in SharePoint document libraries for full text search in Azure Cognitive Search. This article explains the configuration steps, followed by a deeper exploration of behaviors and scenarios you are likely to encounter.
+This article explains how to configure a [search indexer](search-indexer-overview.md) to index documents stored in SharePoint document libraries for full text search in Azure Cognitive Search. Configuration steps are followed by a deeper exploration of behaviors and scenarios you're likely to encounter.
> [!NOTE] > SharePoint supports a granular authorization model that determines per-user access at the document level. The SharePoint indexer does not pull these permissions into the search index, and Cognitive Search does not support document-level authorization. When a document is indexed from SharePoint into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should investigate [security filters to trim results](search-security-trimming-for-azure-search-with-aad.md) of unauthorized content.
The SharePoint indexer can extract text from the following document formats:
## Configure the SharePoint indexer
-To set up the SharePoint indexer, you will need to perform some tasks in the Azure portal, and other tasks through the preview REST API.
+To set up the SharePoint indexer, you'll need to perform some tasks in the Azure portal and others through the preview REST API.
-The following video shows how to set up the SharePoint indexer.
+The following video shows you how to set up the SharePoint indexer.
> [!VIDEO https://www.youtube.com/embed/QmG65Vgl0JI]
The following video shows how to set up the SharePoint indexer.
When a system-assigned managed identity is enabled, Azure creates an identity for your search service that can be used by the indexer. This identity is used to automatically detect the tenant the search service is provisioned in.
-If the SharePoint site is in the same tenant as the search service, you will need to enable the system-assigned managed identity for the search service in the Azure portal. If the SharePoint site is in a different tenant from the search service, skip this step.
+If the SharePoint site is in the same tenant as the search service, you'll need to enable the system-assigned managed identity for the search service in the Azure portal. If the SharePoint site is in a different tenant from the search service, skip this step.
:::image type="content" source="media/search-howto-index-sharepoint-online/enable-managed-identity.png" alt-text="Enable system assigned managed identity":::
-After selecting **Save** you will see an Object ID that has been assigned to your search service.
+After selecting **Save**, you'll see an Object ID that has been assigned to your search service.
:::image type="content" source="media/search-howto-index-sharepoint-online/system-assigned-managed-identity.png" alt-text="System assigned managed identity":::
After selecting **Save** you will see an Object ID that has been assigned to you
The SharePoint indexer supports both [delegated and application](/graph/auth/auth-concepts#delegated-and-application-permissions) permissions. Choose which permissions you want to use based on your scenario:
-+ Delegated permissions, where the indexer runs under the identity of the user or app that sent the request. Data access is limited to the sites and files to which the user has access. To support deleted permissions, the indexer requires a [device code prompt](../active-directory/develop/v2-oauth2-device-code.md) to log in on behalf of the user.
++ Delegated permissions, where the indexer runs under the identity of the user or app sending the request. Data access is limited to the sites and files to which the user has access. To support delegated permissions, the indexer requires a [device code prompt](../active-directory/develop/v2-oauth2-device-code.md) to sign in on behalf of the user.
+ Application permissions, where the indexer runs under the identity of the SharePoint tenant with access to all sites and files within the SharePoint tenant. The indexer requires a [client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) to access the SharePoint tenant. The indexer will also require [tenant admin approval](../active-directory/manage-apps/grant-admin-consent.md) before it can index any content.
-Note that if your Azure Active Directory organization has [Conditional Access enabled](../active-directory/conditional-access/overview.md) and your administrator is not able to grant any device access for Delegated permissions, you should consider Application permissions instead. For more information, refer to [SharePoint Conditional Access policies](./search-indexer-troubleshooting.md#sharepoint-conditional-access-policies).
+If your Azure Active Directory organization has [Conditional Access enabled](../active-directory/conditional-access/overview.md) and your administrator isn't able to grant any device access for Delegated permissions, you should consider Application permissions instead. For more information, see [SharePoint Conditional Access policies](./search-indexer-troubleshooting.md#sharepoint-conditional-access-policies).
### Step 3: Create an Azure AD application
The SharePoint indexer will use this Azure Active Directory (Azure AD) applicati
1. Give admin consent.
- Tenant admin consent is required when using application API permissions. Some tenants are locked down in such a way that tenant admin consent is required for delegated API permissions as well. If either of these are the case, youΓÇÖll need to have a tenant admin grant consent for this Azure AD application before creating the indexer.
   Tenant admin consent is required when using application API permissions. Some tenants are locked down in such a way that tenant admin consent is required for delegated API permissions as well. If either of these conditions applies, you'll need to have a tenant admin grant consent for this Azure AD application before creating the indexer.
:::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-grant-admin-consent.png" alt-text="Azure AD app grant admin consent":::
The SharePoint indexer will use this Azure Active Directory (Azure AD) applicati
:::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret.png" alt-text="New client secret":::
- + In the menu that pops up, enter a description for the new client secret. Adjust the expiration date if necessary. If the secret expires it will need to be recreated and the indexer needs to be updated with the new secret.
+ + In the menu that pops up, enter a description for the new client secret. Adjust the expiration date if necessary. If the secret expires, it will need to be recreated and the indexer needs to be updated with the new secret.
:::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret-setup.png" alt-text="Setup client secret":::
There are a few steps to creating the indexer:
} ```
-1. Provide the code that was provided in the error message.
+1. Provide the code that was included in the error message.
:::image type="content" source="media/search-howto-index-sharepoint-online/enter-device-code.png" alt-text="Enter device code":::
-1. The SharePoint indexer will access the SharePoint content as the signed-in user. The user that logs in during this step will be that signed-in user. So, if you log in with a user account that doesnΓÇÖt have access to a document in the Document Library that you want to index, the indexer wonΓÇÖt have access to that document.
+1. The SharePoint indexer will access the SharePoint content as the signed-in user. The user that logs in during this step will be that signed-in user. So, if you sign in with a user account that doesn't have access to a document in the Document Library that you want to index, the indexer won't have access to that document.
If possible, we recommend creating a new user account and giving that new user the exact permissions that you want the indexer to have.
api-key: [admin key]
## Updating the data source
-If there are no updates to the data source object, the indexer can run on a schedule without any user interaction. However, every time the Azure Cognitive Search data source object is updated, you will need to sign in again in order for the indexer to run. For example, if you change the data source query, sign in again using the `https://microsoft.com/devicelogin` and a new code.
+If there are no updates to the data source object, the indexer can run on a schedule without any user interaction. However, every time the Azure Cognitive Search data source object is updated, you'll need to sign in again in order for the indexer to run. For example, if you change the data source query, sign in again at `https://microsoft.com/devicelogin` with a new code.
Once the data source has been updated, follow these steps:
If you have set the indexer to index document metadata (`"dataToExtract": "conte
| Identifier | Type | Description |
| - | -- | -- |
-| metadata_spo_site_library_item_id | Edm.String | The combination key of site ID, library ID and item ID which uniquely identifies an item in a document library for a site. |
+| metadata_spo_site_library_item_id | Edm.String | The combination key of site ID, library ID, and item ID that uniquely identifies an item in a document library for a site. |
| metadata_spo_site_id | Edm.String | The ID of the SharePoint site. |
| metadata_spo_library_id | Edm.String | The ID of the document library. |
| metadata_spo_item_id | Edm.String | The ID of the (document) item in the library. |
The "name" property is required and must be one of three values:
| Value | Description |
|-|-|
-| defaultSiteLibrary | Index all the content from the sites default document library. |
-| allSiteLibraries | Index all the content from all the document libraries in a site. This will not index document libraries from a subsite. Those can be specified in the "query" though. |
-| useQuery | Only index content defined in the "query". |
+| defaultSiteLibrary | Index all content from the site's default document library. |
+| allSiteLibraries | Index all content from all document libraries in a site. Document libraries from a subsite are out of scope. If you need content from subsites, choose "useQuery" and specify "includeLibrariesInSite". |
+| useQuery | Only index the content defined in the "query". |
<a name="query"></a>
The "query" parameter of the data source is made up of keyword/value pairs. The
| Keyword | Value description and examples |
| - | - |
| null | If null or empty, index either the default document library or all document libraries depending on the container name. <br><br>Example: <br><br>``` "container" : { "name" : "defaultSiteLibrary", "query" : null } ``` |
-| includeLibrariesInSite | Index content from all libraries under the specified site in the connection string. These are limited to subsites of your site. The value should be the URI of the site or subsite. <br><br>Example: <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mycompany.sharepoint.com/mysite" }``` |
-| includeLibrary | Index all content from this library. The value is the fully-qualified path to the library, which can be copied from your browser: <br><br>Example 1 (fully-qualified path): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/mysite/MyDocumentLibrary" }``` <br><br>Example 2 (URI copied from your browser): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/teams/mysite/MyDocumentLibrary/Forms/AllItems.aspx" }``` |
-| excludeLibrary | Do not index content from this library. The value is the fully-qualified path to the library, which can be copied from your browser: <br><br> Example 1 (fully-qualified path): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mysite.sharepoint.com/subsite1; excludeLibrary=https://mysite.sharepoint.com/subsite1/MyDocumentLibrary" }``` <br><br> Example 2 (URI copied from your browser): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mycompany.sharepoint.com/teams/mysite; excludeLibrary=https://mycompany.sharepoint.com/teams/mysite/MyDocumentLibrary/Forms/AllItems.aspx" }``` |
+| includeLibrariesInSite | Index content from all libraries under the specified site in the connection string. The scope includes any subsites of your site. The value should be the URI of the site or subsite. <br><br>Example: <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mycompany.sharepoint.com/mysite" }``` |
+| includeLibrary | Index all content from this library. The value is the fully qualified path to the library, which can be copied from your browser: <br><br>Example 1 (fully qualified path): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/mysite/MyDocumentLibrary" }``` <br><br>Example 2 (URI copied from your browser): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/teams/mysite/MyDocumentLibrary/Forms/AllItems.aspx" }``` |
+| excludeLibrary | Don't index content from this library. The value is the fully qualified path to the library, which can be copied from your browser: <br><br> Example 1 (fully qualified path): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mysite.sharepoint.com/subsite1; excludeLibrary=https://mysite.sharepoint.com/subsite1/MyDocumentLibrary" }``` <br><br> Example 2 (URI copied from your browser): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mycompany.sharepoint.com/teams/mysite; excludeLibrary=https://mycompany.sharepoint.com/teams/mysite/MyDocumentLibrary/Forms/AllItems.aspx" }``` |
| additionalColumns | Index columns from the document library. The value is a comma-separated list of column names you want to index. Use a double backslash to escape semicolons and commas in column names: <br><br> Example 1 (additionalColumns=MyCustomColumn,MyCustomColumn2): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/mysite/MyDocumentLibrary;additionalColumns=MyCustomColumn,MyCustomColumn2" }``` <br><br> Example 2 (escape characters using double backslash): <br><br> ```"container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/teams/mysite/MyDocumentLibrary/Forms/AllItems.aspx;additionalColumns=MyCustomColumnWith\\,,MyCustomColumnWith\\;" }``` |
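Putting the container name and "query" keywords together, a sketch of a complete SharePoint data source definition follows. The connection string format and data source name are assumptions for illustration; substitute your own site URL, application ID, and tenant ID:

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [admin key]

{
  "name": "sharepoint-datasource",
  "type": "sharepoint",
  "credentials": {
    "connectionString": "SharePointOnlineEndpoint=https://mycompany.sharepoint.com/mysite;ApplicationId=<your-app-id>;TenantId=<your-tenant-id>"
  },
  "container": {
    "name": "useQuery",
    "query": "includeLibrary=https://mycompany.sharepoint.com/mysite/MyDocumentLibrary"
  }
}
```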
## Handling errors

-By default, the SharePoint indexer stops as soon as it encounters a document with an unsupported content type (for example, an image). You can of course use the `excludedFileNameExtensions` parameter to skip certain content types. However, you may need to index documents without knowing all the possible content types in advance. To continue indexing when an unsupported content type is encountered, set the `failOnUnsupportedContentType` configuration parameter to false:
+By default, the SharePoint indexer stops as soon as it encounters a document with an unsupported content type (for example, an image). You can use the `excludedFileNameExtensions` parameter to skip certain content types. However, you may need to index documents without knowing all the possible content types in advance. To continue indexing when an unsupported content type is encountered, set the `failOnUnsupportedContentType` configuration parameter to false:
```http
PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30-Preview
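Content-Type: application/json
api-key: [admin key]

{
  "parameters": {
    "configuration": {
      "failOnUnsupportedContentType": false
    }
  }
}
```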
search Search Import Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-import-data-portal.md
Previously updated : 05/03/2022 Last updated : 08/24/2022 # Import data wizard in Azure Cognitive Search
-The **Import data wizard** in the Azure portal creates multiple objects used for indexing and AI enrichment on a search service. If you are new to Azure Cognitive Search, it's one of the most powerful features at your disposal. With minimal effort, you can create an indexing or enrichment pipeline that exercises most of the functionality of Azure Cognitive Search.
+The **Import data wizard** in the Azure portal creates multiple objects used for indexing and AI enrichment on a search service. If you're new to Azure Cognitive Search, it's one of the most powerful features at your disposal. With minimal effort, you can create an indexing or enrichment pipeline that exercises most of the functionality of Azure Cognitive Search.
If you're using the wizard for proof-of-concept testing, this article explains the internal workings of the wizard so that you can use it more effectively.
-This article is not a step by step. For help using the wizard with built-in sample data, see the [Quickstart: Create a search index](search-get-started-portal.md) or [Quickstart: Create a text translation and entity skillset](cognitive-search-quickstart-blob.md).
+This article isn't a step-by-step guide. For help with using the wizard with built-in sample data, see the [Quickstart: Create a search index](search-get-started-portal.md) or [Quickstart: Create a text translation and entity skillset](cognitive-search-quickstart-blob.md).
## Starting the wizard
-In the [Azure portal](https://portal.azure.com), open the search service page from the dashboard or [find your service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in the service list. In the service Overview page at the top, click **Import data**.
+In the [Azure portal](https://portal.azure.com), open the search service page from the dashboard or [find your service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in the service list. In the service Overview page at the top, select **Import data**.
:::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
-The wizard opens fully expanded in the browser window so that you have more room to work. Several pages are quite dense.
+The wizard opens fully expanded in the browser window so that you have more room to work.
You can also launch **Import data** from other Azure services, including Azure Cosmos DB, Azure SQL Database, SQL Managed Instance, and Azure Blob Storage. Look for **Add Azure Cognitive Search** in the left-navigation pane on the service overview page.
The wizard will output the objects in the following table. After the objects are
| Object | Description |
|--|-|
| [Indexer](/rest/api/searchservice/create-indexer) | A configuration object specifying a data source, target index, an optional skillset, optional schedule, and optional configuration settings for error handling and base-64 encoding. |
-| [Data Source](/rest/api/searchservice/create-data-source) | Persists connection information to source data, including credentials. A data source object is used exclusively with indexers. |
+| [Data Source](/rest/api/searchservice/create-data-source) | Persists connection information to a [supported data source](search-indexer-overview.md#supported-data-sources) on Azure. A data source object is used exclusively with indexers. |
| [Index](/rest/api/searchservice/create-index) | Physical data structure used for full text search and other queries. |
-| [Skillset](/rest/api/searchservice/create-skillset) | Optional. A complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Except for very simple and limited structures, it includes a reference to a Cognitive Services resource that provides enrichment. |
+| [Skillset](/rest/api/searchservice/create-skillset) | Optional. A complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Unless the volume of work falls under the limit of 20 transactions per indexer per day, the skillset must include a reference to a Cognitive Services resource that provides enrichment. |
| [Knowledge store](knowledge-store-concept-intro.md) | Optional. Stores output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) in tables and blobs in Azure Storage for independent analysis or downstream processing. |

## Benefits and limitations

Before writing any code, you can use the wizard for prototyping and proof-of-concept testing. The wizard connects to external data sources, samples the data to create an initial index, and then imports the data as JSON documents into an index on Azure Cognitive Search.
-If you are evaluating skillsets, the wizard will handle all of the output field mappings and add helper functions to create usable objects. Text split is added if you specify a parsing mode. Text merge is added if you chose image analysis so that the wizard can reunite text descriptions with image content. Shaper skills added to support valid projections if you chose the knowledge store option. All of the above tasks come with a learning curve. If you are new to enrichment, the ability to have these steps handled for you allows you to measure the value of a skill without having to invest much time and effort.
+If you're evaluating skillsets, the wizard will handle all of the output field mappings and add helper functions to create usable objects. Text split is added if you specify a parsing mode. Text merge is added if you choose image analysis so that the wizard can reunite text descriptions with image content. Shaper skills are added to support valid projections if you choose the knowledge store option. All of the above tasks come with a learning curve. If you're new to enrichment, the ability to have these steps handled for you allows you to measure the value of a skill without having to invest much time and effort.
Sampling is the process by which an index schema is inferred, and it has some limitations. When the data source is created, the wizard picks a random sample of documents to decide what columns are part of the data source. Not all files are read, as this could potentially take hours for very large data sources. Given a selection of documents, source metadata, such as field name or type, is used to create a fields collection in an index schema. Depending on the complexity of source data, you might need to edit the initial schema for accuracy, or extend it for completeness. You can make your changes inline on the index definition page. Overall, the advantages of using the wizard are clear: as long as requirements are met, you can prototype a queryable index within minutes. Some of the complexities of indexing, such as serializing data as JSON documents, are handled by the wizard.
-The wizard is not without limitations. Constraints are summarized as follows:
+The wizard isn't without limitations. Constraints are summarized as follows:
-+ The wizard does not support iteration or reuse. Each pass through the wizard creates a new index, skillset, and indexer configuration. Only data sources can be persisted and reused within the wizard. To edit or refine other objects, either delete the objects and start over, or use the REST APIs or .NET SDK to modify the structures.
++ The wizard doesn't support iteration or reuse. Each pass through the wizard creates a new index, skillset, and indexer configuration. Only data sources can be persisted and reused within the wizard. To edit or refine other objects, either delete the objects and start over, or use the REST APIs or .NET SDK to modify the structures.
+ Source content must reside in a [supported data source](search-indexer-overview.md#supported-data-sources).
The wizard is not without limitations. Constraints are summarized as follows:
+ AI enrichment, as exposed in the portal, is limited to a subset of built-in skills.
-+ A [knowledge store](knowledge-store-concept-intro.md), which can be created by the wizard, is limited to a few default projections and uses a default naming convention. If you want to customize names or projections, you will need to create the knowledge store through REST API or the SDKs.
++ A [knowledge store](knowledge-store-concept-intro.md), which can be created by the wizard, is limited to a few default projections and uses a default naming convention. If you want to customize names or projections, you'll need to create the knowledge store through REST API or the SDKs.
-+ Public access to all networks must be enabled on the supported data source while the wizard is used, since the portal won't be able to access the data source during setup if public access is disabled. This means that if your data source has a firewall enabled, you must disable it, run the Import Data wizard and then enable it after wizard setup is completed. If this is not an option, you can create Azure Cognitive Search data source, indexer, skillset and index through REST API or the SDKs.
++ Public access to all networks must be enabled on the supported data source while the wizard is used, since the portal won't be able to access the data source during setup if public access is disabled. This means that if your data source has a firewall enabled, you must disable it, run the Import Data wizard, and then re-enable the firewall after wizard setup is completed. If this isn't an option, you can create the Azure Cognitive Search data source, indexer, skillset, and index through the REST API or the SDKs.

## Workflow
The wizard is organized into four main steps:
The **Import data** wizard connects to an external [supported data source](search-indexer-overview.md#supported-data-sources) using the internal logic provided by Azure Cognitive Search indexers, which are equipped to sample the source, read metadata, crack documents to read content and structure, and serialize contents as JSON for subsequent import to Azure Cognitive Search.
+You can paste in a connection to a supported data source in a different subscription or region, but the **Choose an existing connection** picker is scoped to the active subscription.
++ Not all preview data sources are guaranteed to be available in the wizard. Because each data source has the potential for introducing other changes downstream, a preview data source will only be added to the data sources list if it fully supports all of the experiences in the wizard, such as skillset definition and index schema inference.

You can only import from a single table, database view, or equivalent data structure; however, the structure can include hierarchical or nested substructures. For more information, see [How to model complex types](search-howto-complex-data-types.md).

### Skillset configuration in the wizard
-Skillset configuration occurs after the data source definition because the type of data source will inform the availability of certain built-in skills. In particular, if you are indexing files from Blob Storage, your choice of parsing mode of those files will determine whether sentiment analysis is available.
+Skillset configuration occurs after the data source definition because the type of data source will inform the availability of certain built-in skills. In particular, if you're indexing files from Blob Storage, your choice of parsing mode of those files will determine whether sentiment analysis is available.
The wizard will add the skills you choose, but it will also add other skills that are necessary for achieving a successful outcome. For example, if you specify a knowledge store, the wizard adds a Shaper skill to support projections (or physical data structures).
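As a sketch of what that looks like in the skillset definition (the input names here are placeholders, not what the wizard actually generates), a Shaper skill gathers fields into a single complex shape:

```json
{
  "@odata.type": "#Microsoft.Skills.Util.ShaperSkill",
  "name": "shaper-for-projections",
  "context": "/document",
  "inputs": [
    { "name": "name", "source": "/document/metadata_storage_name" },
    { "name": "keyPhrases", "source": "/document/keyPhrases" }
  ],
  "outputs": [
    { "name": "output", "targetName": "projectionShape" }
  ]
}
```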
-Skillsets are optional and there is a button at the bottom of the page to skip ahead if you don't want AI enrichment.
+Skillsets are optional and there's a button at the bottom of the page to skip ahead if you don't want AI enrichment.
<a name="index-definition"></a>
Because sampling is an imprecise exercise, review the index for the following co
1. Is the field list accurate? If your data source contains fields that were not picked up in sampling, you can manually add any new fields that sampling missed, and remove any that don't add value to a search experience or that won't be used in a [filter expression](search-query-odata-filter.md) or [scoring profile](index-add-scoring-profiles.md).
-1. Is the data type appropriate for the incoming data? Azure Cognitive Search supports the [entity data model (EDM) data types](/rest/api/searchservice/supported-data-types). For Azure SQL data, there is [mapping chart](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#TypeMapping) that lays out equivalent values. For more background, see [Field mappings and transformations](search-indexer-field-mappings.md).
+1. Is the data type appropriate for the incoming data? Azure Cognitive Search supports the [entity data model (EDM) data types](/rest/api/searchservice/supported-data-types). For Azure SQL data, there's a [mapping chart](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#TypeMapping) that lays out equivalent values. For more background, see [Field mappings and transformations](search-indexer-field-mappings.md).
1. Do you have one field that can serve as the *key*? This field must be Edm.String and it must uniquely identify a document. For relational data, it might be mapped to a primary key. For blobs, it might be `metadata_storage_path`. If field values include spaces or dashes, you must set the **Base-64 Encode Key** option in the **Create an Indexer** step, under **Advanced options**, to suppress the validation check for these characters. A sketch of the resulting field mapping follows.
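As a hedged illustration of the indexer setting that option produces (field names are placeholders), the wizard emits a field mapping with a base64 encoding function:

```json
"fieldMappings": [
  {
    "sourceFieldName": "metadata_storage_path",
    "targetFieldName": "id",
    "mappingFunction": { "name": "base64Encode" }
  }
]
```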
Because sampling is an imprecise exercise, review the index for the following co
+ **Searchable** enables full-text search. Every field used in free form queries or in query expressions must have this attribute. Inverted indexes are created for each field that you mark as **Searchable**.
- + **Retrievable** returns the field in search results. Every field that provides content to search results must have this attribute. Setting this field does not appreciably effect index size.
+ + **Retrievable** returns the field in search results. Every field that provides content to search results must have this attribute. Setting this field doesn't appreciably affect index size.
+ **Filterable** allows the field to be referenced in filter expressions. Every field used in a **$filter** expression must have this attribute. Filter expressions are for exact matches. Because text strings remain intact, more storage is required to accommodate the verbatim content.
Because sampling is an imprecise exercise, review the index for the following co
The last page of the wizard collects user inputs for indexer configuration. You can [specify a schedule](search-howto-schedule-indexers.md) and set other options that will vary by the data source type.
-Internally, the wizard also sets up the following, which is not visible in the indexer until after it is created:
+Internally, the wizard also sets up the following definitions, which aren't visible in the indexer until after it's created:
+ [field mappings](search-indexer-field-mappings.md) between the data source and index
+ [output field mappings](cognitive-search-output-field-mapping.md) between skill output and an index

## Next steps
-The best way to understand the benefits and limitations of the wizard is to step through it. The following quickstart will guide you through each step.
+The best way to understand the benefits and limitations of the wizard is to step through it. The following quickstart explains each step.
> [!div class="nextstepaction"] > [Quickstart: Create a search index using the Azure portal](search-get-started-portal.md)
security Threat Modeling Tool Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authorization.md
editor: jegeib ms.assetid: na--++ na
security Threat Modeling Tool Communication Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-communication-security.md
editor: jegeib ms.assetid: na--++ na
security Threat Modeling Tool Cryptography https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-cryptography.md
description: Learn about cryptography mitigation for threats exposed in the Thre
editor: jegeib--++ Last updated 02/07/2017
security Threat Modeling Tool Exception Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-exception-management.md
editor: jegeib ms.assetid: na--+ na
security Threat Modeling Tool Feature Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-feature-overview.md
Title: Microsoft Threat Modeling Tool feature overview - Azure
description: Learn about all the features available in the Threat Modeling Tool, such as the analysis view and reports. --+ Last updated 08/17/2017
security Threat Modeling Tool Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-getting-started.md
editor: jegeib ms.assetid: na--++ na
security Threat Modeling Tool Mitigations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-mitigations.md
editor: jegeib ms.assetid: na--++ na
security Threat Modeling Tool Releases 71509112 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71509112.md
description: Read the release notes for the Microsoft Threat Modeling Tool released on 9/12/2018. The notes include feature changes and bug fixes. --+ Last updated 01/15/2019
security Threat Modeling Tool Releases 71510231 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71510231.md
description: Read the release notes for the threat modeling tool update released on 11/1/2018. This release does not contain any new functionality or fixes. --+ Last updated 01/15/2019
security Threat Modeling Tool Releases 71601261 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71601261.md
description: Read the release notes for the Microsoft Threat Modeling Tool released on 1/29/2019. The notes include feature changes and known issues. --+ Last updated 01/25/2019
security Threat Modeling Tool Releases 71604081 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71604081.md
description: Documenting the release notes for the threat modeling tool release 7.1.60408.1. --+ Last updated 04/03/2019
security Threat Modeling Tool Releases 71607021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71607021.md
description: Read the release notes for the threat modeling tool update released on 7/2/2019. The notes include accessibility improvements and bug fixes. --+ Last updated 07/02/2019
security Threat Modeling Tool Releases 71610151 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71610151.md
Title: Microsoft Threat Modeling Tool release 10/16/2019 - Azure
description: Documenting the release notes for the threat modeling tool release 7.1.61015.1. -+ Last updated 10/16/2019
security Threat Modeling Tool Releases 73002061 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73002061.md
Title: Microsoft Threat Modeling Tool release 02/11/2020 - Azure
description: Documenting the release notes for the threat modeling tool release 7.3.00206.1. -+ Last updated 02/25/2020
security Threat Modeling Tool Releases 73003161 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73003161.md
Title: Microsoft Threat Modeling Tool release 03/22/2020 - Azure
description: Documenting the release notes for the threat modeling tool release 7.3.00316.1. -+ Last updated 03/22/2020
security Threat Modeling Tool Releases 73007142 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73007142.md
Title: Microsoft Threat Modeling Tool release 07/14/2020 - Azure
description: Documenting the release notes for the threat modeling tool release 7.3.00714.2. -+ Last updated 07/14/2020
security Threat Modeling Tool Releases 73007291 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73007291.md
Title: Microsoft Threat Modeling Tool release 07/29/2020 - Azure
description: Documenting the release notes for the threat modeling tool release 7.3.00729.1. -+ Last updated 07/29/2020
security Threat Modeling Tool Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases.md
editor: jegeib ms.assetid: na--++ na
security Threat Modeling Tool Sensitive Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-sensitive-data.md
editor: jegeib ms.assetid: na--++ na
security Threat Modeling Tool Session Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-session-management.md
editor: jegeib ms.assetid: na--++ na
security Threat Modeling Tool Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-threats.md
editor: jegeib ms.assetid: na--++ na
security Threat Modeling Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool.md
Title: Microsoft Threat Modeling Tool overview - Azure
description: Overview of the Microsoft Threat Modeling Tool, containing information on getting started with the tool, including the Threat Modeling process. --+ Last updated 02/16/2017
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
Previously updated : 04/28/2022 Last updated : 08/24/2022 -+
security Azure Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-domains.md
ms.assetid: --++ na
security Azure Marketplace Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-marketplace-images.md
documentationcenter: na
ms.assetid: --++ Last updated 01/11/2019
security Backup Plan To Protect Against Ransomware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/backup-plan-to-protect-against-ransomware.md
Title: Azure backup and restore plan to protect against ransomware | Microsoft Docs description: Learn what to do before and during a ransomware attack to protect your critical business systems and ensure a rapid recovery of business operations. --++
security Code Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/code-integrity.md
Title: Platform code integrity - Azure Security description: Learn how Microsoft ensures that only authorized software is running. --++
security Cyber Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/cyber-services.md
editor: TomSh ms.assetid: 925ba3c6-fe35-413a-98ea-e1a1461f3022--++ na
security Data Encryption Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/data-encryption-best-practices.md
editor: TomSh ms.assetid: 17ba67ad-e5cd-4a8f-b435-5218df753ca4--++ na
security End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/end-to-end.md
ms.assetid: a5a7f60a-97e2-49b4-a8c5-7c010ff27ef8--++ na
security Event Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/event-support-ticket.md
editor: v-dabosl ms.assetid: f1ffde66-98f0-4c3e-ad94-fee1f97cae03--++ na
security Firmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/firmware.md
Title: Firmware security - Azure Security description: Learn how Microsoft secures Azure hardware and firmware. --++
security Hypervisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/hypervisor.md
Title: Hypervisor security on the Azure fleet - Azure Security description: Technical overview of hypervisor security on the Azure fleet. --++
security Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/iaas.md
editor: TomSh ms.assetid: 02c5b7d2-a77f-4e7f-9a1e-40247c57e7e2--+ na
security Identity Management Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-best-practices.md
editor: TomSh ms.assetid: 07d8e8a8-47e8-447c-9c06-3a88d2713bc1--++ na
security Identity Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-overview.md
ms.assetid: 5aa0a7ac-8f18-4ede-92a1-ae0dfe585e28--++ na
security Infrastructure Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-availability.md
editor: TomSh ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na
security Infrastructure Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-components.md
editor: TomSh ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na
security Infrastructure Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-integrity.md
editor: TomSh ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na
security Infrastructure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-monitoring.md
editor: TomSh ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--+ na
security Infrastructure Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-network.md
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na
security Infrastructure Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-operations.md
editor: TomSh ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na
security Infrastructure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-sql.md
editor: TomSh ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na
security Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure.md
editor: TomSh ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na
security Management Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management-monitoring-overview.md
editor: TomSh ms.assetid: 5cf2827b-6cd3-434d-9100-d7411f7ed424--+ na
security Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management.md
editor: TomSh ms.assetid: 2431feba-3364-4a63-8e66-858926061dd3--++ na
security Measured Boot Host Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/measured-boot-host-attestation.md
Title: Firmware measured boot and host attestation - Azure Security description: Technical overview of Azure firmware measured boot and host attestation. --++
security Network Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-best-practices.md
editor: TomShinder ms.assetid: 7f6aa45f-138f-4fde-a611-aaf7e8fe56d1--++ na
security Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-overview.md
ms.assetid: bedf411a-0781-47b9-9742-d524cf3dbfc1--++ na
security Ocsp Sha 1 Sunset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ocsp-sha-1-sunset.md
Previously updated : 03/17/2022 Last updated : 08/24/2022 -+ # Sunset for SHA-1 Online Certificate Status Protocol signing
security Operational Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-checklist.md
editor: tomsh ms.assetid:--++ na
security Operational Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-overview.md
editor: tomsh ms.assetid:--++ na
security Paas Applications Using App Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-applications-using-app-services.md
editor: '' ms.assetid:--+ na
security Paas Applications Using Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-applications-using-storage.md
editor: '' ms.assetid:--++ na
security Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/platform.md
Title: Azure platform integrity and security - Azure Security description: Technical overview of Azure platform integrity and security. --++
security Production Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/production-network.md
editor: TomSh ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na
security Project Cerberus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/project-cerberus.md
Title: Firmware integrity - Azure Security description: Learn about cryptographic measurements to ensure firmware integrity. --++
security Secure Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/secure-boot.md
Title: Firmware secure boot - Azure Security description: Technical overview of Azure firmware secure boot. --++
security Shared Responsibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/shared-responsibility.md
editor: na ms.assetid:--++ na
security Threat Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/threat-detection.md
editor: TomSh ms.assetid:--++ na
security Virtual Machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/virtual-machines-overview.md
editor: TomSh ms.assetid: 467b2c83-0352-4e9d-9788-c77fb400fe54--++ na
sentinel Ama Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ama-migrate.md
Each organization will have different metrics of success and internal migration
**Include the following steps in your migration process**:
-1. Make sure that you've considered your environmental requirements and understand the gaps between the different agents. For more information, see [When should I migrate](../azure-monitor/agents/azure-monitor-agent-migration.md#when-should-i-migrate-to-the-azure-monitor-agent) in the Azure Monitor documentation.
+1. Make sure that you've considered your environmental requirements and understand the gaps between the different agents. For more information, see [Migration plan considerations](../azure-monitor/agents/azure-monitor-agent-migration.md#migration-plan-considerations) in the Azure Monitor documentation.
1. Run a proof of concept to test how the AMA sends data to Microsoft Sentinel, ideally in a development or sandbox environment.
service-bus-messaging Service Bus Resource Manager Namespace Queue Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace-queue-bicep.md
+
+ Title: Create Azure Service Bus namespace and queue using Bicep
+description: 'Quickstart: Create a Service Bus namespace and a queue using Bicep'
+documentationcenter: .net
++ Last updated : 08/24/2022+
+ dotnet
+++
+# Quickstart: Create a Service Bus namespace and a queue using a Bicep file
+
+This article shows how to use a Bicep file that creates a Service Bus namespace and a queue within that namespace. The article explains how to specify which resources are deployed and how to define parameters that are specified when the deployment is executed. You can use this Bicep file for your own deployments, or customize it to meet your requirements.
++
+## Prerequisites
+
+If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/servicebus-create-queue/).
++
+The resources defined in the Bicep file include:
+
+- [**Microsoft.ServiceBus/namespaces**](/azure/templates/microsoft.servicebus/namespaces?pivots=deployment-language-bicep)
+- [**Microsoft.ServiceBus/namespaces/queues**](/azure/templates/microsoft.servicebus/namespaces/queues?pivots=deployment-language-bicep)
+
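+The linked Bicep file isn't reproduced here. The following is a minimal sketch of such a file, assuming the parameter names from the deployment prompts below; the API version, SKU, and location default are illustrative rather than the exact quickstart contents:
+
+```bicep
+@description('Name of the Service Bus namespace')
+param serviceBusNamespaceName string
+
+@description('Name of the queue')
+param serviceBusQueueName string
+
+@description('Location for all resources')
+param location string = resourceGroup().location
+
+resource serviceBusNamespace 'Microsoft.ServiceBus/namespaces@2021-11-01' = {
+  name: serviceBusNamespaceName
+  location: location
+  sku: {
+    name: 'Standard'
+  }
+}
+
+resource serviceBusQueue 'Microsoft.ServiceBus/namespaces/queues@2021-11-01' = {
+  parent: serviceBusNamespace
+  name: serviceBusQueueName
+}
+```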
+> [!NOTE]
+> The following ARM templates are available for download and deployment.
+>
+> - [Create a Service Bus namespace with queue and authorization rule](service-bus-resource-manager-namespace-auth-rule.md)
+> - [Create a Service Bus namespace with topic and subscription](service-bus-resource-manager-namespace-topic.md)
+> - [Create a Service Bus namespace](service-bus-resource-manager-namespace.md)
+> - [Create a Service Bus namespace with topic, subscription, and rule](service-bus-resource-manager-namespace-topic-with-rule.md)
+
+You can find more Bicep/ARM templates in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Servicebus&pageNumber=1&sort=Popular).
+
+## Deploy the Bicep file
+
+With this Bicep file, you deploy a Service Bus namespace with a queue.
+
+[Service Bus queues](service-bus-queues-topics-subscriptions.md#queues) offer First In, First Out (FIFO) message delivery to one or more competing consumers.
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ You will be prompted to enter the following parameter values:
+
+ - **serviceBusNamespaceName**: Name of the Service Bus namespace.
+    - **serviceBusQueueName**: Name of the queue.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and all of the resources it contains.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+See the following topic that shows how to create an authorization rule for the namespace/queue:
+
+[Create a Service Bus authorization rule for namespace and queue using an ARM template](service-bus-resource-manager-namespace-auth-rule.md)
+
+Learn how to manage these resources by viewing these articles:
+
+* [Manage Service Bus with PowerShell](service-bus-manage-with-ps.md)
+* [Manage Service Bus resources with the Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/releases)
+
+[Authoring Azure Resource Manager templates]: ../azure-resource-manager/templates/syntax.md
+[Service Bus namespace and queue template]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.servicebus/servicebus-create-queue/azuredeploy.json/
+[Azure Quickstart Templates]: https://azure.microsoft.com/resources/templates/?term=service+bus
+[Learn more about Service Bus queues]: service-bus-queues-topics-subscriptions.md
+[Using Azure PowerShell with Azure Resource Manager]: ../azure-resource-manager/management/manage-resources-powershell.md
+[Using the Azure CLI for Mac, Linux, and Windows with Azure Resource Management]: ../azure-resource-manager/management/manage-resources-cli.md
service-bus-messaging Service Bus Resource Manager Namespace Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace-queue.md
description: 'Quickstart: Create a Service Bus namespace and a queue using Azure
documentationcenter: .net Previously updated : 09/27/2021 Last updated : 08/25/2022 dotnet
The template used in this quickstart is from [Azure Quickstart Templates](https:
The resources defined in the template include: -- [**Microsoft.ServiceBus/namespaces**](/azure/templates/microsoft.servicebus/namespaces)-- [**Microsoft.ServiceBus/namespaces/queues**](/azure/templates/microsoft.servicebus/namespaces/queues)
+- [**Microsoft.ServiceBus/namespaces**](/azure/templates/microsoft.servicebus/namespaces?pivots=deployment-language-arm-template)
+- [**Microsoft.ServiceBus/namespaces/queues**](/azure/templates/microsoft.servicebus/namespaces/queues?pivots=deployment-language-arm-template)
> [!NOTE] > The following ARM templates are available for download and deployment. >
-> * [Create a Service Bus namespace with queue and authorization rule](service-bus-resource-manager-namespace-auth-rule.md)
-> * [Create a Service Bus namespace with topic and subscription](service-bus-resource-manager-namespace-topic.md)
-> * [Create a Service Bus namespace](service-bus-resource-manager-namespace.md)
-> * [Create a Service Bus namespace with topic, subscription, and rule](service-bus-resource-manager-namespace-topic-with-rule.md)
+> - [Create a Service Bus namespace with queue and authorization rule](service-bus-resource-manager-namespace-auth-rule.md)
+> - [Create a Service Bus namespace with topic and subscription](service-bus-resource-manager-namespace-topic.md)
+> - [Create a Service Bus namespace](service-bus-resource-manager-namespace.md)
+> - [Create a Service Bus namespace with topic, subscription, and rule](service-bus-resource-manager-namespace-topic-with-rule.md)
You can find more templates from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Servicebus&pageNumber=1&sort=Popular).
With this template, you deploy a Service Bus namespace with a queue.
[Service Bus queues](service-bus-queues-topics-subscriptions.md#queues) offer First In, First Out (FIFO) message delivery to one or more competing consumers.
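If you prefer the command line to the portal button, the same template can be deployed with the Azure CLI using the raw template URL from this article. A sketch; the resource group name and location are examples:

```azurecli
az group create --name exampleRG --location eastus
az deployment group create --resource-group exampleRG --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.servicebus/servicebus-create-queue/azuredeploy.json
```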
-To run the deployment automatically, click the following button: Create a new resource group for the deployment so that you can easily cleanup later.
+To run the deployment automatically, select the following button. Create a new resource group for the deployment so that you can easily clean up later.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.servicebus%2Fservicebus-create-queue%2Fazuredeploy.json) ## Validate the deployment
-1. Select **Notifications** at the top to see the status of the deployment. Wait until the deployment succeeds. Then, select **Go to resource group** in the notification message to navigate to the page for the resource group that contains the Service Bus namespace.
+1. Select **Notifications** at the top to see the status of the deployment. Wait until the deployment succeeds. Then, select **Go to resource group** in the notification message to navigate to the page for the resource group that contains the Service Bus namespace.
![Notification from deployment](./media/service-bus-resource-manager-namespace-queue/notification.png)
-2. Confirm that you see your Service Bus namespace in the list of resources.
+2. Confirm that you see your Service Bus namespace in the list of resources.
![Resource group - namespace](./media/service-bus-resource-manager-namespace-queue/resource-group-namespace.png)
-3. Select the namespace from the list to see the **Service Bus Namespace** page.
+3. Select the namespace from the list to see the **Service Bus Namespace** page.
## Clean up resources 1. In the Azure portal, navigate to the **Resource group** page for your resource group.
-2. Select **Delete resource group** from the toolbar.
-3. Type the name of the resource group, and select **Delete**.
+2. Select **Delete resource group** from the toolbar.
+3. Type the name of the resource group, and select **Delete**.
![Resource group - delete](./media/service-bus-resource-manager-namespace-queue/resource-group-delete.png)
See the following topic that shows how to create an authorization rule for the n
Learn how to manage these resources by viewing these articles:
-* [Manage Service Bus with PowerShell](service-bus-manage-with-ps.md)
-* [Manage Service Bus resources with the Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/releases)
+- [Manage Service Bus with PowerShell](service-bus-manage-with-ps.md)
+- [Manage Service Bus resources with the Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/releases)
[Authoring Azure Resource Manager templates]: ../azure-resource-manager/templates/syntax.md [Service Bus namespace and queue template]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.servicebus/servicebus-create-queue/azuredeploy.json/
storage Archive Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-blob.md
Previously updated : 07/21/2022 Last updated : 08/24/2022
You can use the Azure portal, PowerShell, Azure CLI, or an Azure Resource Manage
To create a lifecycle management policy to archive blobs in the Azure portal, follow these steps:
+#### Step 1: Create the rule and specify the blob type
+ 1. Navigate to your storage account in the portal.
-1. Under **Data management**, locate the **Lifecycle management** settings.
-1. Select the **Add a rule** button.
-1. On the **Details** tab, specify a name for your rule.
-1. Specify the rule scope: either **Apply rule to all blobs in your storage account**, or **Limit blobs with filters**.
-1. Select the types of blobs for which the rule is to be applied, and specify whether to include blob snapshots or versions.
+
+2. Under **Data management**, locate the **Lifecycle management** settings.
+
+3. Select the **Add a rule** button.
+
+4. On the **Details** tab, specify a name for your rule.
+
+5. Specify the rule scope: either **Apply rule to all blobs in your storage account**, or **Limit blobs with filters**.
+
+6. Select the types of blobs for which the rule is to be applied, and specify whether to include blob snapshots or versions.
:::image type="content" source="media/archive-blob/lifecycle-policy-details-tab-portal.png" alt-text="Screenshot showing how to configure a lifecycle management policy - Details tab.":::
+#### Step 2: Add rule conditions
+ 7. Depending on your selections, you can configure rules for base blobs (current versions), previous versions, or blob snapshots. Specify one of the following conditions to check for: - Objects were last modified some number of days ago.
+ - Objects were created some number of days ago.
- Objects were last accessed some number of days ago. Only one of these conditions can be applied to move a particular type of object to the Archive tier per rule. For example, if you define an action that archives base blobs if they haven't been modified for 90 days, then you can't also define an action that archives base blobs if they haven't been accessed for 90 days. Similarly, you can define one action per rule with either of these conditions to archive previous versions, and one to archive snapshots.
-1. Next, specify the number of days to elapse after the object is modified or accessed.
-1. Specify that the object is to be moved to the Archive tier after the interval has elapsed.
+8. Next, specify the number of days to elapse after the object is modified or accessed.
+
+9. Specify that the object is to be moved to the Archive tier after the interval has elapsed.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to configure a lifecycle management policy - Base blob tab.](./media/archive-blob/lifecycle-policy-base-blobs-tab-portal.png)
+
+10. If you chose to limit the blobs affected by the rule with filters, you can specify a filter, either with a blob prefix or blob index match.
+
+#### Step 3: Ensure that the rule excludes rehydrated blobs
+
+If you rehydrate a blob by changing its tier, this rule will move the blob back to the Archive tier if the last modified time, creation time, or last access time is beyond the threshold set for the policy.
+
+If you selected the **Last modified** rule condition, you can prevent this from happening by selecting **Skip blobs that have been rehydrated in the last**, and then entering the number of days you want a rehydrated blob to be excluded from this rule.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing the skip blobs that have been rehydrated in the last setting.](./media/archive-blob/lifecycle-policy-base-blobs-tab-portal-exclude-rehydrated-blobs.png)
+
+> [!NOTE]
+> This option appears only if you selected the **Last modified** rule condition.
+
+Select the **Add** button to add the rule to the policy.
- :::image type="content" source="media/archive-blob/lifecycle-policy-base-blobs-tab-portal.png" alt-text="Screenshot showing how to configure a lifecycle management policy - Base blob tab.":::
-1. If you chose to limit the blobs affected by the rule with filters, you can specify a filter, either with a blob prefix or blob index match.
-1. Select the **Add** button to add the rule to the policy.
+#### View the policy JSON
After you create the lifecycle management policy, you can view the JSON for the policy on the **Lifecycle management** page by switching from **List view** to **Code view**.
Here's the JSON for the simple lifecycle management policy created in the images
"actions": { "baseBlob": { "tierToArchive": {
- "daysAfterLastAccessTimeGreaterThan": 90
+ "daysAfterLastAccessTimeGreaterThan": 90,
+ "daysAfterLastTierChangeGreaterThan": 7
} } },
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
Previously updated : 07/26/2022 Last updated : 08/24/2022
You must copy the archived blob to a new blob with a different name or to a diff
Microsoft recommends performing a copy operation in most scenarios where you need to move a blob from the Archive tier to an online tier, for the following reasons: - A copy operation avoids the early deletion fee that is assessed if you change the tier of a blob from the Archive tier before the required 180-day period elapses. For more information, see [Archive access tier](access-tiers-overview.md#archive-access-tier).+ - If there's a lifecycle management policy in effect for the storage account, then rehydrating a blob with [Set Blob Tier](/rest/api/storageservices/set-blob-tier) can result in a scenario where the lifecycle policy moves the blob back to the Archive tier after rehydration because the last modified time is beyond the threshold set for the policy. A copy operation leaves the source blob in the Archive tier and creates a new blob with a different name and a new last modified time, so there's no risk that the rehydrated blob will be moved back to the Archive tier by the lifecycle policy. Copying a blob from the Archive tier can take hours to complete depending on the rehydration priority selected. Behind the scenes, a blob copy operation reads your archived source blob to create a new online blob in the selected destination tier. The new blob may be visible when you list the blobs in the parent container before the rehydration operation is complete, but its tier will be set to Archive. The data isn't available until the read operation from the source blob in the Archive tier is complete and the blob's contents have been written to the new destination blob in an online tier. The new blob is an independent copy, so modifying or deleting it doesn't affect the source blob in the Archive tier.
Once a [Set Blob Tier](/rest/api/storageservices/set-blob-tier) request is initi
To learn how to rehydrate a blob by changing its tier to an online tier, see [Rehydrate a blob by changing its tier](archive-rehydrate-to-online-tier.md#rehydrate-a-blob-by-changing-its-tier). > [!CAUTION]
-> Changing a blob's tier doesn't affect its last modified time. If there is a [lifecycle management](./lifecycle-management-overview.md) policy in effect for the storage account, then rehydrating a blob with **Set Blob Tier** can result in a scenario where the lifecycle policy moves the blob back to the Archive tier after rehydration because the last modified time is beyond the threshold set for the policy.
+> Changing a blob's tier doesn't affect its last modified time. If there is a [lifecycle management](./lifecycle-management-overview.md) policy in effect for the storage account, then rehydrating a blob with **Set Blob Tier** can result in a scenario where the lifecycle policy moves the blob back to the Archive tier after rehydration because the last modified time is beyond the threshold set for the policy.
>
-> To avoid this scenario, rehydrate the archived blob by copying it instead, as described in the [Copy an archived blob to an online tier](#copy-an-archived-blob-to-an-online-tier) section. Performing a copy operation creates a new instance of the blob with an updated last modified time, so it won't trigger the lifecycle management policy.
+> To avoid this scenario, add the `daysAfterLastTierChangeGreaterThan` condition to the `tierToArchive` action of the policy. Alternatively, you can rehydrate the archived blob by copying it instead, as described in the [Copy an archived blob to an online tier](#copy-an-archived-blob-to-an-online-tier) section. Performing a copy operation creates a new instance of the blob with an updated last modified time, so it won't trigger the lifecycle management policy.
## Check the status of a blob rehydration operation
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
Previously updated : 07/26/2022 Last updated : 08/24/2022
To read a blob that is in the Archive tier, you must first rehydrate the blob to an online tier (Hot or Cool). You can rehydrate a blob in one of two ways: -- By copying it to a new blob in the Hot or Cool tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation. Microsoft recommends this option for most scenarios.
+- By copying it to a new blob in the Hot or Cool tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation.
- By changing its tier from Archive to Hot or Cool with the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation. When you rehydrate a blob, you can specify either standard or high priority for the operation. A standard-priority rehydration operation may take up to 15 hours to complete. A high-priority operation is prioritized over standard-priority requests and may complete in less than one hour for objects under 10 GB in size. You can change the rehydration priority from *Standard* to *High* while the operation is pending.
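As a hedged sketch of the copy approach in PowerShell (the account, container, and blob names are placeholders, and this assumes an Az.Storage version in which `Start-AzStorageBlobCopy` supports the `-StandardBlobTier` and `-RehydratePriority` parameters):

```powershell
# Get a storage context for the account (placeholder account name)
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount

# Copy the archived blob to a new blob that lands in the Hot tier with standard priority
Start-AzStorageBlobCopy -SrcContainer "archive" -SrcBlob "data.log" `
    -DestContainer "online" -DestBlob "data-rehydrated.log" `
    -StandardBlobTier Hot -RehydratePriority Standard -Context $ctx
```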
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Storage lifecycle management policies to create automated
Previously updated : 05/09/2022 Last updated : 08/24/2022
The following sample rule filters the account to run the actions on objects that
"daysAfterModificationGreaterThan": 30 }, "tierToArchive": {
- "daysAfterModificationGreaterThan": 90
+ "daysAfterModificationGreaterThan": 90,
+ "daysAfterLastTierChangeGreaterThan": 7
}, "delete": { "daysAfterModificationGreaterThan": 2555
The run conditions are based on age. Current versions use the last modified time
| daysAfterModificationGreaterThan | Integer value indicating the age in days | The condition for actions on a current version of a blob | | daysAfterCreationGreaterThan | Integer value indicating the age in days | The condition for actions on a previous version of a blob or a blob snapshot | | daysAfterLastAccessTimeGreaterThan | Integer value indicating the age in days | The condition for a current version of a blob when access tracking is enabled |
+| daysAfterLastTierChangeGreaterThan | Integer value indicating the age in days after last blob tier change time | This condition applies only to `tierToArchive` actions and can be used only with the `daysAfterModificationGreaterThan` condition. |
## Examples of lifecycle policies
In the following example, blobs are moved to cool storage if they haven't been a
### Archive data after ingest
-Some data stays idle in the cloud and is rarely, if ever, accessed. The following lifecycle policy is configured to archive data shortly after it's ingested. This example transitions block blobs in a container named `archivecontainer` into an archive tier. The transition is accomplished by acting on blobs 0 days after last modified time:
+Some data stays idle in the cloud and is rarely, if ever, accessed. The following lifecycle policy is configured to archive data shortly after it's ingested. This example transitions block blobs in a container named `archivecontainer` into an archive tier. The transition is accomplished by acting on blobs 0 days after last modified time.
```json {
Some data stays idle in the cloud and is rarely, if ever, accessed. The followin
}, "actions": { "baseBlob": {
- "tierToArchive": { "daysAfterModificationGreaterThan": 0 }
+ "tierToArchive": {
+ "daysAfterModificationGreaterThan": 0
+ }
} } }
The platform runs the lifecycle policy once a day. Once you configure a policy,
The updated policy takes up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for the actions to run. Therefore, the policy actions may take up to 48 hours to complete. If the update is to disable or delete a rule, and enableAutoTierToHotFromCool was used, auto-tiering to Hot tier will still happen. For example, set a rule including enableAutoTierToHotFromCool based on last access. If the rule is disabled/deleted, and a blob is currently in cool and then accessed, it will move back to Hot as that is applied on access outside of lifecycle management. The blob won't then move from Hot to Cool given the lifecycle management rule is disabled/deleted. The only way to prevent autoTierToHotFromCool is to turn off last access time tracking.
-### I manually rehydrated an archived blob. How do I prevent it from being moved back to the Archive tier temporarily?
+### I rehydrated an archived blob. How do I prevent it from being moved back to the Archive tier temporarily?
-When a blob is moved from one access tier to another, its last modification time doesn't change. If you manually rehydrate an archived blob to hot tier, it would be moved back to archive tier by the lifecycle management engine. Disable the rule that affects this blob temporarily to prevent it from being archived again. Re-enable the rule when the blob can be safely moved back to archive tier. You may also copy the blob to another location if it needs to stay in hot or cool tier permanently.
+If there's a lifecycle management policy in effect for the storage account, then rehydrating a blob by changing its tier can result in a scenario where the lifecycle policy moves the blob back to the archive tier. This can happen if the last modified time, creation time, or last access time is beyond the threshold set for the policy. There are three ways to prevent this from happening:
+
+- Add the `daysAfterLastTierChangeGreaterThan` condition to the `tierToArchive` action of the policy. This condition applies only to the last modified time. See [Use lifecycle management policies to archive blobs](archive-blob.md#use-lifecycle-management-policies-to-archive-blobs).
+
+- Disable the rule that affects this blob temporarily to prevent it from being archived again. Re-enable the rule when the blob can be safely moved back to archive tier.
+
+- If the blob needs to stay in the hot or cool tier permanently, copy the blob to another location where the lifecycle management policy is not in effect.
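For the first option, a complete rule might look like the following sketch; the rule name, day counts, and filter are illustrative placeholders:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "archive-with-rehydrate-protection",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToArchive": {
              "daysAfterModificationGreaterThan": 90,
              "daysAfterLastTierChangeGreaterThan": 7
            }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ]
        }
      }
    }
  ]
}
```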
### The blob prefix match string didn't apply the policy to the expected blobs
storage Lifecycle Management Policy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-policy-configure.md
To enable last access time tracking with the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal. 1. In the **Data management** section, select **Lifecycle management**.
- :::image type="content" source="media/lifecycle-management-policy-configure/last-access-tracking-enable.png" alt-text="Screenshot showing how to enable last access tracking in Azure portal":::
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to enable last access tracking in Azure portal.](media/lifecycle-management-policy-configure/last-access-tracking-enable.png)
#### [PowerShell](#tab/azure-powershell)
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
Use an Azure Resource Manager template to enable Change feed on your existing st
## Consume the change feed
-The change feed produces several metadata and log files. These files are located in the **$blobchangefeed** container of the storage account.
-
-> [!NOTE]
-> In the current release, the $blobchangefeed container is visible only in Azure portal but not visible in Azure Storage Explorer. You currently cannot see the $blobchangefeed container when you call ListContainers API but you are able to call the ListBlobs API directly on the container to see the blobs
+The change feed produces several metadata and log files. These files are located in the **$blobchangefeed** container of the storage account. The **$blobchangefeed** container can be viewed either via the Azure portal or via Azure Storage Explorer.
Your client applications can consume the change feed by using the blob change feed processor library that is provided with the change feed processor SDK.
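As a minimal sketch of consuming the change feed with the .NET library (assuming the `Azure.Storage.Blobs.ChangeFeed` package; the account URL is a placeholder):

```csharp
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.ChangeFeed;

// Build a change feed client from the blob service client (placeholder account URL)
var serviceClient = new BlobServiceClient(
    new Uri("https://mystorageaccount.blob.core.windows.net"),
    new DefaultAzureCredential());
BlobChangeFeedClient changeFeedClient = serviceClient.GetChangeFeedClient();

// Enumerate change feed events; each event describes one change to a blob
await foreach (BlobChangeFeedEvent changeFeedEvent in changeFeedClient.GetChangesAsync())
{
    Console.WriteLine($"{changeFeedEvent.EventTime}: {changeFeedEvent.EventType} on {changeFeedEvent.Subject}");
}
```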
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
You can manage virtual network rules for storage accounts through the Azure port
4. Add a network rule for a virtual network and subnet. ```azurecli
- $subnetid=(az network vnet subnet show --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --query id --output tsv)
+ subnetid=$(az network vnet subnet show --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --query id --output tsv)
az storage account network-rule add --resource-group "myresourcegroup" --account-name "mystorageaccount" --subnet $subnetid ```
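To confirm the rule was added, you can list the account's virtual network rules (a sketch using the same placeholder names as above):

```azurecli
az storage account network-rule list --resource-group "myresourcegroup" --account-name "mystorageaccount" --query virtualNetworkRules
```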
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Previously updated : 07/29/2022 Last updated : 08/25/2022
The cmdlets in the AzFilesHybrid PowerShell module make the necessary modificati
### Run Join-AzStorageAccount
-The `Join-AzStorageAccount` cmdlet performs the equivalent of an offline domain join on behalf of the specified storage account. The script uses the cmdlet to create a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) in your AD domain. If for whatever reason you cannot use a computer account, you can alter the script to create a [service logon account](/windows/win32/ad/about-service-logon-accounts) instead. If you choose to run the command manually, you should select the account best suited for your environment.
+The `Join-AzStorageAccount` cmdlet performs the equivalent of an offline domain join on behalf of the specified storage account. The script uses the cmdlet to create a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) in your AD domain. If for whatever reason you can't use a computer account, you can alter the script to create a [service logon account](/windows/win32/ad/about-service-logon-accounts) instead. Note that service logon accounts don't support AES256 encryption. If you choose to run the command manually, you should select the account best suited for your environment.
The AD DS account created by the cmdlet represents the storage account. If the AD DS account is created under an organizational unit (OU) that enforces password expiration, you must update the password before the maximum password age. Failing to update the account password before that date results in authentication failures when accessing Azure file shares. To learn how to update the password, see [Update AD DS account password](storage-files-identity-ad-ds-update-password.md).
-Replace the placeholder values with your own in the parameters below before executing it in PowerShell.
- > [!IMPORTANT]
-> The domain join cmdlet will create an AD account to represent the storage account (file share) in AD. You can choose to register as a computer account or service logon account, see [FAQ](./storage-files-faq.md#security-authentication-and-access-control) for details. For computer accounts, there is a default password expiration age set in AD at 30 days. Similarly, the service logon account may have a default password expiration age set on the AD domain or Organizational Unit (OU).
-> For both account types, we recommend you check the password expiration age configured in your AD environment and plan to [update the password of your storage account identity](storage-files-identity-ad-ds-update-password.md) of the AD account before the maximum password age. You can consider [creating a new AD Organizational Unit (OU) in AD](/powershell/module/activedirectory/new-adorganizationalunit) and disabling password expiration policy on [computer accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852252(v=ws.11)) or service logon accounts accordingly.
+> The `Join-AzStorageAccount` cmdlet will create an AD account to represent the storage account (file share) in AD. You can choose to register as a computer account or a service logon account; see the [FAQ](./storage-files-faq.md#security-authentication-and-access-control) for details. Service logon account passwords can expire in AD if they have a default password expiration age set on the AD domain or OU. Because computer account password changes are driven by the client machine and not AD, they don't expire in AD, although client computers change their passwords by default every 30 days.
+> For both account types, we recommend you check the password expiration age configured and plan to [update the password of your storage account identity](storage-files-identity-ad-ds-update-password.md) of the AD account before the maximum password age. You can consider [creating a new AD Organizational Unit in AD](/powershell/module/activedirectory/new-adorganizationalunit) and disabling password expiration policy on [computer accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852252(v=ws.11)) or service logon accounts accordingly.
+
+Replace the placeholder values with your own values in the parameters below before executing the script in PowerShell.
```PowerShell # Change the execution policy to unblock importing AzFilesHybrid.psm1 module
If you have already executed the `Join-AzStorageAccount` script above successful
First, you must check the state of your environment. Specifically, you must check if [Active Directory PowerShell](/powershell/module/activedirectory/) is installed, and if the shell is being executed with administrator privileges. Then check to see if the [Az.Storage 2.0 module (or newer)](https://www.powershellgallery.com/packages/Az.Storage/2.0.0) is installed, and install it if it isn't. After completing those checks, check your AD DS to see if there is either a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) (default) or [service logon account](/windows/win32/ad/about-service-logon-accounts) that has already been created with SPN/UPN as "cifs/your-storage-account-name-here.file.core.windows.net". If the account doesn't exist, create one as described in the following section.
+> [!IMPORTANT]
+> The Windows Server Active Directory PowerShell cmdlets in this section must be run in Windows PowerShell 5.1. PowerShell 7.x and Azure Cloud Shell won't work in this scenario.
+ ### Create an identity representing the storage account in your AD manually To create this account manually, first create a new Kerberos key for your storage account and get the access key using the PowerShell cmdlets below. This key is only used during setup. It can't be used for any control or data plane operations against the storage account.
Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAcco
The cmdlets above should return the key value. Once you have the kerb1 key, create either a service account or computer account in AD under your OU, and use the key as the password for the AD identity.
-1. Set the SPN to **cifs/your-storage-account-name-here.file.core.windows.net** either in the AD GUI or by running the `Setspn` command from the Windows command line as administrator (remember to replace the example text with your storage account name):
+1. Set the SPN to **cifs/your-storage-account-name-here.file.core.windows.net** either in the AD GUI or by running the `Setspn` command from the Windows command line as administrator (remember to replace the example text with your storage account name and AD account name):
```shell Setspn -S cifs/your-storage-account-name-here.file.core.windows.net <ADAccountName> ```
-2. Use PowerShell to set the AD account password to the value of the kerb1 key (you must have AD PowerShell cmdlets installed):
+2. Use PowerShell to set the AD account password to the value of the kerb1 key (you must have AD PowerShell cmdlets installed and execute the cmdlet in PowerShell 5.1 with elevated privileges):
```powershell Set-ADAccountPassword -Identity servername$ -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "kerb1_key_value_here" -Force)
To enable AES-256 encryption, follow the steps in this section. If you plan to u
> [!IMPORTANT] > The domain object that represents your storage account must be created as a computer object in the on-premises AD domain. If your domain object doesn't meet this requirement, delete it and create a new domain object that does. Note that Service Logon Accounts do not support AES256 encryption.
-Replace `<domain-object-identity>` and `<domain-name>` with your values, then run the following cmdlet to configure AES-256 support:
+Replace `<domain-object-identity>` and `<domain-name>` with your values, then run the following cmdlet to configure AES-256 support. You must have AD PowerShell cmdlets installed and execute the cmdlet in PowerShell 5.1 with elevated privileges.
```powershell Set-ADComputer -Identity <domain-object-identity> -Server <domain-name> -KerberosEncryptionType "AES256"
AzureStorageID:<yourStorageSIDHere>
## Next steps
-You've now successfully enabled the feature on your storage account. To use the feature, you must assign share-level permissions. Continue to the next section.
+You've now successfully enabled the feature on your storage account. To use the feature, you must assign share-level permissions for users and groups. Continue to the next section.
[Part two: assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md)
synapse-analytics Restore Sql Pool From Deleted Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/backuprestore/restore-sql-pool-from-deleted-workspace.md
In this article, you learn how to restore a dedicated SQL pool in Azure Synapse Analytics after an accidental drop of a workspace using PowerShell. > [!NOTE]
-> This guidance is for Synapse Workspace dedicated sql pools only. For standalone dedicated sql pool (formerly SQL DW) please follow guidance [Restore sql pool from deleted server](../sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md).
+> This guidance is for dedicated SQL pools in Azure Synapse workspaces only. For standalone dedicated SQL pools (formerly SQL DW), follow guidance [Restore sql pool from deleted server](../sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md).
## Before you begin
Connect-AzAccount
Set-AzContext -SubscriptionID $SubscriptionID # Define the approximate point in time the workspace was dropped as DroppedDateTime "yyyy-MM-ddThh:mm:ssZ" (ex. 2022-01-01T16:15:00Z)
-$PointInTime=ΓÇ¥<DroppedDateTime>ΓÇ¥
+$PointInTime="<DroppedDateTime>"
$DroppedDateTime = Get-Date -Date $PointInTime
synapse-analytics Restore Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/backuprestore/restore-sql-pool.md
Previously updated : 04/11/2022 Last updated : 08/24/2022
In this article, you learn how to restore an existing dedicated SQL pool in Azur
## Restore an existing dedicated SQL pool through the Synapse Studio 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Navigate to your Synapse workspace.
-3. Under Getting Started -> Open Synapse Studio, select **Open**.
+2. Navigate to your Azure Synapse workspace.
+3. Under **Getting Started** -> **Open Synapse Studio**, select **Open**.
![ Synapse Studio](../media/sql-pools/open-synapse-studio.png) 4. On the left hand navigation pane, select **Data**. 5. Select **Manage pools**.
-6. Select **+ New** to create a new dedicated SQL pool.
-7. In the Additional Settings tab, select a Restore Point to restore from.
+6. Select **+ New** to create a new dedicated SQL pool in the Azure Synapse Analytics workspace.
+7. In the **Additional Settings** tab, select a **Restore Point** to restore from.
If you want to perform a geo-restore, select the workspace and dedicated SQL pool that you want to recover.
In this article, you learn how to restore an existing dedicated SQL pool in Azur
![Restore points](../media/sql-pools/restore-point.PNG)
- If the dedicated SQL pool doesn't have any automatic restore points, wait a few hours or create a user defined restore point before restoring. For User-Defined Restore Points, select an existing one or create a new one.
    If the dedicated SQL pool doesn't have any automatic restore points, wait a few hours, or create a user-defined restore point before restoring. For user-defined restore points, select an existing one or create a new one.
- If you are restoring a geo-backup, simply select the workspace located in the source region and the dedicated SQL pool you want to restore.
+ If you are restoring a geo-backup, select the workspace located in the source region and the dedicated SQL pool you want to restore.
9. Select **Review + Create**.
In this article, you learn how to restore an existing dedicated SQL pool in Azur
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Navigate to the dedicated SQL pool that you want to restore from.
-3. At the top of the Overview blade, select **Restore**.
+3. At the top of the **Overview** page, select **Restore**.
![ Restore Overview](../media/sql-pools/restore-sqlpool-01.png)
In this article, you learn how to restore an existing dedicated SQL pool in Azur
## Restore an existing dedicated SQL pool through PowerShell
-1. Open PowerShell.
+1. Open a PowerShell terminal.
2. Connect to your Azure account and list all the subscriptions associated with your account.
In this article, you learn how to restore an existing dedicated SQL pool in Azur
6. Restore the dedicated SQL pool to the desired restore point using [Restore-AzSynapseSqlPool](/powershell/module/az.synapse/restore-azsynapsesqlpool?toc=/azure/synapse-analytics/toc.json&bc=/azure/synapse-analytics/breadcrumb/toc.json) PowerShell cmdlet. 1. To restore the dedicated SQL pool to a different workspace, make sure to specify the other workspace name. This workspace can also be in a different resource group and region.
- 2. To restore to a different subscription, see the [below section](#restore-an-existing-dedicated-sql-pool-to-a-different-subscription-through-powershell).
+ 2. To restore to a different subscription, see [Restore an existing dedicated SQL pool to a different subscription through PowerShell](#restore-an-existing-dedicated-sql-pool-to-a-different-subscription-through-powershell) later in this article.
7. Verify that the restored dedicated SQL pool is online.
$PointInTime="<RestorePointCreationDate>"
$SQLPool = Get-AzSynapseSqlPool -ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName -Name $SQLPoolName # Transform Synapse SQL pool resource ID to SQL database ID because currently the restore command only accepts the SQL database ID format. $DatabaseID = $SQLPool.Id -replace "Microsoft.Synapse", "Microsoft.Sql" `
- -replace "workspaces", "servers" `
- -replace "sqlPools", "databases"
+ -replace "workspaces", "servers" `
+ -replace "sqlPools", "databases"
# Restore database from a restore point $RestoredDatabase = Restore-AzSynapseSqlPool -FromRestorePoint -RestorePoint $PointInTime -ResourceGroupName $SQLPool.ResourceGroupName `
$RestoredDatabase.status
``` ## Restore an existing dedicated SQL pool to a different subscription through PowerShell
-When performing a cross-subscription restore, a synapse workspace dedicated SQL pool can only restore to a standalone dedicated SQL pool (formerly SQL DW). The PowerShell below is similar to the above however there are three main differences:
+
+When performing a cross-subscription restore, a dedicated SQL pool in an Azure Synapse workspace can only restore directly to a standalone dedicated SQL pool (formerly SQL DW). If you need to restore a dedicated SQL pool in an Azure Synapse workspace to a workspace in the destination subscription, an additional restore step is required.
+
+The PowerShell script below is similar to the one above; however, there are three main differences:
- After retrieving the SQL Pool object to be restored, the subscription context needs to be switched to the destination (or target) subscription name. - When performing the restore, use the Az.Sql modules instead of the Az.Synapse modules. -- If it is required to restore the dedicated SQL pool to a Synapse workspace in the destination subscription, an additional restore step is required.
+- The sample code below includes additional steps for restoring to an Azure Synapse workspace in the destination subscription. Uncomment the PowerShell commands as described in the sample.
Steps:
-1. Open PowerShell.
+1. Open a PowerShell terminal.
2. Update Az.Sql Module to 3.8.0 (or greater) if needed
Steps:
5. List the restore points for the dedicated SQL pool.
-6. Pick the desired restore point using the RestorePointCreationDate.
+6. Pick the desired restore point using the **RestorePointCreationDate**.
7. Select the destination subscription in which the SQL pool should be restored.
Steps:
10. **If the desired destination is a Synapse Workspace, uncomment the code to perform the additional restore step.** 1. Create a restore point for the newly created data warehouse.
- 2. Retrieve the last restore point created by using the "Select -Last 1" syntax.
- 3. Perform the restore to the desired Synapse workspace.
+ 2. Retrieve the last restore point created by using the `Select -Last 1` syntax.
+ 3. Perform the restore to the desired Azure Synapse workspace.
```powershell $SourceSubscriptionName="<YourSubscriptionName>"
$TargetSubscriptionName="<YourTargetSubscriptionName>"
$TargetResourceGroupName="<YourTargetResourceGroupName>" $TargetServerName="<YourTargetServerNameWithoutURLSuffixSeeNote>" # Without sql.azuresynapse.net $TargetDatabaseName="<YourDatabaseName>"
-#$TargetWorkspaceName="<YourTargetWorkspaceName>" # uncomment if restore to a synapse workspace is required
+#$TargetWorkspaceName="<YourTargetWorkspaceName>" # uncomment if restore to an Azure Synapse workspace is required
# Update Az.Sql module to the latest version (3.8.0 or above) # Update-Module -Name Az.Sql -RequiredVersion 3.8.0
$PointInTime="<RestorePointCreationDate>"
$SQLPool = Get-AzSynapseSqlPool -ResourceGroupName $SourceResourceGroupName -WorkspaceName $SourceWorkspaceName -Name $SourceSQLPoolName # Transform Synapse SQL pool resource ID to SQL database ID because currently the restore command only accepts the SQL database ID format. $DatabaseID = $SQLPool.Id -replace "Microsoft.Synapse", "Microsoft.Sql" `
- -replace "workspaces", "servers" `
- -replace "sqlPools", "databases"
+ -replace "workspaces", "servers" `
+ -replace "sqlPools", "databases"
# Switch context to the destination subscription Select-AzSubscription -SubscriptionName $TargetSubscriptionName
$RestoredDatabase.status
## Troubleshooting A restore operation can result in a deployment failure based on a "RequestTimeout" exception. + ![Screenshot from resource group deployments dialog of a timeout exception.](../media/sql-pools/restore-sql-pool-troubleshooting-01.png)
-This timeout can be ignored. Review the dedicated SQL pool blade in the portal and it may still have status of "Restoring" and eventually will transition to "Online".
+
+This timeout can be ignored. Review the dedicated SQL pool page in the Azure portal; it may still have a status of "Restoring" and will eventually transition to "Online".
+ ![Screenshot of SQL pool dialog with the status that shows restoring.](../media/sql-pools/restore-sql-pool-troubleshooting-02.png) ## Next Steps - [Create a restore point](sqlpool-create-restore-point.md)
+- [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json)
synapse-analytics Concepts Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/concepts-lake-database.md
The new database designer gives you the possibility to create a data model for y
## Data storage
-Lake databases use a data lake on the Azure Storage account to store the data of the database. The data can be stored in Parquet or CSV format and different settings can be used to optimize the storage. Every lake database uses a linked service to define the location of the root data folder. For every entity, separate folders are created by default within this database folder on the data lake. By default all tables within a lake database use the same format but the formats and location of the data can be changed per entity if that is requested.
+Lake databases use a data lake on the Azure Storage account to store the data of the database. The data can be stored in Parquet, Delta, or CSV format, and different settings can be used to optimize the storage. Every lake database uses a linked service to define the location of the root data folder. For every entity, separate folders are created by default within this database folder on the data lake. By default, all tables within a lake database use the same format, but the format and location of the data can be changed per entity if needed.
## Database compute
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-set-up-access-control.md
You can grant access to a **single**, dedicated, SQL pool database. Use these st
```sql --Create user in the database CREATE USER [<alias@domain.com>] FROM EXTERNAL PROVIDER;
+    -- For service principals, use just the display name; @domain.com is not required
```
+
2. Grant the user a role to access the database:
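The exact role grant is truncated here; as a sketch, dedicated SQL pools commonly use `sp_addrolemember` with a built-in database role (the role and user names below are placeholders):

```sql
-- Add the user to a database role (placeholder role and user)
EXEC sp_addrolemember 'db_datareader', '<alias@domain.com>';
```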
synapse-analytics Sql Data Warehouse Manage Compute Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md
REST APIs for managing compute for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. > [!NOTE]
-> The REST APIs that are described in this article are for standalone dedicated SQL pools (formerly SQL DW) and are not applicable to a dedicated SQL pool that's created in an Azure Synapse Analytics workspace. For information about REST APIs to use specifically for an Azure Synapse Analytics workspace, see [Azure Synapse Analytics workspace REST API](/rest/api/synapse/).
+> The REST APIs that are described in this article are for standalone dedicated SQL pools (formerly SQL DW) and are not applicable to a dedicated SQL pool in an Azure Synapse Analytics workspace. For information about REST APIs to use specifically for an Azure Synapse Analytics workspace, see [Azure Synapse Analytics workspace REST API](/rest/api/synapse/).
## Scale compute
synapse-analytics Sql Data Warehouse Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md
Dedicated SQL pool (formerly SQL DW) represents a collection of analytic resourc
Once your dedicated SQL pool is created, you can import big data with simple [PolyBase](/sql/relational-databases/polybase/polybase-guide?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics. As you integrate and analyze the data, dedicated SQL pool (formerly SQL DW) will become the single version of truth your business can count on for faster and more robust insights. > [!NOTE]
-> Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. To enable workspace features for an existing dedicated SQL pool (formerly SQL DW) refer to [How to enable a workspace for your dedicated SQL pool (formerly SQL DW)](workspace-connected-create.md). Explore the [Azure Synapse Analytics documentation](../overview-what-is.md) and [Get Started with Azure Synapse](../get-started.md).
->
+> Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. To enable workspace features for an existing dedicated SQL pool (formerly SQL DW) refer to [How to enable a workspace for your dedicated SQL pool (formerly SQL DW)](workspace-connected-create.md). For more information, see [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/msft-docs-what-s-the-difference-between-synapse-formerly-sql-dw/ba-p/3597772). Explore the [Azure Synapse Analytics documentation](../overview-what-is.md) and [Get Started with Azure Synapse](../get-started.md).
## Key component of a big data solution
The analysis results can go to worldwide reporting databases or applications. Bu
- [Load sample data](./load-data-from-azure-blob-storage-using-copy.md). - Explore [Videos](https://azure.microsoft.com/documentation/videos/index/?services=sql-data-warehouse) - [Get Started with Azure Synapse](../get-started.md)
+- [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/msft-docs-what-s-the-difference-between-synapse-formerly-sql-dw/ba-p/3597772)
-Or look at some of these other Azure Synapse resources.
+Or look at some of these other Azure Synapse resources:
- Search [Blogs](https://azure.microsoft.com/blog/tag/azure-sql-data-warehouse/) - Submit a [Feature requests](https://feedback.azure.com/d365community/forum/9b9ba8e4-0825-ec11-b6e6-000d3a4f07b8)
synapse-analytics Sql Data Warehouse Restore From Deleted Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md
Previously updated : 04/01/2022 Last updated : 08/24/2022
In this article, you learn how to restore a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics after an accidental drop of a server using PowerShell. > [!NOTE]
-> This guidance is for standalone dedicated sql pools (formerly SQL DW) only. For synapse workspace dedicated sql pools please follow guidance [Restore SQL pool from deleted workspace](../backuprestore/restore-sql-pool-from-deleted-workspace.md).
+> This guidance is for standalone dedicated SQL pools (formerly SQL DW) only. For dedicated SQL pools in an Azure Synapse Analytics workspace, see [Restore SQL pool from deleted workspace](../backuprestore/restore-sql-pool-from-deleted-workspace.md).
## Before you begin
In this article, you learn how to restore a dedicated SQL pool (formerly SQL DW)
## Restore the SQL pool from the deleted server
-1. Open PowerShell
+1. Open PowerShell.
2. Connect to your Azure account.
Connect-AzAccount
Set-AzContext -SubscriptionId $SubscriptionID # Define the approximate point in time the server was dropped as DroppedDateTime "yyyy-MM-ddThh:mm:ssZ" (ex. 2022-01-01T16:15:00Z)
-$PointInTime=ΓÇ¥<DroppedDateTime>ΓÇ¥
+$PointInTime="<DroppedDateTime>"
$DroppedDateTime = Get-Date -Date $PointInTime # construct the resource ID of the database you wish to recover. The format required Microsoft.Sql. This includes the approximate date time the server was dropped.
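# (Sketch, not from the article) Construct the source database resource ID, then restore it
# with Restore-AzSqlDatabase. The angle-bracket names are placeholders you must replace.
$SourceDatabaseID = "/subscriptions/$SubscriptionID/resourceGroups/<ResourceGroupName>/providers/Microsoft.Sql/servers/<ServerName>/databases/<DatabaseName>"
$RestoredDatabase = Restore-AzSqlDatabase -FromDeletedDatabaseBackup -DeletionDate $DroppedDateTime `
    -ResourceGroupName "<TargetResourceGroupName>" -ServerName "<TargetServerName>" `
    -TargetDatabaseName "<TargetDatabaseName>" -ResourceId $SourceDatabaseID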
synapse-analytics Sql Data Warehouse Service Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-service-capacity-limits.md
Maximum values allowed for various components of dedicated SQL pool in Azure Syn
| Database connection |Maximum Concurrent open sessions |1024<br/><br/>The number of concurrent open sessions will vary based on the selected DWU. DWU1000c and above support a maximum of 1024 open sessions. DWU500c and below, support a maximum concurrent open session limit of 512. Note, there are limits on the number of queries that can execute concurrently. When the concurrency limit is exceeded, the request goes into an internal queue where it waits to be processed. | | Database connection |Maximum memory for prepared statements |20 MB | | [Workload management](resource-classes-for-workload-management.md) |Maximum concurrent queries |128<br/><br/> A maximum of 128 concurrent queries will execute and remaining queries will be queued.<br/><br/>The number of concurrent queries can decrease when users are assigned to higher resource classes or when the [data warehouse unit](memory-concurrency-limits.md) setting is lowered. Some queries, like DMV queries, are always allowed to run and do not impact the concurrent query limit. For more information on concurrent query execution, see the [concurrency maximums](memory-concurrency-limits.md) article. |
-| [tempdb](sql-data-warehouse-tables-temporary.md) |Maximum GB |399 GB per DW100c. At DWU1000c, tempdb is sized to 3.99 TB. |
+| [tempdb](sql-data-warehouse-tables-temporary.md) |Maximum GB |399 GB per DW100c. For example, at DWU1000c, tempdb is sized to 3.99 TB. |
|||| ## Database objects
DMV's will reset when a dedicated SQL pool is paused or when it is scaled.
## Next steps
-For recommendations on using Azure Synapse, see the [Cheat Sheet](cheat-sheet.md).
+For recommendations on using Azure Synapse, see the [Cheat Sheet](cheat-sheet.md).
synapse-analytics Sql Data Warehouse Workload Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-classification.md
FROM sys.database_role_members rm
JOIN sys.database_principals AS r ON rm.role_principal_id = r.principal_id JOIN sys.database_principals AS m ON rm.member_principal_id = m.principal_id WHERE r.name IN ('mediumrc','largerc','xlargerc','staticrc10','staticrc20','staticrc30','staticrc40','staticrc50','staticrc60','staticrc70','staticrc80');
+```
For each row returned, run:
-sp_droprolemember '[Resource Class]', membername
+```sql
+--for each row returned run in the previous query
+EXEC sp_droprolemember '[Resource Class]', membername;
``` ## Next steps
synapse-analytics Workspace Connected Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/workspace-connected-create.md
Title: Enabling Synapse workspace features
-description: This document describes how a user can enable the Synapse workspace features on an existing dedicated SQL pool (formerly SQL DW).
+ Title: Enabling Azure Synapse workspace features
+description: This document describes how a user can enable the Azure Synapse workspace features on an existing dedicated SQL pool (formerly SQL DW).
Last updated 03/07/2022
-# Enable Synapse workspace features for a dedicated SQL pool (formerly SQL DW)
+# Enable Azure Synapse workspace features for a dedicated SQL pool (formerly SQL DW)
-All SQL data warehouse users can now access and use an existing dedicated SQL pool (formerly SQL DW) instance via the Synapse Studio and Workspace. Users can use the Synapse Studio and Workspace without impacting automation, connections, or tooling. This article explains how an existing Azure Synapse Analytics user can enable the Synapse workspace features for an existing dedicated SQL pool (formerly SQL DW). The user can expand their existing analytics solution by taking advantage of the new feature-rich capabilities now available via the Synapse workspace and Studio.
+All SQL data warehouse users can now access and use an existing dedicated SQL pool (formerly SQL DW) instance via Synapse Studio and an Azure Synapse workspace, without impacting automation, connections, or tooling. This article explains how an existing Azure Synapse Analytics user can enable the Azure Synapse workspace features for an existing dedicated SQL pool (formerly SQL DW) and expand their existing analytics solution by taking advantage of the new feature-rich capabilities now available via the workspace and Synapse Studio.
-Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. This article is a guide to enable workspace features for an existing dedicated SQL pool (formerly SQL DW).
+Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. This article is a guide to enable workspace features for an existing dedicated SQL pool (formerly SQL DW). For more information, see [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/msft-docs-what-s-the-difference-between-synapse-formerly-sql-dw/ba-p/3597772).
## Prerequisites Before you enable the Synapse workspace features on your data warehouse, you must ensure you have:
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Here's the solution:
### Operation isn't allowed for a replicated database
-If you're trying to create SQL objects, users, or change permissions in a database, you might get errors like "Operation CREATE USER is not allowed for a replicated database." This error is returned when you try to create objects in a database that's [shared with Spark pool](../metadat). The databases that are replicated from Apache Spark pools are read only. You can't create new objects into a replicated database by using T-SQL.
+If you're trying to create SQL objects, users, or change permissions in a database, you might get errors like "Operation is not allowed for a replicated database." This error might be returned when you try to modify a Lake database that's [shared with Spark pool](../metadat). The Lake databases that are replicated from the Apache Spark pool are managed by Azure Synapse, and you can't create objects in them by using T-SQL as you would in a SQL database.
+Only the following operations are allowed in Lake databases:
+- Creating, dropping, or altering views, procedures, and inline table-valued functions (iTVFs) in schemas other than `dbo`. If you create a SQL object in the `dbo` schema (or omit the schema and use the default, which is usually `dbo`), you'll get the error message.
+- Creating and dropping database users from Azure Active Directory.
+- Adding database users to, or removing them from, the `db_datareader` role.
-Create a separate database and reference the synchronized [tables](../metadat) by using three-part names and cross-database queries.
+Other operations are not allowed in Lake databases.
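As a sketch of the first allowed operation, assuming a hypothetical replicated table `dbo.orders`, you can create a view in a non-`dbo` schema:

```sql
-- Views in a Lake database must live in a schema other than dbo
CREATE SCHEMA reports;
GO
CREATE VIEW reports.recent_orders AS
SELECT *
FROM dbo.orders            -- hypothetical table replicated from the Spark pool
WHERE order_date > '2022-01-01';
```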
### Can't create Azure AD sign-in or user
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Previously updated : 08/24/2022 Last updated : 08/25/2022
Once you're connected to your remote app or desktop, you may be prompted for aut
### In-session passwordless authentication (preview) > [!IMPORTANT]
-> In-session passwordless authentication is currently in public preview.
+> In-session passwordless authentication is currently in Insider preview.
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
virtual-desktop Configure Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md
Previously updated : 08/24/2022 Last updated : 08/25/2022 # Configure single sign-on for Azure Virtual Desktop > [!IMPORTANT]
-> Single sign-on using Azure AD authentication is currently in public preview.
+> Single sign-on using Azure AD authentication is currently in Insider preview.
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
Previously updated : 07/29/2021 Last updated : 08/24/2022
Automatic OS upgrade has the following characteristics:
- [Extension sequencing](virtual-machine-scale-sets-extension-sequencing.md) is supported. - Can be enabled on a scale set of any size.
+> [!NOTE]
+>Before enabling automatic OS image upgrades, check the [requirements section](#requirements-for-configuring-automatic-os-image-upgrade) of this documentation.
+ ## How does automatic OS image upgrade work? An upgrade works by replacing the OS disk of a VM with a new disk created using the latest image version. Any configured extensions and custom data scripts are run on the OS disk, while data disks are retained. To minimize the application downtime, upgrades take place in batches, with no more than 20% of the scale set upgrading at any time.
The following platform SKUs are currently supported (and more are added periodic
## Requirements for configuring automatic OS image upgrade - The *version* property of the image must be set to *latest*.-- Use application health probes or [Application Health extension](virtual-machine-scale-sets-health-extension.md) for non-Service Fabric scale sets, or Service Fabric scale sets on Bronze durability with Stateless-only node types.
+- Must use application health probes or [Application Health extension](virtual-machine-scale-sets-health-extension.md) for non-Service Fabric scale sets. For Service Fabric requirements, see [Service Fabric requirement](#service-fabric-requirements).
- Use Compute API version 2018-10-01 or higher. - Ensure that external resources specified in the scale set model are available and updated. Examples include SAS URI for bootstrapping payload in VM extension properties, payload in storage account, reference to secrets in the model, and more. - For scale sets using Windows virtual machines, starting with Compute API version 2019-03-01, the *virtualMachineProfile.osProfile.windowsConfiguration.enableAutomaticUpdates* property must be set to *false* in the scale set model definition. The *enableAutomaticUpdates* property enables in-VM patching where "Windows Update" applies operating system patches without replacing the OS disk. With automatic OS image upgrades enabled on your scale set, an extra patching process through Windows Update is not required.
The following platform SKUs are currently supported (and more are added periodic
### Service Fabric requirements If you are using Service Fabric, ensure the following conditions are met:-- Service Fabric [durability level](../service-fabric/service-fabric-cluster-capacity.md#durability-characteristics-of-the-cluster) is Silver or Gold, and not Bronze (except Stateless-only node types, which do support automatic OS image upgrades).
+- Service Fabric [durability level](../service-fabric/service-fabric-cluster-capacity.md#durability-characteristics-of-the-cluster) is Silver or Gold. If Service Fabric durability is Bronze, only Stateless-only node types support automatic OS image upgrades.
- The Service Fabric extension on the scale set model definition must have TypeHandlerVersion 1.1 or above. - Durability level should be the same at the Service Fabric cluster and Service Fabric extension on the scale set model definition. - An additional health probe or use of application health extension is not required for Silver or Gold durability. Bronze durability with Stateless-only node types requires an additional health probe.
Automatic OS image upgrade is supported for custom images deployed through [Azur
To configure automatic OS image upgrade, ensure that the *automaticOSUpgradePolicy.enableAutomaticOSUpgrade* property is set to *true* in the scale set model definition. > [!NOTE]
-> **Upgrade Policy mode** and **Automatic OS Upgrade Policy** are separate settings and control different aspects of the scale set. When there are changes in the scale set template, the Upgrade Policy `mode` will determine what happens to existing instances in the scale set. However, Automatic OS Upgrade Policy `enableAutomaticOSUpgrade` is specific to the OS image and tracks changes the image publisher has made and determines what happens when there is an update to the image.
+> **Upgrade Policy mode** and **Automatic OS Upgrade Policy** are separate settings and control different aspects of the scale set. When there are changes in the scale set template, the Upgrade Policy `mode` will determine what happens to existing instances in the scale set. However, Automatic OS Upgrade Policy `enableAutomaticOSUpgrade` is specific to the OS image and tracks changes the image publisher has made and determines what happens when there is an update to the image.
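If you manage the scale set with Azure PowerShell, here's a minimal sketch of flipping this policy on an existing scale set; the resource names are placeholders, and the requirements above are assumed to be met:

```azurepowershell-interactive
# A minimal sketch with placeholder names: enable automatic OS image upgrades
# on an existing scale set. Update-AzVmss surfaces the policy as a single flag.
Update-AzVmss -ResourceGroupName 'myResourceGroup' `
              -VMScaleSetName 'myScaleSet' `
              -AutomaticOSUpgrade $true
```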
### REST API The following example describes how to set automatic OS upgrades on a scale set model:
virtual-machines Boot Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/boot-diagnostics.md
Last updated 11/06/2020
Boot diagnostics is a debugging feature for Azure virtual machines (VMs) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it boots up by collecting serial log information and screenshots. ## Boot diagnostics storage account
-When creating a VM in Azure portal, boot diagnostics is enabled by default. The recommended boot diagnostics experience is to use a managed storage account, as it yields significant performance improvements in the time to create an Azure VM. This is because an Azure managed storage account will be used, removing the time it takes to create a new user storage account to store the boot diagnostics data.
+When you create a VM in Azure portal, boot diagnostics is enabled by default. The recommended boot diagnostics experience is to use a managed storage account, as it yields significant performance improvements in the time to create an Azure VM. This is because an Azure managed storage account will be used, removing the time it takes to create a new user storage account to store the boot diagnostics data.
> [!IMPORTANT] > The boot diagnostics data blobs (which consist of logs and snapshot images) are stored in a managed storage account. Customers will be charged only for the GiBs used by the blobs, not for the disk's provisioned size. The snapshot meters will be used for billing of the managed storage account. Because the managed accounts are created on either Standard LRS or Standard ZRS, customers will be charged at $0.05/GB per month for the size of their diagnostic data blobs only. For more information on this pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Customers will see this charge tied to their VM resource URI.
-An alternative boot diagnostic experience is to use a user managed storage account. A user can either create a new storage account or use an existing one.
-> [!NOTE]
-> User managed storage accounts associated with boot diagnostics require the storage account and the associated virtual machines reside in the same region and subscription and accessible from all networks.
+An alternative boot diagnostic experience is to use a custom storage account. A user can either create a new storage account or use an existing one. When the storage firewall is enabled on the custom storage account (**Enabled from all networks** option isn't selected), you must:
+
+- Make sure that access through the storage firewall is allowed for the Azure platform to publish the screenshot and serial log. To do this, go to the custom boot diagnostics storage account in the Azure portal and then select **Networking** from the **Security + networking** section. Check if the **Allow Azure services on the trusted services list to access this storage account** checkbox is selected.
+- Allow users through the storage firewall to view the boot screenshots or serial logs. To do this, add your network or the client/browser's Internet IPs as firewall exclusions. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md). A PowerShell sketch of these firewall changes follows the note below.
+To configure the storage firewall for Azure Serial Console, see [Use Serial Console with custom boot diagnostics storage account firewall enabled](/troubleshoot/azure/virtual-machines/serial-console-windows#use-serial-console-with-custom-boot-diagnostics-storage-account-firewall-enabled).
+
+> [!NOTE]
+> The custom storage account associated with boot diagnostics requires that the storage account and the associated virtual machines reside in the same region and subscription.
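If you'd rather script the firewall settings described above than use the portal, here's a minimal Azure PowerShell sketch; the storage account name and client IP are placeholders:

```azurepowershell-interactive
# A minimal sketch with placeholder names: let trusted Azure services publish
# boot diagnostics data through the storage firewall, and add a client IP
# exclusion so you can view the screenshots and serial logs.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName 'myResourceGroup' `
    -Name 'mydiagstorage' -Bypass AzureServices -DefaultAction Deny

Add-AzStorageAccountNetworkRule -ResourceGroupName 'myResourceGroup' `
    -Name 'mydiagstorage' -IPAddressOrRange '203.0.113.10'
```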
## Boot diagnostics view
-Located in the virtual machine blade, the boot diagnostics option is under the *Support and Troubleshooting* section in the Azure portal. Selecting boot diagnostics will display a screenshot and serial log information. The serial log contains kernel messaging and the screenshot is a snapshot of your VMs current state. Based on if the VM is running Windows or Linux determines what the expected screenshot would look like. For Windows, users will see a desktop background and for Linux, users will see a login prompt.
+In the Azure portal, go to the virtual machine blade; the boot diagnostics option is under the *Support and Troubleshooting* section. Selecting boot diagnostics will display a screenshot and serial log information. The serial log contains kernel messaging, and the screenshot is a snapshot of your VM's current state. Whether the VM is running Windows or Linux determines what the expected screenshot looks like: for Windows, users will see a desktop background, and for Linux, a login prompt.
:::image type="content" source="./media/boot-diagnostics/boot-diagnostics-linux.png" alt-text="Screenshot of Linux boot diagnostics"::: :::image type="content" source="./media/boot-diagnostics/boot-diagnostics-windows.png" alt-text="Screenshot of Windows boot diagnostics":::
Located in the virtual machine blade, the boot diagnostics option is under the *
Managed boot diagnostics can be enabled through the Azure portal, CLI and ARM Templates. ### Enable managed boot diagnostics using the Azure portal
-When creating a VM in the Azure portal, the default setting is to have boot diagnostics enabled using a managed storage account. To view this, navigate to the *Management* tab during the VM creation.
+When you create a VM in the Azure portal, the default setting is to have boot diagnostics enabled using a managed storage account. To view this, navigate to the *Management* tab during the VM creation.
:::image type="content" source="./media/boot-diagnostics/boot-diagnostics-enable-portal.png" alt-text="Screenshot enabling managed boot diagnostics during VM creation."::: ### Enable managed boot diagnostics using CLI
-Boot diagnostics with a managed storage account is supported in Azure CLI 2.12.0 and later. If you don't input a name or URI for a storage account, a managed account will be used. For more information and code samples see the [CLI documentation for boot diagnostics](/cli/azure/vm/boot-diagnostics).
+Boot diagnostics with a managed storage account is supported in Azure CLI 2.12.0 and later. If you don't input a name or URI for a storage account, a managed account will be used. For more information and code samples, see the [CLI documentation for boot diagnostics](/cli/azure/vm/boot-diagnostics).
### Enable managed boot diagnostics using PowerShell
-Boot diagnostics with a managed storage account is supported in Azure PowerShell 6.6.0 and later. If you don't input a name or URI for a storage account, a managed account will be used. For more information and code samples see the [PowerShell documentation for boot diagnostics](/powershell/module/az.compute/set-azvmbootdiagnostic).
+Boot diagnostics with a managed storage account is supported in Azure PowerShell 6.6.0 and later. If you don't input a name or URI for a storage account, a managed account will be used. For more information and code samples, see the [PowerShell documentation for boot diagnostics](/powershell/module/az.compute/set-azvmbootdiagnostic).
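As a minimal sketch (the VM and resource group names are placeholders), enabling managed boot diagnostics on an existing VM looks like this; omitting a storage account name makes the platform use the managed account:

```azurepowershell-interactive
# A minimal sketch with placeholder names: enable boot diagnostics with the
# managed storage account by not passing any storage account name or URI.
$vm = Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'
Set-AzVMBootDiagnostic -VM $vm -Enable
Update-AzVM -ResourceGroupName 'myResourceGroup' -VM $vm
```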
### Enable managed boot diagnostics using Azure Resource Manager (ARM) templates Everything after API version 2020-06-01 supports managed boot diagnostics. For more information, see [boot diagnostics instance view](/rest/api/compute/virtualmachines/createorupdate#bootdiagnostics).
virtual-machines Configure Oracle Asm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-asm.md
Previously updated : 08/02/2018 Last updated : 07/13/2022
**Applies to:** :heavy_check_mark: Linux VMs
-Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers basic Azure virtual machine deployment combined with the installation and configuration of Oracle Automated Storage Management (ASM). You learn how to:
+Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers basic Azure virtual machine deployment combined with the installation and configuration of Oracle Automatic Storage Management (ASM). You learn how to:
> [!div class="checklist"] > * Create and connect to an Oracle Database VM
-> * Install and configure Oracle Automated Storage Management
+> * Install and configure Oracle Automatic Storage Management
> * Install and configure Oracle Grid infrastructure > * Initialize an Oracle ASM installation > * Create an Oracle DB managed by ASM
+For an overview of the value proposition of ASM, see the [documentation at Oracle](https://aka.ms/oracle/asm).
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.0.4 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
For this tutorial, the default user is *grid* and the default group is *asmadmin
Writing Oracle ASM library driver configuration: done ```
+ >[!NOTE]
+ >The `/usr/sbin/oracleasm configure -i` command asks for the user and group that own the ASM driver access point by default.
+ >The database will be running as the `grid` user and the `asmadmin` group.
+ >If you select **Start Oracle ASM library driver on boot = 'y'**, the system always loads the module and mounts the filesystem on boot.
+ >If you select **Scan for Oracle ASM disks on boot = 'y'**, the system always scans the Oracle ASM disks on boot.
+ >The last two settings are important; otherwise, you'll run into problems with the disks after a reboot.
+ 2. View the disk configuration: ```bash
For this tutorial, the default user is *grid* and the default group is *asmadmin
11 0 1048575 sr0 ```
+ >[!NOTE]
+ >In the following configuration, use the exact commands as this document shows.
+ >Make sure you are calling the Oracle ASM service with `service oracleasm`.
+ 6. Check the Oracle ASM service status and start the Oracle ASM service: ```bash
For this tutorial, the default user is *grid* and the default group is *asmadmin
Marking disk "FRA" as an ASM disk: [ OK ] ```
+ >[!NOTE]
+ >Disks are marked for ASMLib using a process described in [ASMLib Installation](https://www.oracle.com/linux/technologies/install-asmlib.html).
+ >ASMLib learns which disks are marked during a process called disk scanning. ASMLib runs this scan every time it starts up. The system administrator can also force a scan via the `service oracleasm scandisks` command.
+ >ASMLib examines each disk in the system. It checks if the disk has been marked for ASMLib. Any disk that has been marked will be made available to ASMLib.
+ >For more information, see [Configuring Storage Device Path Persistence Using Oracle ASMLIB](https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/configuring-storage-device-path-persistence-using-oracle-asmlib.html#GUID-6B1DA5DB-2E93-4616-B517-18ABDEE72AE4) and [Configuring Oracle ASMLib on Multipath Disks](https://www.oracle.com/linux/technologies/multipath-disks.html).
+ 8. List Oracle ASM disks: ```bash
For this tutorial, the default user is *grid* and the default group is *asmadmin
chmod 600 /dev/sdf1 ``` + ## Download and prepare Oracle Grid Infrastructure To download and prepare the Oracle Grid Infrastructure software, complete the following steps:
To install Oracle Grid Infrastructure, complete the following steps:
To set up your Oracle ASM installation, complete the following steps:
-1. Ensure you are still signed in as **grid**, from your X11 session. You might need to hit `enter` to revive the terminal. Then launch the Oracle Automated Storage Management Configuration Assistant:
+1. Ensure you are still signed in as **grid**, from your X11 session. You might need to hit `enter` to revive the terminal. Then launch the Oracle Automatic Storage Management Configuration Assistant:
```bash cd /u01/app/grid/product/12.1.0/grid/bin
The Oracle database software is already installed on the Azure Marketplace image
## Delete the VM
-You have successfully configured Oracle Automated Storage Management on the Oracle DB image from the Azure Marketplace. When you no longer need this VM, you can use the following command to remove the resource group, VM, and all related resources:
+You have successfully configured Oracle Automatic Storage Management on the Oracle DB image from the Azure Marketplace. When you no longer need this VM, you can use the following command to remove the resource group, VM, and all related resources:
```azurecli az group delete --name myResourceGroup
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
## Deployment of S/4HANA system for productive usage through SAP Cloud Appliance Library
-You now can also deploy S4H systems for productive usage through SAP Cloud Appliance Library. Within a few clicks, you can have your SAP system for productive usage up and running. The following links highlight the solutions that you can quickly deploy on Azure. Just select the "Deploy System" under "Products" link.
-
-You will need to authenticate with your S-User.
+You can now also deploy SAP S/4HANA systems with High Availability (HA), non-HA, or single-server architecture through SAP Cloud Appliance Library. The offering comprises default SAP S/4HANA software stacks, including FPS levels, as well as an integration into Maintenance Planner that enables the creation and installation of custom SAP S/4HANA software stacks.
+The following links highlight the product stacks that you can quickly deploy on Azure. Just select **Deploy System**.
| All products | Link | | -- | : | | **SAP S/4HANA 2021 FPS01 for Productive Deployments** | [Deploy System](https://cal.sap.com/catalog#/products) |
-|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. You will need a valid license for deployment initiation. |
-| **SAP S/4HANA 2021 Initial Shipment Stack for Productive Deployments** | [Deploy System](https://cal.sap.com/catalog#/products) |
+|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. |
+| **SAP S/4HANA 2021 FPS00 for Productive Deployments, Initial Shipment Stack** | [Deploy System](https://cal.sap.com/catalog#/products) |
|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. |
virtual-machines Dbms_Guide_Sapase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_sapase.md
tags: azure-resource-manager
Previously updated : 08/24/2022 Last updated : 08/23/2022 - + # SAP ASE Azure Virtual Machines DBMS deployment for SAP workload In this document, covers several different areas to consider when deploying SAP ASE in Azure IaaS. As a precondition to this document, you should have read the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md) and other guides in the [SAP workload on Azure documentation](./get-started.md). This document covers SAP ASE running on Linux and on Windows Operating Systems. The minimum supported release on Azure is SAP ASE 16.0.02 (Release 16 Support Pack 2). It is recommended to deploy the latest version of SAP and the latest Patch Level. As a minimum SAP ASE 16.0.03.07 (Release 16 Support Pack 3 Patch Level 7) is recommended. The most recent version of SAP can be found in [Targeted ASE 16.0 Release Schedule and CR list Information](https://wiki.scn.sap.com/wiki/display/SYBASE/Targeted+ASE+16.0+Release+Schedule+and+CR+list+Information).
Additional information about release support with SAP applications or installati
Remark: Throughout documentation within and outside the SAP world, the name of the product is referenced as Sybase ASE or SAP ASE or in some cases both. In order to stay consistent, we use the name **SAP ASE** in this documentation. ## Operating system support
-The SAP Product Availability Matrix contains the supported Operating System and SAP Kernel combinations for each SAP application. Linux distributions SUSE 12.x, SUSE 15.x, Red Hat 7.x are fully supported. Oracle Linux as operating system for SAP ASE is not supported. It is recommended to use the most recent Linux releases available. Windows customers should use Windows Server 2016 or Windows Server 2019 releases. Older releases of Windows such as Windows 2012 are technically supported but the latest Windows version is always recommended.
+The SAP Product Availability Matrix contains the supported Operating System and SAP Kernel combinations for each SAP application. Linux distributions SLES 12.x, SLES 15.x, RHEL 7.x and RHEL 8.x are fully supported. Oracle Linux as operating system for SAP ASE is not supported. It is recommended to use the most recent Linux releases available. Windows customers should use Windows Server 2016 or Windows Server 2019 releases. Older releases of Windows such as Windows 2012 are technically supported but the latest Windows version is always recommended.
## Specifics to SAP ASE on Windows
Lock Pages in Memory is a setting that will prevent the SAP ASE database buffer
## Linux operating system specific settings
-On Linux VMs, run `saptune` with profile SAP-ASE
+On SLES VMs, run `saptune` with profile SAP-ASE. Tune RHEL VMs as described in [69988](https://access.redhat.com/solutions/69988).
Linux Huge Pages should be enabled by default and can be verified with command `cat /proc/meminfo`
An example of a configuration for a small SAP ASE DB Server with a database size
| # of data devices | 4 | 4 | | | # of log devices | 1 | 1 | | | # of temp devices | 1 | 1 | More for SAP BW workload |
-| Operating system | Windows Server 2019 | SUSE 12 SP4/ 15 SP1 or RHEL 7.6 | |
+| Operating system | Windows Server 2019 | SLES 12 SP4/ 15 SP1 or RHEL 7.6/8.1 | |
| Disk aggregation | Storage Spaces | LVM2 | | | File system | NTFS | XFS | | Format block size | Needs workload testing | Needs workload testing | |
An example of a configuration for a medium SAP ASE DB Server with a database siz
| # of data devices | 8 | 8 | | | # of log devices | 1 | 1 | | | # of temp devices | 1 | 1 | More for SAP BW workload |
-| Operating system | Windows Server 2019 | SUSE 12 SP4/ 15 SP1 or RHEL 7.6 | |
+| Operating system | Windows Server 2019 | SLES 12 SP4/ 15 SP1 or RHEL 7.6/8.1 | |
| Disk aggregation | Storage Spaces | LVM2 | | | File system | NTFS | XFS | | Format block size | Needs workload testing | Needs workload testing | |
An example of a configuration for a small SAP ASE DB Server with a database size
| # of data devices | 16 | 16 | | | # of log devices | 1 | 1 | | | # of temp devices | 1 | 1 | More for SAP BW workload |
-| Operating system | Windows Server 2019 | SUSE 12 SP4/ 15 SP1 or RHEL 7.6 | |
+| Operating system | Windows Server 2019 | SLES 12 SP4/ 15 SP1 or RHEL 7.6/8.1 | |
| Disk aggregation | Storage Spaces | LVM2 | | | File system | NTFS | XFS | | Format block size | Needs workload testing | Needs workload testing | |
An example of a configuration for a small SAP ASE DB Server with a database size
| # of data devices | 32 | 32 | | | # of log devices | 1 | 1 | | | # of temp devices | 1 | 1 | More for SAP BW workload |
-| Operating system | Windows Server 2019 | SUSE 12 SP4/ 15 SP1 or RHEL 7.6 | |
+| Operating system | Windows Server 2019 | SLES 12 SP4/ 15 SP1 or RHEL 7.6/8.1 | |
| Disk aggregation | Storage Spaces | LVM2 | | | File system | NTFS | XFS | | Format block size | Needs workload testing | Needs workload testing | |
SAP Software provisioning Manager (SWPM) is giving an option to encrypt the data
- Deploy SAP ASE 16.0.03.07 or higher - Update to latest version and patches of FaultManager and SAPHostAgent-- Deploy on latest certified OS available such as Windows 2019, Suse 15.1 or Redhat 7.6 or higher
+- Deploy on latest certified OS available such as Windows 2019, SLES 15 or RHEL 8
- Use SAP certified VMs: high-memory Azure VM SKUs such as Es_v3, or M-series VM SKUs for x-large systems, are recommended - Match the disk IOPS and total VM aggregate throughput quota of the VM with the disk design. Deploy sufficient number of disks - Aggregate disks using Windows Storage Spaces or Linux LVM2 with correct stripe size and file system - Create sufficient number of devices for data, log, temp, and backup purposes - Consider using UltraDisk for x-large systems -- Run `saptune` SAP-ASE on Linux OS
+- Run `saptune` with profile SAP-ASE on SLES. Tune RHEL VMs per [69988](https://access.redhat.com/solutions/69988).
- Secure the database with DB Encryption; manually store keys in Azure Key Vault - Complete the [SAP on Azure Checklist](./sap-deployment-checklist.md) - Configure log backup and full backup
virtual-machines High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files.md
Title: Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files| Microsoft Docs description: Establish high availability for SAP NW on Azure virtual machines (VMs) RHEL with Azure NetApp Files.- tags: azure-resource-manager
-keywords: ''
Previously updated : 06/08/2022 Last updated : 08/24/2022 - # Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications
The following items are prefixed with either **[A]** - applicable to all nodes,
1. **[A]** Update the /usr/sap/sapservices file
- To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from /usr/sap/sapservices file. Do not comment out the SAP HANA instance if it will be used with HANA SR.
+ To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from the /usr/sap/sapservices file.
``` sudo vi /usr/sap/sapservices
The following items are prefixed with either **[A]** - applicable to all nodes,
# LD_LIBRARY_PATH=/usr/sap/QAS/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ERS01/exe/sapstartsrv pf=/usr/sap/QAS/ERS01/profile/QAS_ERS01_anftstsapers -D -u qasadm ```
-1. **[1]** Create the SAP cluster resources
+2. **[1]** Create the SAP cluster resources
If using enqueue server 1 architecture (ENSA1), define the resources as follows:
The following items are prefixed with either **[A]** - applicable to all nodes,
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 ```
-1. **[A]** Add firewall rules for ASCS and ERS on both nodes
+3. **[A]** Add firewall rules for ASCS and ERS on both nodes
Add the firewall rules for ASCS and ERS on both nodes. ``` # Probe Port of ASCS
Follow these steps to install an SAP application server.
## Next steps
+* To deploy a cost-optimization scenario where the PAS and AAS instances are deployed with an SAP NetWeaver HA cluster on RHEL, see [Install SAP Dialog Instance with SAP ASCS/SCS high availability VMs on RHEL](high-availability-guide-rhel-with-dialog-instance.md)
* [HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide](./high-availability-guide-rhel-multi-sid.md) * [Azure Virtual Machines planning and implementation for SAP][planning-guide] * [Azure Virtual Machines deployment for SAP][deployment-guide]
virtual-machines High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-nfs-azure-files.md
Title: Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files| Microsoft Docs description: Establish high availability for SAP NW on Azure virtual machines (VMs) RHEL with NFS on Azure Files.- tags: azure-resource-manager Previously updated : 03/28/2022 Last updated : 08/24/2022 - # High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux with NFS on Azure Files
The following items are prefixed with either **[A]** - applicable to all nodes,
1. **[A]** Update the /usr/sap/sapservices file
- To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from /usr/sap/sapservices file. Do not comment out the SAP HANA instance if it will be used with HANA SR.
+ To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from the /usr/sap/sapservices file.
```bash sudo vi /usr/sap/sapservices
Thoroughly test your Pacemaker cluster. [Execute the typical failover tests](./h
## Next steps
+* To deploy a cost-optimization scenario where the PAS and AAS instances are deployed with an SAP NetWeaver HA cluster on RHEL, see [Install SAP Dialog Instance with SAP ASCS/SCS high availability VMs on RHEL](high-availability-guide-rhel-with-dialog-instance.md)
* [HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide](./high-availability-guide-rhel-multi-sid.md) * [Azure Virtual Machines planning and implementation for SAP][planning-guide] * [Azure Virtual Machines deployment for SAP][deployment-guide]
virtual-machines High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel.md
Title: Azure VMs high availability for SAP NW on RHEL | Microsoft Docs description: Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux- tags: azure-resource-manager
-keywords: ''
Previously updated : 03/28/2022 Last updated : 08/24/2022 - # Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux
The following items are prefixed with either **[A]** - applicable to all nodes,
1. **[A]** Update the /usr/sap/sapservices file
- To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from /usr/sap/sapservices file. Do not comment out the SAP HANA instance if it will be used with HANA SR.
+ To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from the /usr/sap/sapservices file.
<pre><code> sudo vi /usr/sap/sapservices
The following items are prefixed with either **[A]** - applicable to all nodes,
# LD_LIBRARY_PATH=/usr/sap/<b>NW1</b>/ERS<b>02</b>/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/<b>NW1</b>/ERS<b>02</b>/exe/sapstartsrv pf=/usr/sap/<b>NW1</b>/ERS<b>02</b>/profile/<b>NW1</b>_ERS<b>02</b>_<b>nw1-aers</b> -D -u <b>nw1</b>adm </code></pre>
-1. **[1]** Create the SAP cluster resources
+2. **[1]** Create the SAP cluster resources
If using enqueue server 1 architecture (ENSA1), define the resources as follows:
Follow these steps to install an SAP application server.
## Next steps
+* To deploy a cost-optimization scenario where the PAS and AAS instances are deployed with an SAP NetWeaver HA cluster on RHEL, see [Install SAP Dialog Instance with SAP ASCS/SCS high availability VMs on RHEL](high-availability-guide-rhel-with-dialog-instance.md)
* [HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide](./high-availability-guide-rhel-multi-sid.md) * [Azure Virtual Machines planning and implementation for SAP][planning-guide] * [Azure Virtual Machines deployment for SAP][deployment-guide]
virtual-machines Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide-storage.md
Title: 'Azure storage types for SAP workload' description: Planning Azure storage types for SAP workloads- tags: azure-resource-manager
-keywords: ''
ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538 Last updated 11/02/2021
virtual-machines Sap Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-deployment-checklist.md
Title: SAP workload planning and deployment checklist | Microsoft Docs description: Checklist for planning SAP workload deployments to Azure and deploying the workloads- tags: azure-resource-manager
-keywords: ''
Last updated 02/02/2022 - # SAP workloads on Azure: planning and deployment checklist
virtual-machines Sap Ha Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-ha-availability-zones.md
Title: SAP workload configurations with Azure Availability Zones | Microsoft Docs description: High-availability architecture and scenarios for SAP NetWeaver using Azure Availability Zones- tags: azure-resource-manager
-keywords: ''
ms.assetid: 887caaec-02ba-4711-bd4d-204a7d16b32b Last updated 11/02/2021 - # SAP workload configurations with Azure Availability Zones
virtual-machines Sap Hana Availability Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-availability-across-regions.md
Title: SAP HANA availability across Azure regions | Microsoft Docs description: An overview of availability considerations when running SAP HANA on Azure VMs in multiple Azure regions.- tags: azure-resource-manager
-keywords: ''
Last updated 09/12/2018 - # SAP HANA availability across Azure regions
virtual-machines Sap Hana Availability One Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-availability-one-region.md
Title: SAP HANA availability within one Azure region | Microsoft Docs description: Describes SAP HANA operations on Azure native VMs in one Azure region.- tags: azure-resource-manager
-keywords: ''
Last updated 07/27/2018 - # SAP HANA availability within one Azure region
virtual-machines Sap Hana Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-availability-overview.md
Title: SAP HANA availability on Azure VMs - Overview | Microsoft Docs description: Describes SAP HANA operations on Azure native VMs.- tags: azure-resource-manager
-keywords: ''
Last updated 03/05/2018 - # SAP HANA high availability for Azure virtual machines
virtual-machines Sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-planning-supported-configurations.md
Title: 'SAP on Azure: Supported Scenarios with Azure VMs' description: Azure Virtual Machines supported scenarios with SAP workload- tags: azure-resource-manager
-keywords: 'SAP'
ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538 Last updated 02/11/2022
# SAP workload on Azure virtual machine supported scenarios Designing SAP NetWeaver, Business One, `Hybris` or S/4HANA systems architecture in Azure opens many different opportunities for various architectures and tools to use to get to a scalable, efficient, and highly available deployment. Though dependent on the operating system or DBMS used, there are restrictions. Also, not all scenarios that are supported on-premises are supported in the same way in Azure. This document leads you through the supported non-high-availability configurations and high-availability configurations and architectures using Azure VMs exclusively. For scenarios supported with [HANA Large Instances](./hana-overview-architecture.md), check the article [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md). - ## 2-Tier configuration An SAP 2-Tier configuration is considered to be built up out of a combined layer of the SAP DBMS and application layer that run on the same server or VM unit. The second tier is considered to be the user interface layer. In the case of a 2-Tier configuration, the DBMS and SAP application layer share the resources of the Azure VM. As a result, you need to configure the different components in a way that these components don't compete for resources. You also need to be careful to not oversubscribe the resources of the VM. Such a configuration does not provide any high availability, beyond the [Azure Service Level agreements](https://azure.microsoft.com/support/legal/sla/) of the different Azure components involved.
For all OS/DBMS combinations supported on Azure, this type of configuration is s
> [!NOTE] > For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as described later in this document - ## 3-Tier configuration In such configurations, you separate the SAP application layer and the DBMS layer into different VMs. You usually do that for larger systems and out of reasons of being more flexible on the resources of the SAP application layer. In the most simple setup, there is no high availability beyond the [Azure Service Level agreements](https://azure.microsoft.com/support/legal/sla/) of the different Azure components involved.
virtual-machines Sap Proximity Placement Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-proximity-placement-scenarios.md
Title: Azure proximity placement groups for SAP applications | Microsoft Docs description: Describes SAP deployment scenarios with Azure proximity placement groups- tags: azure-resource-manager
-keywords: ''
Last updated 02/07/2022 - # Azure proximity placement groups for optimal network latency with SAP applications
virtual-machines Sap Supported Product On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-supported-product-on-azure.md
Title: 'SAP on Azure: What SAP software is supported on Azure' description: Explains what SAP software is supported to be deployed on Azure- tags: azure-resource-manager
-keywords: 'SAP'
ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538 Last updated 02/02/2022
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ## Create Virtual Network Manager
+Deploy a network manager instance with the defined scope and access you need.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Select **+ Create a resource** and search for **Network Manager**. Then select **Create** to begin setting up Azure Virtual Network Manager.
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
| Region | Select the region for this deployment. Azure Virtual Network Manager can manage virtual networks in any region. The region selected is for where the Virtual Network Manager instance will be deployed. | | Description | *(Optional)* Provide a description about this Virtual Network Manager instance and the task it will be managing. | | [Scope](concept-network-manager-scope.md#scope) | Define the scope for which Azure Virtual Network Manager can manage. This example will use a subscription-level scope.
- | [Features](concept-network-manager-scope.md#features) | Select the features you want to enable for Azure Virtual Network Manager. Available features are *Connectivity*, *SecurityAdmin*, or *Select All*. </br> Connectivity - Enables the ability to create a full mesh or hub and spoke network topology between virtual networks within the scope. </br> SecurityAdmin - Enables the ability to create global network security rules. |
+ | [Features](concept-network-manager-scope.md#features) | Select the features you want to enable for Azure Virtual Network Manager. Available features are *Connectivity* and *SecurityAdmin*. </br> Connectivity - Enables the ability to create a full mesh or hub and spoke network topology between virtual networks within the scope. </br> SecurityAdmin - Enables the ability to create global network security rules. |
1. Select **Review + create** and then select **Create** once validation has passed.
-## Create three virtual networks
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+## Create virtual networks
+Create four virtual networks using the portal. This example creates virtual networks named VNetA, VNetB, VNetC, and VNetD in the West US location. Each virtual network has a **NetworkType** tag that's used for dynamic membership. If you already have virtual networks you want to create a mesh network with, add the tags listed below to your virtual networks, and then skip to the next section. A PowerShell sketch of the equivalent virtual network creation follows these steps.
-1. Select **+ Create a resource** and search for **Virtual network**. Then select **Create** to begin configuring the virtual network.
+1. From the **Home** screen, select **+ Create a resource** and search for **Virtual network**. Then select **Create** to begin configuring the virtual network.
1. On the *Basics* tab, enter or select the following information.
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
| Subnet name | default | | Subnet address space | 10.0.0.0/24 |
+1. Select the **Tags** tab and enter the following values:
+
+ :::image type="content" source="./media/create-virtual-network-manager-portal/create-vnet-tag.png" alt-text="Screenshot of create a virtual network tag page.":::
+
+ | Setting | Value |
+ |- | - |
+ | Name | Enter **NetworkType** |
+ | Value | Enter **Prod**. |
+ 1. Select **Review + create** and then select **Create** once validation has passed to deploy the virtual network.
-1. Repeat steps 2-5 to create two more virtual networks with the following information:
+1. Repeat steps 2-5 to create more virtual networks with the following information:
| Setting | Value | | - | -- | | Subscription | Select the same subscription you selected in step 3. | | Resource group | Select the **myAVNMResourceGroup**. |
- | Name | Enter **VNetB** for the second virtual network and **VNetC** for the third virtual network. |
+ | Name | Enter **VNetB**, **VNetC**, and **VNetD** for each of the three extra virtual networks. |
| Region | Region will be selected for you when you select the resource group. | | VNetB IP addresses | IPv4 address space: 10.1.0.0/16 </br> Subnet name: default </br> Subnet address space: 10.1.0.0/24| | VNetC IP addresses | IPv4 address space: 10.2.0.0/16 </br> Subnet name: default </br> Subnet address space: 10.2.0.0/24|
+ | VNetD IP addresses | IPv4 address space: 10.3.0.0/16 </br> Subnet name: default </br> Subnet address space: 10.3.0.0/24|
+ | VNetB NetworkType tag | Enter **Prod**. |
+ | VNetC NetworkType tag | Enter **Prod**. |
+ | VNetD NetworkType tag | Enter **Test**. |
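If you prefer scripting, here's a minimal Azure PowerShell sketch of the equivalent for VNetA; the region and resource names mirror the portal steps above, and the remaining virtual networks follow the same pattern:

```azurepowershell-interactive
# A minimal sketch mirroring the portal steps above: create VNetA with the
# NetworkType=Prod tag that the dynamic membership policy will match.
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'default' -AddressPrefix '10.0.0.0/24'
New-AzVirtualNetwork -Name 'VNetA' -ResourceGroupName 'myAVNMResourceGroup' `
    -Location 'westus' -AddressPrefix '10.0.0.0/16' -Subnet $subnet `
    -Tag @{ NetworkType = 'Prod' }
```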
## Create a network group
+Virtual Network Manager applies configurations to groups of VNets by placing them in network groups. Create a network group as follows:
1. Go to Azure Virtual Network Manager instance you created.
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
1. You'll see the new network group added to the *Network Groups* page. :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
-1. From the list of network groups, select **myNetworkGroup** and select **Add** under **Static membership** on the *myNetworkGroup* page.
+1. Once your network group is created, you'll add virtual networks as members. Choose one of the options: *Static membership* or *Dynamic membership* with Azure Policy.
+
+## Define membership for a mesh configuration
+Azure Virtual Network Manager offers two methods for adding membership to a network group. Static membership involves manually adding virtual networks, and dynamic membership involves using Azure Policy to dynamically add virtual networks based on conditions. Choose one of the options below for your mesh membership configuration:
+### Static membership option
+Using static membership, you'll manually add three VNets for your mesh configuration to your network group using the steps below:
+
+1. From the list of network groups, select **myNetworkGroup** and select **Add** under *Static membership* on the *myNetworkGroup* page.
:::image type="content" source="./media/create-virtual-network-manager-portal/add-static-member.png" alt-text="Screenshot of adding a virtual network to the network group.":::
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
:::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page.":::
-1. Return to the *Network groups* page, and you'll see 3 members added under **Member virtual network**.
+1. On the **Network Group** page under *Settings*, select **Group Members** to view the members you added manually.
+ :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
+
+### Dynamic membership with Azure Policy
+Using [Azure Policy](concept-azure-policy-integration.md), you'll define a condition to dynamically add three VNets for your mesh configuration to your network group using the steps below.
+
+1. From the list of network groups, select **myNetworkGroup**.
+
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/network-group-page.png" alt-text="Screenshot of the network groups page.":::
- :::image type="content" source="./media/create-virtual-network-manager-portal/list-network-members.png" alt-text="Screenshot of network manager instance page with three member virtual networks.":::
+1. On the **Overview** page, select **Create Azure Policy** under *Create policy to dynamically add members*.
-## Create a connectivity configuration
+ :::image type="content" source="media/create-virtual-network-manager-portal/define-dynamic-membership.png" alt-text="Screenshot of Create Azure Policy button.":::
+
+1. On the **Create Azure Policy** page, select or enter the following information:
+
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-conditional.png" alt-text="Screenshot of create a network group conditional statements tab.":::
+
+ | Setting | Value |
+ | - | -- |
+ | Policy name | Enter **ProdVNets** in the text box. |
+ | Scope | Select **Select Scopes** and choose your current subscription. |
+ | Criteria | |
+ | Parameter | Select **Tags** from the drop-down.|
+ | Operator | Select **Exists** from the drop-down.|
+ | Condition | Enter **NetworkType** to dynamically add the three previously created virtual networks into this network group. |
+
+1. Select **Advanced (JSON) editor** to modify the JSON code.
+1. On line 5, replace **exists** with **equals** and change the value from **true** to **"Prod"**. The sketch after these steps shows the resulting condition.
+
+    :::image type="content" source="./media/create-virtual-network-manager-portal/json-advanced-editor.png" alt-text="Screenshot of Advanced (JSON) editor.":::
+
+1. Select **Save** to deploy the group membership.
+
+1. On the *Network Group* page under **Settings**, select **Group Members** to view the membership of the group based on the conditions defined in Azure Policy.
+
+ :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
+
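For reference, here's a minimal sketch of what the edited condition boils down to when expressed as the JSON string you'd pass to `New-AzPolicyDefinition` in a scripted setup; the variable name is an assumption:

```azurepowershell-interactive
# A minimal sketch: the dynamic-membership condition after the JSON edit,
# matching virtual networks whose NetworkType tag equals 'Prod'.
$conditionalMembership = '{
  "allOf": [
    {
      "field": "type",
      "equals": "Microsoft.Network/virtualNetworks"
    },
    {
      "field": "tags[''NetworkType'']",
      "equals": "Prod"
    }
  ]
}'
```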
+## Create a configuration
+Now that the network group is created and has the correct VNets, create a mesh network topology configuration. Replace <subscription_id> with your subscription ID where it appears, and follow the steps below:
1. Select **Configurations** under *Settings*, then select **+ Create**.
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
1. On the *Topology* tab, select the *Mesh* topology if not selected, and leave the **Enable mesh connectivity across regions** unchecked. Cross-region connectivity isn't required for this set up since all the virtual networks are in the same region. + :::image type="content" source="./media/create-virtual-network-manager-portal/topology-configuration.png" alt-text="Screenshot of topology selection for network group connectivity configuration."::: 1. Select **+ Add** and then select the network group you created in the last section. Select **Select** to add the network group to the configuration.
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
:::image type="content" source="./media/create-virtual-network-manager-portal/create-connectivity-configuration.png" alt-text="Screenshot of create a connectivity configuration.":::
-1. Once the deployment completes, select **Refresh** and you'll see the new connectivity configuration added to the *Configurations* page.
+1. Once the deployment completes, select **Refresh**, and you'll see the new connectivity configuration added to the *Configurations* page.
:::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration-list.png" alt-text="Screenshot of connectivity configuration list.":::
To have your configurations applied to your environment, you'll need to commit t
:::image type="content" source="./media/create-virtual-network-manager-portal/deployment-in-progress.png" alt-text="Screenshot of configuration deployment in progress status.":::
-## Confirm configuration deployment
+## Verify configuration deployment
+Use the **Network Manager** section for each virtual machine to verify whether configuration was deployed in the steps below:
1. Select **Refresh** on the *Deployments* page to see the updated status of the configuration that you committed.
To have your configurations applied to your environment, you'll need to commit t
:::image type="content" source="./media/create-virtual-network-manager-portal/vnet-configuration-association.png" alt-text="Screenshot of connectivity configuration associated with VNetA virtual network.":::
-1. You can also confirm the same for **VNetB** and **VNetC**.
+1. You can also confirm the same for **VNetB**, **VNetC**, and **VNetD**.
## Clean up resources
If you no longer need Azure Virtual Network Manager, you'll need to make sure al
1. To remove all configurations from a region, start in the virtual network manager and select **Deploy configurations**. Select the following settings:
- :::image type="content" source="./media/create-virtual-network-manager-portal/none-configuration.png" alt-text="Screenshot of deploy a none connectivity configuration settings.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/none-configuration.png" alt-text="Screenshot of deploying a none connectivity configuration.":::
| Setting | Value | | - | -- |
If you no longer need Azure Virtual Network Manager, you'll need to make sure al
| Delete option | Select **Force delete the resource and all dependent resources**. | | Confirm deletion | Enter the name of the network manager. In this example, it's **myAVNM**. |
-1. To delete the resource group, locate the resource group and select the **Delete resource group**. Confirm that you want to delete by entering the name of the resource group, then select **Delete**
+1. To delete the resource group and virtual networks, locate the resource group and select **Delete resource group**. Confirm the deletion by entering the name of the resource group, and then select **Delete**.
## Next steps After you've created the Azure Virtual Network Manager, continue on to learn how to block network traffic by using a security admin configuration: > [!div class="nextstepaction"]
-> [Block network traffic with security admin rules](how-to-block-network-traffic-portal.md)
+> [Block network traffic with security admin rules](how-to-block-network-traffic-portal.md)
+
+[Create a secured hub and spoke network](tutorial-create-secured-hub-and-spoke.md)
virtual-network-manager Create Virtual Network Manager Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-powershell.md
$virtualnetworkC | Set-AzVirtualNetwork
> [!NOTE] > It is recommended to scope all of your conditionals to only scan for type `Microsoft.Network/virtualNetworks` for efficiency.
-```azurepowershell-interactive
-$conditionalMembership = '{
- "allof":[
- {
- "field": "type",
- "equals": "Microsoft.Network/virtualNetwork"
- }
- {
- "field": "name",
- "contains": "VNet"
- }
- ]
-}'
+ ```azurepowershell-interactive
+ $conditionalMembership = '{
+ "allof":[
+ {
+ "field": "type",
+ "equals": "Microsoft.Network/virtualNetwork"
+ }
+ {
+ "field": "name",
+ "contains": "VNet"
+ }
+ ]
+ }'
``` 1. Create the Azure Policy definition with New-AzPolicyDefinition, using the conditional statement defined in the last step.
$conditionalMembership = '{
> [!IMPORTANT] > Policy resources must have a scope unique name. It is recommended to use a consistent hash of the network group. Below is an approach using the ARM Templates uniqueString() implementation.
-```azurepowershell-interactive
- function Get-UniqueString ([string]$id, $length=13)
- {
- $hashArray = (new-object System.Security.Cryptography.SHA512Managed).ComputeHash($id.ToCharArray())
- -join ($hashArray[1..$length] | ForEach-Object { [char]($_ % 26 + [byte][char]'a') })
- }
-```
-
-```azurepowershell-interactive
-$defn = @{
- Name = Get-UniqueString $networkgroup.Id
- Mode = 'Microsoft.Network.Data'
- Policy = $conditionalMembership
-}
-
-$policyDefinition = New-AzPolicyDefinition $defn
-```
+ ```azurepowershell-interactive
+ function Get-UniqueString ([string]$id, $length=13)
+ {
+ $hashArray = (new-object System.Security.Cryptography.SHA512Managed).ComputeHash($id.ToCharArray())
+ -join ($hashArray[1..$length] | ForEach-Object { [char]($_ % 26 + [byte][char]'a') })
+ }
+ ```
+
+ ```azurepowershell-interactive
+ $defn = @{
+ Name = Get-UniqueString $networkgroup.Id
+ Mode = 'Microsoft.Network.Data'
+ Policy = $conditionalMembership
+ }
+
+ $policyDefinition = New-AzPolicyDefinition @defn
+ ```
1. Assign the policy definition at a scope within your network manager's scope for it to begin taking effect.
$policyDefinition = New-AzPolicyDefinition $defn
PolicyDefinition = $policyDefinition }
- $policyAssignment = New-AzPolicyAssignment $assgn
+ $policyAssignment = New-AzPolicyAssignment @assgn
``` ## Create a configuration
virtual-network Add Dual Stack Ipv6 Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-powershell.md
+
+ Title: Add a dual-stack network to an existing virtual machine - Azure PowerShell
+
+description: Learn how to add a dual-stack network to an existing virtual machine using Azure PowerShell.
+++++ Last updated : 08/24/2022+
+ms.devlang:
++
+# Add a dual-stack network to an existing virtual machine using Azure PowerShell
+
+In this article, you'll add IPv6 support to an existing virtual network. You'll configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network will support private IPv6 addresses. The existing virtual machine network configuration will contain a public and private IPv4 and IPv6 address.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+
+- Azure PowerShell installed locally or Azure Cloud Shell
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+- An existing virtual network, public IP address, and virtual machine in your subscription that are configured for IPv4 support only. For more information about creating a virtual network, public IP address, and virtual machine, see [Quickstart: Create a Linux virtual machine in Azure with PowerShell](/azure/virtual-machines/linux/quick-create-powershell).
+
+ - The example virtual network used in this article is named **myVNet**. Replace this value with the name of your virtual network.
+
+ - The example virtual machine used in this article is named **myVM**. Replace this value with the name of your virtual machine.
+
+ - The example public IP address used in this article is named **myPublicIP**. Replace this value with the name of your public IP address.
+
+## Add IPv6 to virtual network
+
+In this section, you'll add an IPv6 address space and subnet to your existing virtual network.
+
+Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to update the virtual network.
+
+```azurepowershell-interactive
+## Place your virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place address space into a variable. ##
+$IPAddressRange = '2404:f800:8000:122::/63'
+
+## Add the address space to the virtual network configuration. ##
+$vnet.AddressSpace.AddressPrefixes.Add($IPAddressRange)
+
+## Save the configuration to the virtual network. ##
+Set-AzVirtualNetwork -VirtualNetwork $vnet
+```
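+
+To confirm the update applied (an optional check that isn't part of the original steps), re-read the virtual network and list its address prefixes. The output should include the IPv6 range you added:
+
+```azurepowershell-interactive
+## Optional: verify the address space now includes the IPv6 range. ##
+$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup'
+$vnet.AddressSpace.AddressPrefixes
+```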
+
+Use [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) to add the IPv6 address prefix to the existing subnet, making it a dual-stack subnet.
+
+```azurepowershell-interactive
+## Place your virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Create the subnet configuration. ##
+$sub = @{
+ Name = 'myBackendSubnet'
+ AddressPrefix = '10.0.0.0/24','2404:f800:8000:122::/64'
+ VirtualNetwork = $vnet
+}
+Set-AzVirtualNetworkSubnetConfig @sub
+
+## Save the configuration to the virtual network. ##
+Set-AzVirtualNetwork -VirtualNetwork $vnet
+```
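+
+As another optional check (not part of the original steps), verify that the subnet now carries both the IPv4 and IPv6 prefixes:
+
+```azurepowershell-interactive
+## Optional: verify the subnet is dual stack. ##
+$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup'
+$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'myBackendSubnet' -VirtualNetwork $vnet
+$subnet.AddressPrefix
+```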
+
+## Create IPv6 public IP address
+
+In this section, you'll create an IPv6 public IP address for the virtual machine.
+
+Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create the public IP address.
+
+```azurepowershell-interactive
+$ip6 = @{
+ Name = 'myPublicIP-IPv6'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ IpAddressVersion = 'IPv6'
+ Zone = 1,2,3
+}
+New-AzPublicIpAddress @ip6
+```
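+
+To see the IPv6 address that was allocated (an optional check, not part of the original steps), query the new public IP resource:
+
+```azurepowershell-interactive
+## Optional: display the allocated IPv6 address. ##
+$pip = Get-AzPublicIpAddress -Name 'myPublicIP-IPv6' -ResourceGroupName 'myResourceGroup'
+$pip.IpAddress
+```
+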
+## Add IPv6 configuration to virtual machine
+
+Use [New-AzNetworkInterfaceIpConfig](/powershell/module/az.network/new-aznetworkinterfaceipconfig) to create the IPv6 configuration for the NIC. The **`-Name`** used in the example is **myvm569**. Replace this value with the name of the network interface in your virtual machine.
+
+```azurepowershell-interactive
+## Place your virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place your virtual network subnet into a variable. ##
+$sub = @{
+ Name = 'myBackendSubnet'
+ VirtualNetwork = $vnet
+}
+$subnet = Get-AzVirtualNetworkSubnetConfig @sub
+
+## Place the IPv6 public IP address you created previously into a variable. ##
+$pip = @{
+ Name = 'myPublicIP-IPv6'
+ ResourceGroupName = 'myResourceGroup'
+}
+$publicIP = Get-AzPublicIPAddress @pip
+
+## Place the network interface into a variable. ##
+$net = @{
+ Name = 'myvm569'
+ ResourceGroupName = 'myResourceGroup'
+}
+$nic = Get-AzNetworkInterface @net
+
+## Create the configuration for the network interface. ##
+$ipc = @{
+ Name = 'Ipv6config'
+ Subnet = $subnet
+ PublicIpAddress = $publicIP
+ PrivateIpAddressVersion = 'IPv6'
+}
+$ipconfig = New-AzNetworkInterfaceIpConfig @ipc
+
+## Add the IP configuration to the network interface. ##
+$nic.IpConfigurations.Add($ipconfig)
+
+## Save the configuration to the network interface. ##
+$nic | Set-AzNetworkInterface
+```
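+
+As a final optional check (not part of the original steps), list the NIC's IP configurations to confirm that both the IPv4 and IPv6 configurations are present:
+
+```azurepowershell-interactive
+## Optional: confirm both IP configurations on the network interface. ##
+$nic = Get-AzNetworkInterface -Name 'myvm569' -ResourceGroupName 'myResourceGroup'
+$nic.IpConfigurations | Select-Object Name, PrivateIpAddress, PrivateIpAddressVersion
+```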
+
+## Next steps
+
+In this article, you learned how to add a dual-stack network to an existing virtual machine.
+
+For more information about IPv6 and IP addresses in Azure, see:
+
+- [Overview of IPv6 for Azure Virtual Network.](ipv6-overview.md)
+
+- [What is Azure Virtual Network IP Services?](ip-services-overview.md)
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
Each route contains an address prefix and next hop type. When traffic leaving a
|Default|172.16.0.0/12 |None |
|Default|192.168.0.0/16 |None |
|Default|100.64.0.0/10 |None |
-|Default|172.16.0.0/12 |None |
The next hop types listed in the previous table represent how Azure routes traffic destined for the address prefix listed. Explanations for the next hop types follow:
virtual-wan Howto Openvpn Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-openvpn-clients.md
Title: 'Configure OpenVPN clients for Azure Virtual WAN' description: Learn how to configure OpenVPN clients for Azure Virtual WAN. This article includes Windows, Mac, iOS, and Linux client configuration steps.- Previously updated : 04/27/2021 Last updated : 08/24/2022 # Configure an OpenVPN client for Azure Virtual WAN
-This article helps you configure **OpenVPN &reg; Protocol** clients. You can also use the Azure VPN Client for Windows 10 to connect via OpenVPN protocol. For more information, see [Configure a VPN client for P2S OpenVPN connections](openvpn-azure-ad-client.md).
+This article helps you configure **OpenVPN &reg; Protocol** clients. You can also use the Azure VPN Client to connect via OpenVPN protocol. For more information, see [Configure a VPN client for P2S OpenVPN connections](openvpn-azure-ad-client.md).
## Before you begin
vpn-gateway About Vpn Profile Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-vpn-profile-download.md
Title: 'P2S VPN client profile config files - Azure AD authentication'
+ Title: 'P2S VPN client profile configuration files - Azure AD authentication'
description: Learn how to generate P2S VPN client profile configuration files for Azure AD authentication. Previously updated : 05/04/2022 Last updated : 08/24/2022
-# Generate P2S Azure VPN Client profile config files - Azure AD authentication
+# Generate P2S Azure VPN Client profile configuration files - Azure AD authentication
-After you install the Azure VPN Client, you configure the VPN client profile. Client profile config files contain information that's necessary to configure a VPN connection. This article helps you obtain and understand the information needed to configure an Azure VPN Client profile for Azure VPN Gateway point-to-site configurations that use Azure AD authentication.
+This article helps you generate and extract VPN client profile configuration files. Client profile configuration files contain information that's used to configure your VPN client. The sections in this article explain the information needed to configure the Azure VPN Client profile for Azure VPN Gateway point-to-site configurations that use Azure AD authentication.
## <a name="generate"></a>Generate profile files
-You can generate VPN client profile configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file.
+You can generate VPN client profile configuration files either with PowerShell or the Azure portal. Either method returns the same zip file.
### Portal
-1. In the Azure portal, navigate to the virtual network gateway for the virtual network that you want to connect to.
+1. In the Azure portal, go to the virtual network gateway for the virtual network that you want to connect to.
1. On the virtual network gateway page, select **Point-to-site configuration**. 1. At the top of the point-to-site configuration page, select **Download VPN client**. It takes a few minutes for the client configuration package to generate. 1. Your browser indicates that a client configuration zip file is available. It has the same name as your gateway. Unzip the file to view the folders.
To generate using PowerShell, you can use the following example:
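A minimal sketch of that PowerShell example (assuming a gateway named **VNet1GW** in a resource group named **TestRG**; substitute your own values):

```azurepowershell-interactive
# Generate the client configuration package for the gateway.
$profile = New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapTls"

# The returned object exposes a SAS URL; download the zip file from it.
$profile.VPNProfileSASUrl
```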
Extract the zip file. The file contains the following folders:
-* **AzureVPN**: The AzureVPN folder contains the **Azurevpnconfig.xml** file.
+* **AzureVPN**: The AzureVPN folder contains the **Azurevpnconfig.xml** file that is used to configure the Azure VPN Client.
* **Generic**: The generic folder contains the public server certificate and the VpnSettings.xml file. The VpnSettings.xml file contains information needed to configure a generic client. ## <a name="get"></a>Retrieve file information
-In the **AzureVPN** folder, navigate to the ***azurevpnconfig.xml*** file and open it with Notepad. Make a note of the text between the following tags. You may need this information later when configuring the Azure VPN Client.
+In the **AzureVPN** folder, go to the ***azurevpnconfig.xml*** file and open it with Notepad. Make a note of the text between the following tags. This information is used later when configuring the Azure VPN Client.
``` <audience> </audience>
In the **AzureVPN** folder, navigate to the ***azurevpnconfig.xml*** file and op
When you add a connection, use the information you collected in the previous step for the profile details page. The fields correspond to the following information: * **Audience:** Identifies the recipient resource the token is intended for.
-* **Issuer:** Identifies the Security Token Service (STS) that emitted the token, as well as the Azure AD tenant.
+* **Issuer:** Identifies the Security Token Service (STS) that emitted the token, and the Azure AD tenant.
* **Tenant:** Contains an immutable, unique identifier of the directory tenant that issued the token. * **FQDN:** The fully qualified domain name (FQDN) on the Azure VPN gateway. * **ServerSecret:** The VPN gateway preshared key. ## Next steps
+Configure VPN clients.
+
+* [Windows - Azure VPN Client - Azure AD](openvpn-azure-ad-client.md).
+* [macOS - Azure VPN Client - Azure AD](openvpn-azure-ad-client-mac.md).
+ For more information about point-to-site, see [About point-to-site](point-to-site-about.md).
web-application-firewall Waf Front Door Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-tuning.md
The Microsoft-managed Default Rule Set is based on the [OWASP Core Rule Set (CRS)](https://github.com/SpiderLabs/owasp-modsecurity-crs/tree/v3.1/dev) and includes Microsoft Threat Intelligence Collection rules. WAF rules often need to be tuned to suit the specific needs of the application or organization using the WAF. Tuning is commonly achieved by defining rule exclusions, creating custom rules, and even disabling rules that may be causing issues or false positives. There are a few things you can do if requests that should pass through your Web Application Firewall (WAF) are blocked.
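As an illustration of that tuning (a hedged sketch using the Az.FrontDoor PowerShell cmdlets; the policy name **myWafPolicy**, resource group **myRG**, and the /health path are placeholders, not values from this article), a custom rule that allows requests to a health probe path might look like:

```azurepowershell-interactive
# Match requests whose URI contains the health probe path.
$matchCondition = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestUri `
    -OperatorProperty Contains -MatchValue "/health"

# Wrap the condition in a custom rule that allows the traffic.
$allowRule = New-AzFrontDoorWafCustomRuleObject -Name "AllowHealthProbe" `
    -RuleType MatchRule -MatchCondition $matchCondition -Action Allow -Priority 10

# Attach the custom rule to an existing WAF policy.
Update-AzFrontDoorWafPolicy -Name "myWafPolicy" -ResourceGroupName "myRG" -Customrule $allowRule
```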
+> [!Note]
+>
+> The Managed Rule Set isn't available for the Azure Front Door Standard SKU. For more information about the different tiers, see [Feature comparison between tiers](/azure/frontdoor/standard-premium/tier-comparison#feature-comparison-between-tiers).
+ First, ensure you've read the [Front Door WAF overview](afds-overview.md) and the [WAF Policy for Front Door](waf-front-door-create-portal.md) documents. Also, make sure you've enabled [WAF monitoring and logging](waf-front-door-monitor.md). These articles explain how the WAF functions, how the WAF rule sets work, and how to access WAF logs. ## Understanding WAF logs
web-application-firewall Web Application Firewall Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-logs.md
Activity logging is automatically enabled for every Resource Manager resource. Y
5. Type a name for the settings, confirm the settings, and select **Save**.
-### Activity log
+## Activity log
Azure generates the activity log by default. The logs are preserved for 90 days in the Azure event logs store. Learn more about these logs by reading the [View events and activity log](../../azure-monitor/essentials/activity-log.md) article.
-### Access log
+## Access log
The access log is generated only if you've enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. Each access of Application Gateway is logged in JSON format, as shown in the following example for v1:
For Application Gateway and WAF v2, the logs show a little more information:
} ```
-### Performance log
+## Performance log
The performance log is generated only if you have enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. The performance log data is generated in 1-minute intervals. It is available only for the v1 SKU. For the v2 SKU, use [Metrics](../../application-gateway/application-gateway-metrics.md) for performance data. The following data is logged:
The performance log is generated only if you have enabled it on each Application
> [!NOTE] > Latency is calculated from the time when the first byte of the HTTP request is received to the time when the last byte of the HTTP response is sent. It's the sum of the Application Gateway processing time plus the network cost to the back end, plus the time that the back end takes to process the request.
-### Firewall log
+## Firewall log
The firewall log is generated only if you have enabled it for each application gateway, as detailed in the preceding steps. This log also requires that the web application firewall is configured on an application gateway. The data is stored in the storage account that you specified when you enabled the logging. The following data is logged:
The firewall log is generated only if you have enabled it for each application g
```
-### View and analyze the activity log
+## View and analyze the activity log
You can view and analyze activity log data by using any of the following methods: * **Azure tools**: Retrieve information from the activity log through Azure PowerShell, the Azure CLI, the Azure REST API, or the Azure portal. Step-by-step instructions for each method are detailed in the [Activity operations with Resource Manager](../../azure-monitor/essentials/activity-log.md) article. * **Power BI**: If you don't already have a [Power BI](https://powerbi.microsoft.com/pricing) account, you can try it for free. By using the [Power BI template apps](/power-bi/service-template-apps-overview), you can analyze your data.
-### View and analyze the access, performance, and firewall logs
+## View and analyze the access, performance, and firewall logs
[Azure Monitor logs](../../azure-monitor/insights/azure-networking-analytics.md) can collect the counter and event log files from your Blob storage account. It includes visualizations and powerful search capabilities to analyze your logs.